* [PATCH 0/8] start cleanup of rte_flow_item_*
@ 2022-10-25 21:44 Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 1/8] ethdev: use Ethernet protocol struct for flow matching Thomas Monjalon
                   ` (13 more replies)
  0 siblings, 14 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, andrew.rybchenko

There was a plan to have the structures from lib/net/ at the beginning
of the corresponding flow item structures.
Unfortunately this plan has not been followed through so far.
This series is a step to make the most used items
compliant with the inheritance design described above.
The old API is kept in an anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.
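
As an illustration, the migration pattern looks like this minimal
sketch (the authoritative definition lands in lib/ethdev/rte_flow.h;
the new field names below match the hunks in patch 1):

	/* The flow item starts with the protocol header from lib/net/,
	 * while the legacy field names remain reachable through an
	 * anonymous union, so existing code keeps compiling. */
	struct rte_flow_item_eth {
		union {
			struct {
				/* legacy fields, to be removed later */
				struct rte_ether_addr dst; /* destination MAC */
				struct rte_ether_addr src; /* source MAC */
				rte_be16_t type; /* EtherType or TPID */
			};
			struct rte_ether_hdr hdr; /* from lib/net/rte_ether.h */
		};
		uint32_t has_vlan:1; /* packet header has at least one VLAN */
		uint32_t reserved:31;
	};

Since struct rte_ether_hdr carries dst_addr, src_addr and ether_type
in the same order, the old and new names alias the same bytes.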


Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 172 ++++++++++----------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |  34 +++-
 drivers/net/bnxt/bnxt_flow.c             |  54 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 ++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 +++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 +++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 194 ++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  46 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/sfc/sfc_flow.c               |  52 +++---
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 +++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 117 +++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 +++-
 43 files changed, 926 insertions(+), 883 deletions(-)

-- 
2.36.1



* [PATCH 1/8] ethdev: use Ethernet protocol struct for flow matching
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 2/8] net: add smaller fields for VXLAN Thomas Monjalon
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, andrew.rybchenko, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Chas Williams,
	Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Jiawen Wu,
	Jian Wang

As announced in the deprecation notice, flow item structures
should reuse the protocol header definitions from the directory lib/net/.
The Ethernet header (including VLAN) structures are now used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the lib/net/ directory yet.
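
For example (illustrative only, reusing the names visible in the hunks
below), matching IPv4 by EtherType moves from the legacy field to the
embedded header:

	struct rte_flow_item_eth eth_spec = {
		/* was: .type = RTE_BE16(RTE_ETHER_TYPE_IPV4), */
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_eth eth_mask = {
		/* was: .type = RTE_BE16(0xffff), */
		.hdr.ether_type = RTE_BE16(0xffff),
	};

Both spellings address the same storage while the anonymous union is
kept.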

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |  38 ++++--
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          | 100 ++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 36 files changed, 590 insertions(+), 571 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f..b7f51030a1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 64297992d2..68fb4b3fe4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3599,19 +3599,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3632,7 +3632,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3640,7 +3640,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3648,7 +3648,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3656,7 +3656,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3664,7 +3664,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7402,10 +7402,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7417,9 +7417,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7537,10 +7537,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7550,9 +7550,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7613,10 +7613,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7643,22 +7643,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7677,10 +7677,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7707,7 +7707,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7731,10 +7731,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7783,22 +7783,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7837,8 +7837,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7878,22 +7878,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7924,10 +7924,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7977,22 +7977,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8031,8 +8031,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8074,22 +8074,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 565868aeea..4323681b86 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type field``.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05cacb3ea8..368b857e20 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,14 +58,36 @@ Deprecation Notices
   to using the general ``rte_flow_modify_field`` action.
 
 * ethdev: The flow API matching pattern structures, ``struct rte_flow_item_*``,
-  should start with relevant protocol header.
-  Some matching pattern structures implements this by duplicating protocol header
-  fields in the struct. To clarify the intention and to be sure protocol header
-  is intact, will replace those fields with relevant protocol header struct.
-  In v21.02 both individual protocol header fields and the protocol header struct
-  will be added as union, target is switch usage to the protocol header by time.
-  In v21.11 LTS, protocol header fields will be cleaned and only protocol header
-  struct will remain.
+  should start with relevant protocol header structure from lib/net/.
+  The individual protocol header fields and the protocol header struct
+  may be kept together in a union as a first migration step.
+  In future (target is DPDK 23.11), the protocol header fields will be cleaned
+  and only protocol header struct will remain.
+
+  These items are not compliant (not including struct from lib/net/):
+  - ``rte_flow_item_ah``
+  - ``rte_flow_item_arp_eth_ipv4``
+  - ``rte_flow_item_e_tag``
+  - ``rte_flow_item_geneve``
+  - ``rte_flow_item_geneve_opt``
+  - ``rte_flow_item_gre``
+  - ``rte_flow_item_gtp``
+  - ``rte_flow_item_icmp6``
+  - ``rte_flow_item_icmp6_nd_na``
+  - ``rte_flow_item_icmp6_nd_ns``
+  - ``rte_flow_item_icmp6_nd_opt``
+  - ``rte_flow_item_icmp6_nd_opt_sla_eth``
+  - ``rte_flow_item_icmp6_nd_opt_tla_eth``
+  - ``rte_flow_item_igmp``
+  - ``rte_flow_item_ipv6_ext``
+  - ``rte_flow_item_l2tpv3oip``
+  - ``rte_flow_item_mpls``
+  - ``rte_flow_item_nsh``
+  - ``rte_flow_item_nvgre``
+  - ``rte_flow_item_pfcp``
+  - ``rte_flow_item_pppoe``
+  - ``rte_flow_item_pppoe_proto_id``
+  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460c..8f66049340 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c..2928598ced 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 4081b21338..47b6a930a9 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6..f5787c247f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e..eec7e60650 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f42..2ff1a98fda 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b775310651..ea9b290e1c 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfe..e6c9ad442a 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af847..90027dc676 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If protocol is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07..d1a564a163 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c39..ef1832982d 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c..0acbd5a061 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index a1ff85fceb..3f6285720f 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -990,7 +990,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1037,7 +1037,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
 
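The queue-region parser above requires (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) == 7 and later extracts the priority the same way. This works because an 802.1Q TCI packs PCP:3, DEI:1, VID:12 from the most significant bit down, so a right shift by 13 isolates PCP, and a mask whose three PCP bits are all set reads back as 7. A standalone sketch of the decomposition (helper names are illustrative, not DPDK API):

#include <stdint.h>
#include <stdio.h>

/* 802.1Q TCI layout, MSB first: | PCP:3 | DEI:1 | VID:12 | */
static unsigned int tci_pcp(uint16_t tci_cpu) { return tci_cpu >> 13; }
static unsigned int tci_dei(uint16_t tci_cpu) { return (tci_cpu >> 12) & 0x1; }
static unsigned int tci_vid(uint16_t tci_cpu) { return tci_cpu & 0x0fff; }

int main(void)
{
	uint16_t tci = (5u << 13) | (0u << 12) | 100u; /* PCP=5, DEI=0, VID=100 */

	printf("pcp=%u dei=%u vid=%u\n",
	       tci_pcp(tci), tci_dei(tci), tci_vid(tci));
	/* a mask of 0xe000 covers exactly the PCP bits: 0xe000 >> 13 == 7 */
	return (0xe000 >> 13) == 7 ? 0 : 1;
}
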
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f80873925..a6c88cb55b 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 3be75923a5..4760b9bae3 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -189,7 +189,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -197,12 +197,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -419,10 +419,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index afd7f8f467..259594c3df 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1690,9 +1690,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0..f2ddbd7b9b 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba9407..5d297afc29 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a16..d84061340e 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539..b677a0d613 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341..ee56d0f43d 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
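The QinQ parser above spreads two 12-bit VIDs across three key bytes: the outer VID lands in key[0] plus the high nibble of key[1], the inner VID in the low nibble of key[1] plus key[2]. A standalone sketch of the packing with the expected layout spelled out (the helper name is illustrative):

#include <stdint.h>

/* Pack two 12-bit VIDs into three key bytes:
 * key[0] = outer[11:4]
 * key[1] = outer[3:0] : inner[11:8]
 * key[2] = inner[7:0]
 */
static void pack_qinq_key(uint8_t key[3], uint16_t outer_tci, uint16_t inner_tci)
{
	key[0] = (outer_tci & 0xff0) >> 4;
	key[1] = (uint8_t)(((outer_tci & 0x00f) << 4) | ((inner_tci & 0xf00) >> 8));
	key[2] = inner_tci & 0x0ff;
}

int main(void)
{
	uint8_t key[3];

	pack_qinq_key(key, 0xabc, 0x123); /* expect ab c1 23 */
	return (key[0] == 0xab && key[1] == 0xc1 && key[2] == 0x23) ? 0 : 1;
}
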
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 1250c2dc12..6bcd4f7126 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
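The ixgbe tunnel path above accepts only per-byte all-or-nothing destination MAC masks and folds them into a bitmap, one bit per address byte (mac_addr_byte_mask |= 0x1 << j). A standalone sketch of that folding, with the rejection of partially-masked bytes made explicit (names are illustrative):

#include <stdint.h>

#define MAC_LEN 6

/* Return a bitmap with bit j set when mask byte j is 0xFF,
 * or -1 when any byte is partially masked. */
static int mac_mask_to_bitmap(const uint8_t mask[MAC_LEN])
{
	int bitmap = 0;
	int j;

	for (j = 0; j < MAC_LEN; j++) {
		if (mask[j] == 0xFF)
			bitmap |= 0x1 << j;
		else if (mask[j] != 0)
			return -1; /* per-byte all-or-nothing only */
	}
	return bitmap;
}

int main(void)
{
	const uint8_t full[MAC_LEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
	const uint8_t oui[MAC_LEN] = { 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00 };

	return (mac_mask_to_bitmap(full) == 0x3F &&
		mac_mask_to_bitmap(oui) == 0x07) ? 0 : 1;
}
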
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81..8ef9fd2db4 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
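The mask initializers above, such as .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", lean on a C-only rule: a string literal may initialize a character array that has no room for its terminating null, so six \xff escapes exactly fill the six-byte address (the same line would be rejected by a C++ compiler). A minimal demonstration with a local stand-in struct:

#include <stdint.h>
#include <string.h>

struct mock_addr { uint8_t addr_bytes[6]; }; /* local stand-in */

int main(void)
{
	/* Six bytes of 0xff; the literal's trailing '\0' is dropped,
	 * which C permits for character arrays (C11 6.7.9p14). */
	struct mock_addr bcast = { .addr_bytes = "\xff\xff\xff\xff\xff\xff" };
	const uint8_t ref[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

	return memcmp(bcast.addr_bytes, ref, sizeof(ref)) == 0 ? 0 : 1;
}
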
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index e4744b0a67..0f88fa41d9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -276,13 +276,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2328,9 +2328,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2390,8 +2390,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2419,7 +2419,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2439,8 +2439,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7669,10 +7669,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 91f287af5c..4987631e79 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -644,17 +644,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2303,8 +2303,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2332,7 +2332,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2871,9 +2871,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2895,23 +2895,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -2961,8 +2961,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -2974,8 +2974,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7076,10 +7076,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7095,10 +7095,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8356,9 +8356,9 @@ flow_dv_translate_item_eth(void *matcher, void *key,
 	const struct rte_flow_item_eth *eth_m = item->mask;
 	const struct rte_flow_item_eth *eth_v = item->spec;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_m;
@@ -8380,17 +8380,17 @@ flow_dv_translate_item_eth(void *matcher, void *key,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	}
 	memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16),
-	       &eth_m->dst, sizeof(eth_m->dst));
+	       &eth_m->hdr.dst_addr, sizeof(eth_m->hdr.dst_addr));
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16),
-	       &eth_m->src, sizeof(eth_m->src));
+	       &eth_m->hdr.src_addr, sizeof(eth_m->hdr.src_addr));
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.src_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8399,10 +8399,10 @@ flow_dv_translate_item_eth(void *matcher, void *key,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
+	if (eth_m->hdr.ether_type == 0xFFFF) {
 		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
 		MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1);
-		switch (eth_v->type) {
+		switch (eth_v->hdr.ether_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
 			return;
@@ -8432,9 +8432,9 @@ flow_dv_translate_item_eth(void *matcher, void *key,
 		}
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype,
-		 rte_be_to_cpu_16(eth_m->type));
+		 rte_be_to_cpu_16(eth_m->hdr.ether_type));
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8478,7 +8478,7 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 		 */
 		if (vlan_v)
 			dev_flow->handle->vf_vlan.tag =
-					rte_be_to_cpu_16(vlan_v->tci) & 0x0fff;
+					rte_be_to_cpu_16(vlan_v->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8492,8 +8492,8 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 		return;
 	if (!vlan_m)
 		vlan_m = &rte_flow_item_vlan_mask;
-	tci_m = rte_be_to_cpu_16(vlan_m->tci);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_m = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12);
@@ -8504,8 +8504,8 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		switch (vlan_v->inner_type) {
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		switch (vlan_v->hdr.eth_proto) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1);
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8529,9 +8529,9 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto));
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
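flow_dv_translate_item_eth above ANDs each spec byte into its mask byte ("the value must be in the range of the mask") so that spec bits an application left set outside the mask never reach the hardware matcher; the mlx4 and verbs paths in this series do the same for their val/mask pairs. A generic sketch of that normalization:

#include <stddef.h>
#include <stdint.h>

/* Clear any spec bits the mask does not cover, in place. */
static void normalize_spec(uint8_t *spec, const uint8_t *mask, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		spec[i] &= mask[i];
}

int main(void)
{
	uint8_t spec[2] = { 0xab, 0xcd };
	const uint8_t mask[2] = { 0xf0, 0x00 };

	normalize_spec(spec, mask, sizeof(spec));
	/* only the masked high nibble of spec[0] survives: 0xa0 0x00 */
	return (spec[0] == 0xa0 && spec[1] == 0x00) ? 0 : 1;
}
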
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fd902078f8..a8b84a2119 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -430,16 +430,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -515,11 +515,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -528,7 +528,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1268,10 +1268,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1291,10 +1291,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c68b32cf14..be35fc3db2 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1294,19 +1294,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1367,9 +1367,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1378,9 +1378,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1392,7 +1392,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1427,14 +1427,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c4..e74a5f83f5 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b5..f098edc6eb 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da95..710d04be13 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe059..ed4d42f92f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8..7ef52d0b0f 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.36.1



* [PATCH 2/8] net: add smaller fields for VXLAN
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 1/8] ethdev: use Ethernet protocol struct for flow matching Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 3/8] ethdev: use VXLAN protocol struct for flow matching Thomas Monjalon
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, andrew.rybchenko, Olivier Matz

The VXLAN and VXLAN-GPE headers were mixing the reserved fields
with other fields in big uint32_t struct members.

More precise definitions are added as a union of the old ones.

The new struct members are smaller in size and have shorter names.
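
For example, filling the header for a given VNI becomes simpler with
the byte-wise fields (a minimal sketch; the helper names below are
hypothetical and not part of this patch):

#include <string.h>
#include <rte_byteorder.h>
#include <rte_vxlan.h>

/* old fields: VNI shifted into a big-endian 32-bit word */
static void
vxlan_set_vni_old(struct rte_vxlan_hdr *h, uint32_t vni)
{
	h->vx_flags = rte_cpu_to_be_32(0x08000000); /* I flag in top byte */
	h->vx_vni = rte_cpu_to_be_32(vni << 8);     /* VNI (24) + reserved (8) */
}

/* new fields: same bytes, written directly, no byte swapping */
static void
vxlan_set_vni_new(struct rte_vxlan_hdr *h, uint32_t vni)
{
	memset(h, 0, sizeof(*h));  /* clear the reserved bytes */
	h->flags = 0x08;           /* I flag */
	h->vni[0] = vni >> 16;     /* VNI bytes in network order */
	h->vni[1] = vni >> 8;
	h->vni[2] = vni;
}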

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd..997fc784fc 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.36.1



* [PATCH 3/8] ethdev: use VXLAN protocol struct for flow matching
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 1/8] ethdev: use Ethernet protocol struct for flow matching Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 2/8] net: add smaller fields for VXLAN Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 4/8] ethdev: use GRE " Thomas Monjalon
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, andrew.rybchenko, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Dongdong Liu,
	Yisen Zhuang, Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu,
	Wenjun Wu, Matan Azrad, Viacheslav Ovsiienko

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
instead of the redundant fields in the flow items.
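
For instance, an application matching a given VNI can now fill the flow
item through the header struct (a minimal sketch with arbitrary values,
assuming <rte_flow.h> is included):

struct rte_flow_item_vxlan vxlan_spec = {
	.hdr.vni = { 0x12, 0x34, 0x56 }, /* VNI 0x123456, network byte order */
};
struct rte_flow_item_vxlan vxlan_mask = {
	.hdr.vni = "\xff\xff\xff", /* match the VNI only, like the default mask */
};
struct rte_flow_item item = {
	.type = RTE_FLOW_ITEM_TYPE_VXLAN,
	.spec = &vxlan_spec,
	.mask = &vxlan_mask,
};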

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 +++++++++++-----------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 44 ++++++++++++------------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 +++++++++----
 18 files changed, 128 insertions(+), 124 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa..f1d5931325 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a1..a58245239b 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 68fb4b3fe4..fcbd0a2534 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3950,7 +3950,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3958,7 +3958,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4176,7 +4176,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7415,7 +7415,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7469,7 +7469,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 4323681b86..92d3168d39 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 368b857e20..52c43bb652 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,7 +87,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f66049340..4a107e81e9 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced..80869b79c3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982d..e88f9b7e45 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061..2855b14fe6 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e..7cb20fa0b4 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d..d20a29b9a2 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 6bcd4f7126..1cbebc9a3e 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0f88fa41d9..10f6abeb07 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -308,7 +308,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2816,8 +2816,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2857,8 +2857,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -2928,14 +2928,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 4987631e79..a06b6e1860 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9244,8 +9244,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	uint16_t dport;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	if (inner) {
@@ -9287,12 +9287,12 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 		misc_m = MLX5_ADDR_OF(fte_match_param,
 				      matcher, misc_parameters);
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-		memcpy(vni_m, vxlan_m->vni, size);
+		memcpy(vni_m, vxlan_m->hdr.vni, size);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+			vni_v[i] = vni_m[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
@@ -9303,18 +9303,18 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_m,
 						   tunnel_header_1);
-	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	*tunnel_header_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	if (*tunnel_header_v)
-		*tunnel_header_m = vxlan_m->vni[0] |
-			vxlan_m->vni[1] << 8 |
-			vxlan_m->vni[2] << 16;
+		*tunnel_header_m = vxlan_m->hdr.vni[0] |
+			vxlan_m->hdr.vni[1] << 8 |
+			vxlan_m->hdr.vni[2] << 16;
 	else
 		*tunnel_header_m = 0x0;
-	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
-	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
-		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
+	*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
+	if (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+		*tunnel_header_m |= vxlan_m->hdr.rsvd1 << 24;
 }
 
 /**
@@ -9349,7 +9349,7 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key,
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9366,17 +9366,17 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key,
 		if (!vxlan_m)
 			vxlan_m = &rte_flow_item_vxlan_gpe_mask;
 	}
-	memcpy(vni_m, vxlan_m->vni, size);
+	memcpy(vni_m, vxlan_m->hdr.vni, size);
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vni_m[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m);
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.proto;
+	v_protocol = vxlan_v->hdr.proto;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index a8b84a2119..facab1b313 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -778,9 +778,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -820,9 +820,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb..fe1f5ba55f 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13..aab697b204 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index cddbe74c33..6045a352ae 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -987,7 +987,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1204,18 +1204,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.36.1



* [PATCH 4/8] ethdev: use GRE protocol struct for flow matching
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (2 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 3/8] ethdev: use VXLAN protocol struct for flow matching Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-26  8:45   ` David Marchand
  2022-10-25 21:44 ` [PATCH 5/8] ethdev: use GTP " Thomas Monjalon
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, andrew.rybchenko, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Hemant Agrawal,
	Sachin Saxena, Matan Azrad, Viacheslav Ovsiienko

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GRE header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
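
In particular, the GRE flag bits can now be matched through the header
bitfields instead of masking c_rsvd0_ver by hand (a minimal sketch, not
taken from the patch; assumes <rte_flow.h> is included):

struct rte_flow_item_gre gre_spec = {
	.hdr.k = 1, /* key bit present */
	.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
};
struct rte_flow_item_gre gre_mask = {
	.hdr.k = 1,
	.hdr.proto = RTE_BE16(0xffff),
};

The old-style equivalent of the k-bit mask would have been
.c_rsvd0_ver = RTE_BE16(0x2000) together with .protocol = RTE_BE16(0xffff).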

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              |  6 +++---
 doc/guides/prog_guide/rte_flow.rst       |  6 +-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++--------
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++------
 drivers/net/mlx5/mlx5_flow.c             | 22 +++++++++-----------
 drivers/net/mlx5/mlx5_flow_dv.c          | 26 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++++-------
 10 files changed, 60 insertions(+), 61 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239b..0f19e5e536 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fcbd0a2534..3c2d090a08 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4037,7 +4037,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -7752,7 +7752,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7850,7 +7850,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 92d3168d39..b6ffb03b01 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``:  header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches a GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
   item. The bit flags need be set with GRE item by application. When the items
   present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 52c43bb652..8e7af28318 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3..280ddc0d94 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,10 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
-			      ULP_PRSR_ACT_DEFAULT);
-
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
-	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e60650..8a6d44da48 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 10f6abeb07..b4a560c18a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -304,7 +304,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -2987,8 +2987,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3063,21 +3062,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3128,8 +3124,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3157,7 +3155,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a06b6e1860..f70018be50 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9019,8 +9019,8 @@ flow_dv_translate_item_gre(void *matcher, void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(*(const uint16_t *)&gre_m->hdr);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(*(const uint16_t *)&gre_v->hdr);
 	MLX5_SET(fte_match_set_misc, misc_m, gre_c_present,
 		 gre_crks_rsvd0_ver_m.c_present);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
@@ -9036,8 +9036,8 @@ flow_dv_translate_item_gre(void *matcher, void *key,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9101,8 +9101,8 @@ flow_dv_translate_item_gre_option(void *matcher, void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9112,8 +9112,8 @@ flow_dv_translate_item_gre_option(void *matcher, void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = *(const uint16_t *)&gre_v->hdr;
+	c_rsvd0_ver_m = *(const uint16_t *)&gre_m->hdr;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9183,12 +9183,14 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index facab1b313..46f961cbb2 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -959,10 +959,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = *(const uint16_t *)&spec->hdr;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = *(const uint16_t *)&mask->hdr;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 6045a352ae..fd9be56e31 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1069,19 +1069,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
-- 
2.36.1
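
The anonymous union in rte_flow_item_gre above keeps the old spelling
valid while the new hdr member aliases the same bytes. A minimal sketch
(not part of the patch) of what that compatibility means for callers:

#include <assert.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

static void
gre_item_compat_check(void)
{
	struct rte_flow_item_gre gre = {
		/* New API: protocol type through the lib/net header. */
		.hdr.proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_TEB),
	};

	/* Old API: the legacy field reads the very same bytes. */
	assert(gre.protocol == gre.hdr.proto);
}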


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH 5/8] ethdev: use GTP protocol struct for flow matching
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (3 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 4/8] ethdev: use GRE " Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 6/8] ethdev: use ARP " Thomas Monjalon
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev
  Cc: ferruh.yigit, andrew.rybchenko, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
	Matan Azrad, Viacheslav Ovsiienko

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
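
For illustration only, a short sketch (not taken from this patch) of a
GTP match written with the new hdr-based names; the TEID value 1234 is
an arbitrary example:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match GTP packets carrying TEID 1234, everything else wildcarded. */
static const struct rte_flow_item_gtp gtp_spec = {
	.hdr.teid = RTE_BE32(1234),
};
static const struct rte_flow_item_gtp gtp_mask = {
	.hdr.teid = RTE_BE32(UINT32_MAX),
};
static const struct rte_flow_item gtp_pattern_item = {
	.type = RTE_FLOW_ITEM_TYPE_GTP,
	.spec = &gtp_spec,
	.mask = &gtp_mask,
};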

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c       |  4 ++--
 app/test-pmd/cmdline_flow.c          |  8 +++----
 doc/guides/prog_guide/rte_flow.rst   | 10 ++-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 drivers/net/i40e/i40e_fdir.c         | 14 ++++++------
 drivers/net/i40e/i40e_flow.c         | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c         |  8 +++----
 drivers/net/ice/ice_fdir_filter.c    | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c  | 12 +++++------
 drivers/net/mlx5/mlx5_flow_dv.c      | 24 ++++++++++-----------
 lib/ethdev/rte_flow.h                | 32 ++++++++++++++++++----------
 11 files changed, 73 insertions(+), 70 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e536..55eb6f5cf0 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 3c2d090a08..90da247eaf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4103,19 +4103,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11135,7 +11135,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b6ffb03b01..e8512e0a03 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8e7af28318..b4b97d3165 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb..47f79ecf11 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe6..3c550733f2 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b..811a10287b 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc29..480b369af8 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4..110d8895fe 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f70018be50..5d1ca7d658 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2391,9 +2391,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2411,7 +2411,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2462,8 +2462,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -10298,16 +10298,16 @@ flow_dv_translate_item_gtp(void *matcher, void *key,
 	if (!gtp_m)
 		gtp_m = &rte_flow_item_gtp_mask;
 	MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags,
-		 gtp_m->v_pt_rsv_flags);
+		 gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
-	MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
+	MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_m->hdr.teid));
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index fd9be56e31..02c6cc9981 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1148,23 +1148,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH 6/8] ethdev: use ARP protocol struct for flow matching
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (4 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 5/8] ethdev: use GTP " Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 7/8] doc: fix description of L2TPV2 flow item Thomas Monjalon
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, andrew.rybchenko, Ori Kam, Aman Singh, Yuying Zhang

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.
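
For illustration only, a hedged sketch (not taken from this patch) of an
ARP match using the new hdr-based names; the opcode and target address
values are arbitrary examples:

#include <stdint.h>
#include <rte_arp.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_flow.h>

/* Match ARP replies whose target IPv4 address is 192.168.0.1. */
static const struct rte_flow_item_arp_eth_ipv4 arp_spec = {
	.hdr.arp_opcode = RTE_BE16(RTE_ARP_OP_REPLY),
	.hdr.arp_data.arp_tip = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
};
static const struct rte_flow_item_arp_eth_ipv4 arp_mask = {
	.hdr.arp_opcode = RTE_BE16(UINT16_MAX),
	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
};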

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 90da247eaf..84e1ed039f 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4192,7 +4192,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4200,7 +4200,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4208,7 +4208,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4216,7 +4216,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e8512e0a03..421c6407a9 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``: header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b4b97d3165..af266cf9f4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -66,7 +66,6 @@ Deprecation Notices
 
   These items are not compliant (not including struct from lib/net/):
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 02c6cc9981..d890c253b4 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -19,6 +19,7 @@
 
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1254,26 +1255,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH 7/8] doc: fix description of L2TPV2 flow item
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (5 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 6/8] ethdev: use ARP " Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-25 21:44 ` [PATCH 8/8] net: mark all big endian types Thomas Monjalon
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, andrew.rybchenko, Ori Kam

The flow item structure includes the protocol definition
from the directory lib/net/, so it is reflected in the guide.

Section title underlining is also fixed in the surrounding sections.
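
For illustration, a rough sketch of a match on the flags/version word;
the hdr.common.flags_version path and the mask value 0xcb0f are
assumptions based on rte_l2tpv2.h and the default item mask, not taken
from this patch:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Assumed field path (hdr.common.flags_version) from rte_l2tpv2.h. */
static const struct rte_flow_item_l2tpv2 l2tpv2_spec = {
	.hdr.common.flags_version = RTE_BE16(0x0002), /* version 2, no options */
};
static const struct rte_flow_item_l2tpv2 l2tpv2_mask = {
	.hdr.common.flags_version = RTE_BE16(0xcb0f), /* flag and version bits */
};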

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 421c6407a9..007051f070 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``: header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH 8/8] net: mark all big endian types
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (6 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 7/8] doc: fix description of L2TPV2 flow item Thomas Monjalon
@ 2022-10-25 21:44 ` Thomas Monjalon
  2022-10-26  8:41   ` David Marchand
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2022-10-25 21:44 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, andrew.rybchenko, Olivier Matz

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16 and 32-bit fields.
It was correct but not conveying the big endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.
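
A small sketch (not from the patch) of why the annotation matters: once
the fields are rte_be16_t, endianness checkers such as sparse can flag a
raw host-order store, and explicit conversions document the byte order:

#include <rte_arp.h>
#include <rte_byteorder.h>
#include <rte_ether.h>

static void
arp_request_fill(struct rte_arp_hdr *arp)
{
	/* Explicit CPU-to-BE conversions; assigning plain host-order
	 * integers here would now be reported by sparse. */
	arp->arp_hardware = rte_cpu_to_be_16(RTE_ARP_HRD_ETHER);
	arp->arp_protocol = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
	arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
	arp->arp_hlen = RTE_ETHER_ADDR_LEN;
	arp->arp_plen = 4;
}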

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/net/rte_arp.h   | 28 ++++++++++++++--------------
 lib/net/rte_higig.h |  6 +++---
 lib/net/rte_mpls.h  |  2 +-
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314..151e6c641f 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /**< format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /**< ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /**< format of protocol address */
+	uint8_t    arp_hlen;     /**< length of hardware address */
+	uint8_t    arp_plen;     /**< length of protocol address */
+	rte_be16_t arp_opcode;   /**< ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /**< request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /**< response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /**< request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /**< response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /**< request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /**< response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db..bba3898a88 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec3..51523e7a11 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* Re: [PATCH 8/8] net: mark all big endian types
  2022-10-25 21:44 ` [PATCH 8/8] net: mark all big endian types Thomas Monjalon
@ 2022-10-26  8:41   ` David Marchand
  0 siblings, 0 replies; 90+ messages in thread
From: David Marchand @ 2022-10-26  8:41 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, ferruh.yigit, andrew.rybchenko, Olivier Matz

On Tue, Oct 25, 2022 at 11:46 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
> index b55fb1a7db..bba3898a88 100644
> --- a/lib/net/rte_higig.h
> +++ b/lib/net/rte_higig.h
> @@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
>   */
>  __extension__
>  struct rte_higig2_ppt_type1 {
> -       uint16_t classification;
> -       uint16_t resv;
> -       uint16_t vid;
> +       rte_be16_t classification;
> +       rte_be16_t resv;
> +       rte_be16_t vid;

Compiling with sparse (from OVS dpdk-latest), there are, at least,
some annotations missing in the public headers for higig2.
lib/ethdev/rte_flow.h:644:                      .classification = 0xffff,
lib/ethdev/rte_flow.h:645:                      .vid = 0xfff,

And the 0xfff mask for a 16-bit field (vid) is suspicious, isn't it?
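
For context, a VLAN ID occupies only the low 12 bits of the 16-bit word,
so 0xfff may be a deliberate 12-bit mask rather than a typo; once the
field is typed rte_be16_t, the initializer would nevertheless need an
explicit annotation, along the lines of this sketch:

#include <rte_byteorder.h>

/* Hypothetical annotated initializer; 0x0fff covers the 12-bit VID. */
static const rte_be16_t higig2_vid_mask = RTE_BE16(0x0fff);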


>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>         uint16_t opcode:3;
>         uint16_t resv1:2;


-- 
David Marchand


^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH 4/8] ethdev: use GRE protocol struct for flow matching
  2022-10-25 21:44 ` [PATCH 4/8] ethdev: use GRE " Thomas Monjalon
@ 2022-10-26  8:45   ` David Marchand
  2023-01-20 17:21     ` Ferruh Yigit
  0 siblings, 1 reply; 90+ messages in thread
From: David Marchand @ 2022-10-26  8:45 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, ferruh.yigit, andrew.rybchenko, Wisam Jaddo, Ori Kam,
	Aman Singh, Yuying Zhang, Ajit Khaparde, Somnath Kotur,
	Hemant Agrawal, Sachin Saxena, Matan Azrad, Viacheslav Ovsiienko

On Tue, Oct 25, 2022 at 11:45 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 6045a352ae..fd9be56e31 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -1069,19 +1069,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
>   *
>   * Matches a GRE header.
>   */
> +RTE_STD_C11
>  struct rte_flow_item_gre {
> -       /**
> -        * Checksum (1b), reserved 0 (12b), version (3b).
> -        * Refer to RFC 2784.
> -        */
> -       rte_be16_t c_rsvd0_ver;
> -       rte_be16_t protocol; /**< Protocol type. */
> +       union {
> +               struct {
> +                       /*
> +                        * These are old fields kept for compatibility.
> +                        * Please prefer hdr field below.
> +                        */
> +                       /**
> +                        * Checksum (1b), reserved 0 (12b), version (3b).
> +                        * Refer to RFC 2784.
> +                        */
> +                       rte_be16_t c_rsvd0_ver;
> +                       rte_be16_t protocol; /**< Protocol type. */
> +               };
> +               struct rte_gre_hdr hdr; /**< GRE header definition. */
> +       };
>  };
>
>  /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
>  #ifndef __cplusplus
>  static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
> -       .protocol = RTE_BE16(0xffff),
> +       .hdr.proto = RTE_BE16(UINT16_MAX),


The proto field in struct rte_gre_hdr from lib/net lacks endianness annotation.
This triggers a sparse warning (from OVS dpdk-latest build):

/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
error: incorrect type in initializer (different base types)
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
expected unsigned short [usertype] proto
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
got restricted ovs_be16 [usertype]
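
The warning points at the missing annotation in lib/net/rte_gre.h; an
illustrative fragment of the kind of fix it calls for (the struct and
field names here are a sketch, not the applied change):

#include <rte_byteorder.h>

struct gre_hdr_annotated_sketch {
	/* c/k/s flags, reserved bits and version elided for brevity */
	rte_be16_t proto; /**< Protocol type, explicitly big endian. */
};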


-- 
David Marchand


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v2 0/8] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (7 preceding siblings ...)
  2022-10-25 21:44 ` [PATCH 8/8] net: mark all big endian types Thomas Monjalon
@ 2023-01-20 17:18 ` Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (8 more replies)
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                   ` (4 subsequent siblings)
  13 siblings, 9 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items,
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.


v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly (see the sketch after this list)
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp
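
A rough sketch of the combined c_rsvd0_ver layout mentioned in the
changelog above; the names are assumptions derived from the changelog,
not copied from the applied patch:

#include <rte_byteorder.h>

struct gre_hdr_v2_sketch {
	union {
		/* per-bit c/k/s flag and version bitfields (elided) */
		rte_be16_t c_rsvd0_ver; /* same 16 bits as one BE word */
	};
	rte_be16_t proto; /* protocol type (EtherType) */
};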


Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

For series,
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 180 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |   6 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 ++++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  48 +++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 178 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c       |  46 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  21 +--
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 ++++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 117 ++++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   7 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 982 insertions(+), 947 deletions(-)

--
2.25.1


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-01-20 17:18   ` Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 2/8] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet (and VLAN) header structures are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.
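
As an illustration (not from the patch), a match on a destination MAC
plus VLAN ID written with the header-based names; the addresses and VID
are arbitrary examples:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Match destination MAC 02:00:00:00:00:01 carrying VLAN ID 100. */
static const struct rte_flow_item_eth eth_spec = {
	.hdr.dst_addr.addr_bytes = "\x02\x00\x00\x00\x00\x01",
	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_VLAN),
};
static const struct rte_flow_item_eth eth_mask = {
	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
	.hdr.ether_type = RTE_BE16(UINT16_MAX),
};
static const struct rte_flow_item_vlan vlan_spec = {
	.hdr.vlan_tci = RTE_BE16(100), /* PCP/DEI zero, VID 100 */
};
static const struct rte_flow_item_vlan vlan_mask = {
	.hdr.vlan_tci = RTE_BE16(0x0fff), /* match only the VID bits */
};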

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
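
All of the testpmd encap/decap helpers above share one serialization
pattern: fill the flow item structs through the new ``hdr`` union member,
then append each struct to the raw header buffer and advance the cursor.
A minimal sketch of that pattern, with ``conf_dst``/``conf_src`` standing
in for the ``*_encap_conf`` fields (assumes <rte_flow.h> and <rte_ether.h>):

    /* Sketch: build a raw L2 header from an item struct (new hdr API). */
    struct rte_flow_item_eth eth = {
            .hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4),
    };
    uint8_t raw[128];
    uint8_t *header = raw;

    /* conf_dst/conf_src: placeholder 6-byte MAC arrays. */
    memcpy(eth.hdr.dst_addr.addr_bytes, conf_dst, RTE_ETHER_ADDR_LEN);
    memcpy(eth.hdr.src_addr.addr_bytes, conf_src, RTE_ETHER_ADDR_LEN);
    memcpy(header, &eth, sizeof(eth)); /* append the item struct */
    header += sizeof(eth);             /* cursor now past the L2 part */
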
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type`` field.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
 
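
Per the updated text, VLAN presence is expressed with ``has_vlan`` and
field matching goes through ``hdr``. For instance (a sketch, not part of
the patch), matching IPv4 on VID 123 while keeping the documented
VID-only default for the TCI:

    struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };
    struct rte_flow_item_vlan vlan_spec = {
            .hdr.vlan_tci = RTE_BE16(123),  /* VID 123, PCP/DEI zero */
            .hdr.eth_proto = RTE_BE16(RTE_ETHER_TYPE_IPV4),
    };
    struct rte_flow_item_vlan vlan_mask = {
            .hdr.vlan_tci = RTE_BE16(0x0fff), /* match VID bits only */
            .hdr.eth_proto = RTE_BE16(0xffff),
    };
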
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e18ac344ef8e..53b10b51d81a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,6 +58,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In the future (target: DPDK 23.11), the individual protocol header fields
+  will be removed and only the protocol header struct will remain.
 
   These items are not compliant (not including struct from lib/net/):
 
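
The union mentioned as the first migration step gives each item roughly
this shape (a sketch of the expected layout; the exact definition lives
in rte_flow.h):

    struct rte_flow_item_eth {
            union {
                    struct {
                            /* Legacy names, kept for compatibility. */
                            struct rte_ether_addr dst;
                            struct rte_ether_addr src;
                            rte_be16_t type;
                    };
                    struct rte_ether_hdr hdr; /* struct from lib/net/ */
            };
            uint32_t has_vlan:1;
            uint32_t reserved:31;
    };
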
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
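
The bnxt rule above — a MAC mask must be all ones or all zeros — reduces
to a small predicate. A sketch with a hypothetical helper name:

    #include <stdbool.h>
    #include <rte_ether.h>

    /* Hypothetical helper capturing the rule above: a MAC mask is usable
     * only when fully set (exact match) or fully clear (ignored). */
    static bool
    mac_mask_is_exact_or_unused(const struct rte_ether_addr *mask)
    {
            return rte_is_zero_ether_addr(mask) ||
                   rte_is_broadcast_ether_addr(mask);
    }

A parser would then reject the item when either ``hdr.src_addr`` or
``hdr.dst_addr`` of the mask fails this predicate.
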
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
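
The ``sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type)``
expressions above rely on ``sizeof`` not evaluating its operand: the
member size is computed at compile time and the null pointer is never
dereferenced:

    /* Compile-time member size; no object needed. */
    size_t sz = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type); /* 2 */
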
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index e0364ef015e0..3a43b7f3ef6f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
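
The spec/mask pair above matches any MAC (address masks all zero) with an
exact EtherType of 0x8809 (slow protocols, i.e. LACP). A sketch of how
such a pair is typically wired into a pattern; the actual array in the
driver may differ:

    static struct rte_flow_item flow_item_8023ad_sketch[] = {
            {
                    .type = RTE_FLOW_ITEM_TYPE_ETH,
                    .spec = &flow_item_eth_type_8023ad,
                    .mask = &flow_item_eth_mask_type_8023ad,
            },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
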
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
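
These per-item ``dmask`` entries are the defaults cxgbe applies when the
application supplies no mask, mirroring the generic
``rte_flow_item_*_mask`` fall-back used elsewhere in this series (see the
enic hunk below):

    /* Fall back to the item's default mask when none was given. */
    const struct rte_flow_item_eth *mask = item->mask;

    if (mask == NULL)
            mask = &rte_flow_item_eth_mask;
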
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
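
A byte-order note that applies across these hunks: ``hdr.ether_type``,
``hdr.vlan_tci`` and ``hdr.eth_proto`` are big endian (``rte_be16_t``),
so full-match tests compare against ``RTE_BE16()`` constants, and
conversion happens only where host-order arithmetic is needed:

    /* Compare in big endian; convert only for host-order use. */
    if (eth_mask->hdr.ether_type == RTE_BE16(0xffff))
            ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
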
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If protocol is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
 
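
The ``>> 13`` in the i40e queue-region code extracts the 3-bit PCP (user
priority) from the TCI. The 16-bit TCI splits as PCP(3) | DEI(1) |
VID(12), which is also why the documented default VLAN mask covers only
the low 12 bits:

    /* Decompose a TCI once converted to host order. */
    uint16_t tci = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
    uint16_t vid = tci & 0x0fff;      /* lower 12 bits */
    uint8_t  dei = (tci >> 12) & 0x1; /* drop eligible indicator */
    uint8_t  pcp = tci >> 13;         /* user priority, 0..7 */
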
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 or IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
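
As the igc hunk shows, ether_type is stored big-endian in the item and
must be converted before comparing against host-order constants. A
hedged sketch of that check in isolation (helper name illustrative):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Illustrative: IPv4/IPv6 are matched by dedicated items, so an
     * Ethernet item carrying those ether-types is rejected. */
    static int
    check_ether_type(const struct rte_flow_item_eth *spec)
    {
        uint16_t ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);

        if (ether_type == RTE_ETHER_TYPE_IPV4 ||
            ether_type == RTE_ETHER_TYPE_IPV6)
            return -1;
        return 0;
    }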
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
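
The mlx4 hunks cover the other common conversion site: designated
initializers, where nested designators reach through the union exactly
like field accesses do. A minimal sketch reusing the mask values seen
above (the variable names are illustrative):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Illustrative masks: match the destination MAC only, and the
     * 12-bit VLAN ID only (PCP/DEI bits of the TCI left unmasked). */
    static const struct rte_flow_item_eth dmac_only_mask = {
        .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
    };
    static const struct rte_flow_item_vlan vid_only_mask = {
        .hdr.vlan_tci = RTE_BE16(0x0fff),
    };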
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
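
In mlx5dr_definer.c the rename lands inside an X-macro table that
expands each row into a field accessor, so only the field expression in
the row changes. A deliberately reduced sketch of the expansion pattern
(row shape and names simplified for illustration; not the driver's
exact macro):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Illustrative X-macro: each row names a getter and the item
     * field it reads. */
    #define SKETCH_FIELDS \
        X(eth_type, v->hdr.ether_type, rte_flow_item_eth) \
        X(vlan_tci, v->hdr.vlan_tci,   rte_flow_item_vlan)

    #define X(name, field, item_type) \
    static rte_be16_t \
    sketch_##name##_get(const struct item_type *v) \
    { \
        return field; \
    }
    SKETCH_FIELDS
    #undef X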
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single/multi/un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
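
A recurring invariant in the mlx5 translation code above is that the
value programmed into the matcher stays within the mask ("the value
must be in the range of the mask"). A standalone sketch of that masking
for the destination MAC bytes (helper name illustrative):

    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Illustrative: AND each value byte with its mask byte before
     * programming, so unmasked bits never leak into the match key. */
    static void
    masked_dmac(const struct rte_flow_item_eth *v,
                const struct rte_flow_item_eth *m,
                uint8_t out[RTE_ETHER_ADDR_LEN])
    {
        unsigned int i;

        for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
            out[i] = m->hdr.dst_addr.addr_bytes[i] &
                     v->hdr.dst_addr.addr_bytes[i];
    }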
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
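
The mlx5_trigger.c hunks rebuild the per-VLAN control flows; only the
location of the TCI changes. A sketch of building one such spec from a
host-order VLAN ID (the item keeps the TCI big-endian, hence the
conversion; function name illustrative):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Illustrative: wrap a host-order VLAN ID into a flow item spec. */
    static struct rte_flow_item_vlan
    make_vlan_spec(uint16_t vlan_id)
    {
        struct rte_flow_item_vlan spec = {
            .hdr.vlan_tci = rte_cpu_to_be_16(vlan_id),
        };

        return spec;
    }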
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d6c8c8992149..113e71a6aebc 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 2/8] net: add smaller fields for VXLAN
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-01-20 17:18   ` Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers were combining reserved fields
with other fields in big uint32_t struct members.

More precise definitions are added as a union with the old ones.

The new struct members are smaller in size and have shorter names.
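
For illustration, both views alias the same bits; a minimal sketch
(the helper below is hypothetical, not part of this patch):

    #include <rte_byteorder.h>
    #include <rte_debug.h>
    #include <rte_vxlan.h>

    /* Hypothetical helper: read the 24-bit VNI from a VXLAN header.
     * The old view shifts the big-endian vx_vni word; the new view
     * assembles the same value from the vni[3] bytes.
     */
    static inline uint32_t
    vxlan_hdr_vni(const struct rte_vxlan_hdr *hdr)
    {
            uint32_t from_old = rte_be_to_cpu_32(hdr->vx_vni) >> 8;
            uint32_t from_new = ((uint32_t)hdr->vni[0] << 16) |
                                ((uint32_t)hdr->vni[1] << 8) |
                                hdr->vni[2];

            RTE_ASSERT(from_old == from_new);
            return from_new;
    }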

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 2/8] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-01-20 17:18   ` Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 4/8] ethdev: use GRE " Ferruh Yigit
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
instead of the redundant fields in the flow items.
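
For example, a VNI match written against the new names (a sketch;
the old "vni" spelling stays valid through the anonymous union):

    #include <rte_flow.h>

    /* Sketch: match VNI 0x123456, masking the VNI bytes only. */
    static const struct rte_flow_item_vxlan vxlan_spec = {
            .hdr.vni = { 0x12, 0x34, 0x56 },
    };
    static const struct rte_flow_item_vxlan vxlan_mask = {
            .hdr.vni = "\xff\xff\xff",
    };
    static const struct rte_flow_item vxlan_item = {
            .type = RTE_FLOW_ITEM_TYPE_VXLAN,
            .spec = &vxlan_spec,
            .mask = &vxlan_mask,
    };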

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``: header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``: header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 53b10b51d81a..638051789d19 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -85,7 +85,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-01-20 17:18   ` [PATCH v2 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-01-20 17:18   ` Ferruh Yigit
  2023-01-20 17:18   ` [PATCH v2 5/8] ethdev: use GTP " Ferruh Yigit
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GRE header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
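
For instance, requiring the key-present bit through the new bitfield
view (a sketch; the same check previously needed a 0x2000 mask on
c_rsvd0_ver):

    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Sketch: GRE with the K bit set, carrying MPLS unicast. */
    static const struct rte_flow_item_gre gre_spec = {
            .hdr.k = 1,
            .hdr.proto = RTE_BE16(RTE_ETHER_TYPE_MPLS),
    };
    static const struct rte_flow_item_gre gre_mask = {
            .hdr.k = 1,
            .hdr.proto = RTE_BE16(0xffff),
    };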

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              | 14 ++++++-------
 doc/guides/prog_guide/rte_flow.rst       |  6 +-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++------
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++------
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 ++++----
 drivers/net/mlx5/mlx5_flow.c             | 22 +++++++++-----------
 drivers/net/mlx5/mlx5_flow_dv.c          | 26 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++++----
 drivers/net/nfp/nfp_flow.c               |  9 ++++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++++-------
 lib/net/rte_gre.h                        |  5 +++++
 13 files changed, 81 insertions(+), 70 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     c_rsvd0_ver)),
+					     hdr.c_rsvd0_ver)),
 	},
 	[ITEM_GRE_C_BIT] = {
 		.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x80\x00\x00\x00")),
 	},
 	[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
 		.help = "sequence number bit (S)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x10\x00\x00\x00")),
 	},
 	[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
 		.help = "key bit (K)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x20\x00\x00\x00")),
 	},
 	[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..603e1b866be3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``: header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches a GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
   item. The bit flags need be set with GRE item by application. When the items
   present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 638051789d19..80bf7209065a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3fe..c1e231ce8c49 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
+			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
+			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e6065097..8a6d44da4875 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..3a438f2c9d12 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
 	X(SET,		tag,			v->data,		rte_flow_item_tag) \
 	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
-	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
-	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
+	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
+	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
 	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
 	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
 	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
@@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->c_rsvd0_ver) {
+	if (m->hdr.c_rsvd0_ver) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
@@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
 	}
 
-	if (m->protocol) {
+	if (m->hdr.proto) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ff08a629e2c6..7b19c5f03f5d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..1165b29a5cb7 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 		gre_v = gre_m;
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		gre_m = gre_v;
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
 		 gre_crks_rsvd0_ver_v.c_present &
 		 gre_crks_rsvd0_ver_m.c_present);
@@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
+	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 4ef4f3044515..956df2274b38 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 113e71a6aebc..f3d42b9feb86 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
 		.mask_support = &(const struct rte_flow_item_gre){
-			.c_rsvd0_ver = RTE_BE16(0xa000),
-			.protocol = RTE_BE16(0xffff),
+			.hdr.c = 1,
+			.hdr.k = 1,
+			.hdr.proto = RTE_BE16(0xffff),
 		},
 		.mask_default = &rte_flow_item_gre_mask,
 		.mask_sz = sizeof(struct rte_flow_item_gre),
@@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 	memset(set_tun, 0, act_set_size);
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
@@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv6->hdr.hop_limits, tos);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..3ae89e367c16 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..210b81c99018 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -28,6 +28,8 @@ extern "C" {
  */
 __extension__
 struct rte_gre_hdr {
+	union {
+		struct {
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t res2:4; /**< Reserved */
 	uint16_t s:1;    /**< Sequence Number Present bit */
@@ -45,6 +47,9 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
+		};
+		rte_be16_t c_rsvd0_ver;
+	};
 	uint16_t proto;  /**< Protocol Type */
 } __rte_packed;
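
For illustration only (not part of this patch), a minimal sketch of the
new layout in use: matching GRE with the key bit set and an IPv4 payload
through the hdr field; the values are arbitrary:

    #include <rte_flow.h>

    /* Spec/mask pair filled via the protocol header definition. */
    static const struct rte_flow_item_gre gre_spec = {
        .hdr.k = 1, /* key present bit */
        .hdr.proto = RTE_BE16(RTE_ETHER_TYPE_IPV4),
    };
    static const struct rte_flow_item_gre gre_mask = {
        .hdr.k = 1,
        .hdr.proto = RTE_BE16(0xffff),
    };
    static const struct rte_flow_item gre_item = {
        .type = RTE_FLOW_ITEM_TYPE_GRE,
        .spec = &gre_spec,
        .mask = &gre_mask,
    };

The legacy c_rsvd0_ver/protocol names keep compiling through the
anonymous union.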
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 5/8] ethdev: use GTP protocol struct for flow matching
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-01-20 17:18   ` [PATCH v2 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-01-20 17:18   ` Ferruh Yigit
  2023-01-20 17:19   ` [PATCH v2 6/8] ethdev: use ARP " Ferruh Yigit
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
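
As a minimal usage sketch (illustrative values, not part of the patch
itself), matching on the GTP TEID with the new field names; the outer
eth/ipv4/udp items are omitted:

    #include <rte_flow.h>

    static const struct rte_flow_item_gtp gtp_spec = {
        .hdr.teid = RTE_BE32(0x1234), /* tunnel endpoint id to match */
    };
    static const struct rte_flow_item_gtp gtp_mask = {
        .hdr.teid = RTE_BE32(UINT32_MAX),
    };
    static const struct rte_flow_item gtp_item = {
        .type = RTE_FLOW_ITEM_TYPE_GTP,
        .spec = &gtp_spec,
        .mask = &gtp_mask,
    };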

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 18 +++++++--------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 77 insertions(+), 74 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 80bf7209065a..b89450b239ef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 3a438f2c9d12..127cebcf3e11 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1165b29a5cb7..d76d6a0ef086 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -10358,11 +10358,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3ae89e367c16..85ca73d1dc04 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1149,23 +1149,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-01-20 17:18   ` [PATCH v2 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-01-20 17:19   ` Ferruh Yigit
  2023-01-20 17:19   ` [PATCH v2 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:19 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.
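
A minimal usage sketch (illustrative address, not part of the patch
itself): matching ARP replies for a given sender IPv4 address with the
new field names:

    #include <rte_flow.h>

    static const struct rte_flow_item_arp_eth_ipv4 arp_spec = {
        .hdr.arp_opcode = RTE_BE16(RTE_ARP_OP_REPLY),
        .hdr.arp_data.arp_sip = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
    };
    static const struct rte_flow_item_arp_eth_ipv4 arp_mask = {
        .hdr.arp_opcode = RTE_BE16(0xffff),
        .hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
    };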

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``:  header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b89450b239ef..8e3683990117 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -64,7 +64,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 7/8] doc: fix description of L2TPV2 flow item
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-01-20 17:19   ` [PATCH v2 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-01-20 17:19   ` Ferruh Yigit
  2023-01-20 17:19   ` [PATCH v2 8/8] net: mark all big endian types Ferruh Yigit
  2023-01-22 10:52   ` [PATCH v2 0/8] start cleanup of rte_flow_item_* David Marchand
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:19 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Jie Wang, Andrew Rybchenko, Wenjun Wu,
	Ferruh Yigit
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol definition
from the directory lib/net/, so this is now reflected in the guide.

The underlining of nearby section titles is also fixed.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``:  header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v2 8/8] net: mark all big endian types
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-01-20 17:19   ` [PATCH v2 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-01-20 17:19   ` Ferruh Yigit
  2023-01-22 10:52   ` [PATCH v2 0/8] start cleanup of rte_flow_item_* David Marchand
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:19 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16-bit and 32-bit fields.
This was correct but did not convey the big endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.
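
A minimal sketch of how the annotated fields are meant to be filled
(illustrative values, not part of the patch itself), going through the
byte-order macros so that tools such as sparse can flag missing
conversions:

    #include <rte_arp.h>
    #include <rte_byteorder.h>

    static const struct rte_arp_hdr arp_template = {
        .arp_hardware = RTE_BE16(RTE_ARP_HRD_ETHER),
        .arp_protocol = RTE_BE16(RTE_ETHER_TYPE_IPV4),
        .arp_hlen = RTE_ETHER_ADDR_LEN,
        .arp_plen = 4, /* IPv4 address length */
        .arp_opcode = RTE_BE16(RTE_ARP_OP_REQUEST),
    };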

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 lib/net/rte_arp.h   | 28 ++++++++++++++--------------
 lib/net/rte_gre.h   |  2 +-
 lib/net/rte_higig.h |  6 +++---
 lib/net/rte_mpls.h  |  2 +-
 4 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..151e6c641fc5 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /** format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /** ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /** format of protocol address */
+	uint8_t    arp_hlen;     /** length of hardware address */
+	uint8_t    arp_plen;     /** length of protocol address */
+	rte_be16_t arp_opcode;   /** ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /** request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /** response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /** request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /** response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /** response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
 		};
 		rte_be16_t c_rsvd0_ver;
 	};
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* Re: [PATCH 4/8] ethdev: use GRE protocol struct for flow matching
  2022-10-26  8:45   ` David Marchand
@ 2023-01-20 17:21     ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-20 17:21 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon
  Cc: dev, andrew.rybchenko, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Hemant Agrawal,
	Sachin Saxena, Matan Azrad, Viacheslav Ovsiienko

On 10/26/2022 9:45 AM, David Marchand wrote:
> On Tue, Oct 25, 2022 at 11:45 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>> index 6045a352ae..fd9be56e31 100644
>> --- a/lib/ethdev/rte_flow.h
>> +++ b/lib/ethdev/rte_flow.h
>> @@ -1069,19 +1069,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
>>   *
>>   * Matches a GRE header.
>>   */
>> +RTE_STD_C11
>>  struct rte_flow_item_gre {
>> -       /**
>> -        * Checksum (1b), reserved 0 (12b), version (3b).
>> -        * Refer to RFC 2784.
>> -        */
>> -       rte_be16_t c_rsvd0_ver;
>> -       rte_be16_t protocol; /**< Protocol type. */
>> +       union {
>> +               struct {
>> +                       /*
>> +                        * These are old fields kept for compatibility.
>> +                        * Please prefer hdr field below.
>> +                        */
>> +                       /**
>> +                        * Checksum (1b), reserved 0 (12b), version (3b).
>> +                        * Refer to RFC 2784.
>> +                        */
>> +                       rte_be16_t c_rsvd0_ver;
>> +                       rte_be16_t protocol; /**< Protocol type. */
>> +               };
>> +               struct rte_gre_hdr hdr; /**< GRE header definition. */
>> +       };
>>  };
>>
>>  /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
>>  #ifndef __cplusplus
>>  static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
>> -       .protocol = RTE_BE16(0xffff),
>> +       .hdr.proto = RTE_BE16(UINT16_MAX),
> 
> 
> The proto field in struct rte_gre_hdr from lib/net lacks endianness annotation.
> This triggers a sparse warning (from OVS dpdk-latest build):
> 
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> error: incorrect type in initializer (different base types)
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> expected unsigned short [usertype] proto
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:1095:22:
> got restricted ovs_be16 [usertype]
> 
> 

Added endianness annotation for the GRE 'proto' field in v2; can you
please check whether it fixes the warning?

^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v2 0/8] start cleanup of rte_flow_item_*
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (7 preceding siblings ...)
  2023-01-20 17:19   ` [PATCH v2 8/8] net: mark all big endian types Ferruh Yigit
@ 2023-01-22 10:52   ` David Marchand
  2023-01-24  9:07     ` Ferruh Yigit
  8 siblings, 1 reply; 90+ messages in thread
From: David Marchand @ 2023-01-22 10:52 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Thomas Monjalon, dev

Hi Ferruh, Thomas,

On Fri, Jan 20, 2023 at 6:19 PM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> There was a plan to have structures from lib/net/ at the beginning
> of corresponding flow item structures.
> Unfortunately this plan has not been followed up so far.
> This series is a step to make the most used items,
> compliant with the inheritance design explained above.
> The old API is kept in anonymous union for compatibility,
> but the code in drivers and apps is updated to use the new API.
>
>
> v2: (by Ferruh)
>  * Rebased on latest next-net for v23.03
>  * 'struct rte_gre_hdr' endianness annotation added to protocol field
>  * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
>  * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
>    field and updated drivers accordingly
>  * more driver code updated for rte_flow_item_gre
>  * more driver code updated for rte_flow_item_gtp
>
>
> Cc: David Marchand <david.marchand@redhat.com>

Note: it is relatively easy to run the OVS checks: you only need a GitHub
fork of OVS with a dpdk-latest branch, plus a small GitHub workflow YAML
update to point at a DPDK repo and branch of yours (see the last commit
in my repo below).

I ran this series in my dpdk-latest (rebased) OVS branch
https://github.com/david-marchand/ovs/commits/dpdk-latest, through
GHA.

Sparse spotted an issue in the rte_flow.h header, following the HIGIG2 update.
https://github.com/david-marchand/ovs/actions/runs/3979243283/jobs/6821543439#step:12:2592

2023-01-22T10:31:37.5911785Z ../../lib/ofp-packet.c: note: in included
file (through ../../lib/netdev-dpdk.h, ../../lib/dp-packet.h):
2023-01-22T10:31:37.5918848Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
error: incorrect type in initializer (different base types)
2023-01-22T10:31:37.5919574Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
expected restricted ovs_be16 [usertype] classification
2023-01-22T10:31:37.5920131Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
got int
2023-01-22T10:31:37.5920720Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
error: incorrect type in initializer (different base types)
2023-01-22T10:31:37.5921341Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
expected restricted ovs_be16 [usertype] vid
2023-01-22T10:31:37.5921866Z
/home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
got int
2023-01-22T10:31:37.6042168Z make[2]: *** [Makefile:4681:
lib/ofp-packet.lo] Error 1
2023-01-22T10:31:37.6042717Z make[2]: *** Waiting for unfinished jobs....

This should be fixed with:

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a215daa836..99f8340f82 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
        .hdr = {
                .ppt1 = {
-                       .classification = 0xffff,
-                       .vid = 0xfff,
+                       .classification = RTE_BE16(0xffff),
+                       .vid = RTE_BE16(0xfff),
                },
        },
 };

However, looking at the existing code, and though I don't know HIGIG2, it
is a bit strange to use a 12-bit mask for vid.


With this fix, OVS sparse check passes:
https://github.com/david-marchand/ovs/actions/runs/3979288868


-- 
David Marchand


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v3 0/8] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (8 preceding siblings ...)
  2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-01-24  9:02 ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (7 more replies)
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                   ` (3 subsequent siblings)
  13 siblings, 8 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items,
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.


v3:
 * Updated the Higig2 protocol flow item assignment to take the endianness
   annotations into account.

v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp


Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 180 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |   6 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 ++++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  48 +++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 178 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c       |  46 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  21 +--
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 ++++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 121 ++++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   7 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 984 insertions(+), 949 deletions(-)

--
2.25.1


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v3 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 2/8] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet headers (including VLAN) structures are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.
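
A minimal usage sketch (illustrative values, not part of the patch
itself): matching a destination MAC and a VLAN ID through the protocol
headers:

    #include <rte_flow.h>

    static const struct rte_flow_item_eth eth_spec = {
        .hdr.dst_addr.addr_bytes = "\x02\x00\x00\x00\x00\x01",
    };
    static const struct rte_flow_item_eth eth_mask = {
        .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
    };
    static const struct rte_flow_item_vlan vlan_spec = {
        .hdr.vlan_tci = RTE_BE16(100), /* VLAN ID 100, PCP/DEI zero */
    };
    static const struct rte_flow_item_vlan vlan_mask = {
        .hdr.vlan_tci = RTE_BE16(0x0fff), /* match the VID bits only */
    };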

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type field``.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
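
A usage sketch for the default mask behaviour documented above
(illustrative only, not part of this patch): passing a NULL mask
selects rte_flow_item_vlan_mask, which matches only the lower 12 bits
(VID) of hdr.vlan_tci.

	struct rte_flow_item_vlan vlan_spec = {
		.hdr.vlan_tci = RTE_BE16(100), /* match VID 100 */
	};
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_VLAN,
		.spec = &vlan_spec,
		.mask = NULL, /* NULL selects the item's default mask */
	};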
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e18ac344ef8e..53b10b51d81a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,6 +58,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In the future (target is DPDK 23.11), the protocol header fields will be
+  removed and only the protocol header struct will remain.
 
   These items are not compliant (not including struct from lib/net/):
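
As a sketch of the migration step described above (abridged, assumed
layout): the legacy fields and the lib/net/ struct share storage
through an anonymous union, e.g. for the Ethernet item:

	struct rte_flow_item_eth {
		union {
			struct {
				/* legacy names, kept for compatibility */
				struct rte_ether_addr dst;
				struct rte_ether_addr src;
				rte_be16_t type;
			};
			struct rte_ether_hdr hdr; /* lib/net/rte_ether.h */
		};
		uint32_t has_vlan:1;
		uint32_t reserved:31;
	};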
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index e0364ef015e0..3a43b7f3ef6f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
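
A side note on the masks in this hunk: hdr.ether_type is big endian
(rte_be16_t), and the full 0xFFFF mask happens to be byte-order
neutral, so assigning it without conversion is safe. Specific values
still need an explicit swap, as in this sketch (hypothetical name):

	static const struct rte_flow_item_eth slow_proto_spec = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
	};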
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If protocol is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
 
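For reference, a minimal sketch (not part of this patch) of the TCI
decomposition used by the i40e hunks above; the TCI layout is
PCP (3 bits) | DEI (1 bit) | VID (12 bits), which is why the priority
checks shift by 13:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

static inline void
vlan_tci_decompose(const struct rte_flow_item_vlan *spec,
		   uint16_t *pcp, uint16_t *dei, uint16_t *vid)
{
	uint16_t tci = rte_be_to_cpu_16(spec->hdr.vlan_tci);

	*pcp = tci >> 13;        /* user priority, cf. the "!= 7" check */
	*dei = (tci >> 12) & 1;  /* drop eligible indicator */
	*vid = tci & 0x0fff;     /* VLAN ID */
}
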
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
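
The rte_ether.h helpers used throughout these hunks classify a MAC
mask in constant time; a small illustrative wrapper (the helper name
and return convention are invented here, each driver applies its own
policy to the three cases):

#include <rte_ether.h>
#include <rte_flow.h>

static int
dmac_mask_kind(const struct rte_flow_item_eth *mask)
{
	if (rte_is_broadcast_ether_addr(&mask->hdr.dst_addr))
		return 2;   /* all bits set: full match requested */
	if (rte_is_zero_ether_addr(&mask->hdr.dst_addr))
		return 0;   /* all bits clear: field ignored */
	return 1;           /* partial mask: driver-specific handling */
}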
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
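
To keep the mapping easy to review, the rename applied in all of these
hunks is purely mechanical: dst/src/type become
hdr.dst_addr/hdr.src_addr/hdr.ether_type for ETH items, and
tci/inner_type become hdr.vlan_tci/hdr.eth_proto for VLAN items.
A self-contained sketch with illustrative mask values:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Full-match masks written with the new header-based field names. */
static const struct rte_flow_item_eth eth_full_mask = {
	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
	.hdr.ether_type = RTE_BE16(0xffff),
};
static const struct rte_flow_item_vlan vlan_full_mask = {
	.hdr.vlan_tci = RTE_BE16(0x0fff),   /* VID bits only */
	.hdr.eth_proto = RTE_BE16(0xffff),
};
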
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
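
Both TCI spellings seen above produce the same big-endian value:
RTE_BE16() is for compile-time constants, while rte_cpu_to_be_16()
converts at run time. A tiny sketch (the helper name is invented):

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow_item_vlan
make_vlan_spec(uint16_t vlan_id)
{
	struct rte_flow_item_vlan spec = {
		/* runtime value: convert explicitly */
		.hdr.vlan_tci = rte_cpu_to_be_16(vlan_id),
	};

	return spec;
}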
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d6c8c8992149..113e71a6aebc 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.25.1



* [PATCH v3 2/8] net: add smaller fields for VXLAN
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers were packing reserved fields
together with other fields in big uint32_t struct members.

More precise definitions are added as a union with the old ones.

The new struct members are smaller in size and have shorter names.
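
For example, writing a VNI no longer requires packing it into a
32-bit big-endian word; the three bytes can be set directly.
A minimal sketch (the helper name is only illustrative):

    #include <rte_byteorder.h>
    #include <rte_vxlan.h>

    static void set_vni(struct rte_vxlan_hdr *vxlan, uint32_t vni)
    {
        /* Old layout: VNI (24) + Reserved (8) in one big-endian word. */
        vxlan->vx_vni = rte_cpu_to_be_32(vni << 8);
        /* New layout: the three VNI bytes are addressable directly. */
        vxlan->vni[0] = (vni >> 16) & 0xff;
        vxlan->vni[1] = (vni >> 8) & 0xff;
        vxlan->vni[2] = vni & 0xff;
    }

Both writes target the same storage through the union (the old form
also clears the trailing reserved byte).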

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.25.1



* [PATCH v3 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 2/8] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 4/8] ethdev: use GRE " Ferruh Yigit
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN header structs (including VXLAN-GPE) are now used in apps
and drivers instead of the redundant flow item fields.
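
As a minimal sketch, matching a given VNI with the new layout looks
as follows (rule attributes, actions and validation are omitted):

    struct rte_flow_item_vxlan vxlan_spec = {
        .hdr.vni = { 0x12, 0x34, 0x56 }, /* VNI 0x123456 */
    };
    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_VXLAN,
        .spec = &vxlan_spec,
        /* The default mask matches the VNI only. */
        .mask = &rte_flow_item_vxlan_mask,
    };

The old spelling ".vni" keeps working through the unnamed union.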

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 53b10b51d81a..638051789d19 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -85,7 +85,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.25.1



* [PATCH v3 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-01-24  9:02   ` [PATCH v3 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 5/8] ethdev: use GTP " Ferruh Yigit
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GRE header struct members are now used in apps and drivers
instead of the redundant flow item fields.
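
As a minimal sketch, matching the GRE protocol type through the new
header member mirrors the items_gen.c hunk below:

    struct rte_flow_item_gre gre_spec = {
        .hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
    };
    struct rte_flow_item_gre gre_mask = {
        .hdr.proto = RTE_BE16(0xffff),
    };
    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_GRE,
        .spec = &gre_spec,
        .mask = &gre_mask,
    };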

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              | 14 ++++++-------
 doc/guides/prog_guide/rte_flow.rst       |  6 +-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++------
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++------
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 ++++----
 drivers/net/mlx5/mlx5_flow.c             | 22 +++++++++-----------
 drivers/net/mlx5/mlx5_flow_dv.c          | 26 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++++----
 drivers/net/nfp/nfp_flow.c               |  9 ++++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++++-------
 lib/net/rte_gre.h                        |  5 +++++
 13 files changed, 81 insertions(+), 70 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     c_rsvd0_ver)),
+					     hdr.c_rsvd0_ver)),
 	},
 	[ITEM_GRE_C_BIT] = {
 		.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x80\x00\x00\x00")),
 	},
 	[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
 		.help = "sequence number bit (S)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x10\x00\x00\x00")),
 	},
 	[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
 		.help = "key bit (K)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x20\x00\x00\x00")),
 	},
 	[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..603e1b866be3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``: header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches a GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
   item. The bit flags need be set with GRE item by application. When the items
   present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 638051789d19..80bf7209065a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3fe..c1e231ce8c49 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
+			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
+			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e6065097..8a6d44da4875 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..3a438f2c9d12 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
 	X(SET,		tag,			v->data,		rte_flow_item_tag) \
 	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
-	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
-	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
+	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
+	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
 	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
 	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
 	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
@@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->c_rsvd0_ver) {
+	if (m->hdr.c_rsvd0_ver) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
@@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
 	}
 
-	if (m->protocol) {
+	if (m->hdr.proto) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ff08a629e2c6..7b19c5f03f5d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..1165b29a5cb7 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 		gre_v = gre_m;
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		gre_m = gre_v;
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
 		 gre_crks_rsvd0_ver_v.c_present &
 		 gre_crks_rsvd0_ver_m.c_present);
@@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
+	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 4ef4f3044515..956df2274b38 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 113e71a6aebc..f3d42b9feb86 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
 		.mask_support = &(const struct rte_flow_item_gre){
-			.c_rsvd0_ver = RTE_BE16(0xa000),
-			.protocol = RTE_BE16(0xffff),
+			.hdr.c = 1,
+			.hdr.k = 1,
+			.hdr.proto = RTE_BE16(0xffff),
 		},
 		.mask_default = &rte_flow_item_gre_mask,
 		.mask_sz = sizeof(struct rte_flow_item_gre),
@@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 	memset(set_tun, 0, act_set_size);
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
@@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv6->hdr.hop_limits, tos);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..3ae89e367c16 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..210b81c99018 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -28,6 +28,8 @@ extern "C" {
  */
 __extension__
 struct rte_gre_hdr {
+	union {
+		struct {
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t res2:4; /**< Reserved */
 	uint16_t s:1;    /**< Sequence Number Present bit */
@@ -45,6 +47,9 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
+		};
+		rte_be16_t c_rsvd0_ver;
+	};
 	uint16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
-- 
2.25.1
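
For completeness, here is a minimal application-side sketch of the new
API introduced above (not part of the patch; the pattern and the TEB
protocol value are only illustrative):

#include <stdint.h>
#include <rte_flow.h>

/* Match GRE carrying Ethernet (TEB) through the new hdr-based fields. */
static const struct rte_flow_item_gre gre_spec = {
	.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
};
static const struct rte_flow_item_gre gre_mask = {
	.hdr.proto = RTE_BE16(UINT16_MAX),
};
static const struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_GRE,
	  .spec = &gre_spec, .mask = &gre_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

The deprecated spellings (.c_rsvd0_ver, .protocol) keep compiling
through the anonymous union, so existing applications build unchanged.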


* [PATCH v3 5/8] ethdev: use GTP protocol struct for flow matching
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-01-24  9:02   ` [PATCH v3 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 6/8] ethdev: use ARP " Ferruh Yigit
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 18 +++++++--------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 77 insertions(+), 74 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 80bf7209065a..b89450b239ef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 3a438f2c9d12..127cebcf3e11 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1165b29a5cb7..d76d6a0ef086 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -10358,11 +10358,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3ae89e367c16..85ca73d1dc04 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1149,23 +1149,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1
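
A minimal sketch of the same change from the application side (not part
of the patch; the TEID value is a placeholder):

#include <stdint.h>
#include <rte_flow.h>

/* Match a GTP-U tunnel endpoint identifier via the new hdr field. */
static const struct rte_flow_item_gtp gtp_spec = {
	.hdr.teid = RTE_BE32(0x12345678), /* placeholder TEID */
};
static const struct rte_flow_item_gtp gtp_mask = {
	.hdr.teid = RTE_BE32(UINT32_MAX),
};

Both names address the same storage through the anonymous union, so
this is equivalent to the old `.teid` initializers replaced above.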


* [PATCH v3 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-01-24  9:02   ` [PATCH v3 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:02   ` [PATCH v3 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
  2023-01-24  9:03   ` [PATCH v3 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``: header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b89450b239ef..8e3683990117 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -64,7 +64,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1
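
A minimal application-side sketch (not part of the patch; the target
address is a placeholder):

#include <stdint.h>
#include <rte_flow.h>

/* Match the ARP target IPv4 address via the new hdr.arp_data fields. */
static const struct rte_flow_item_arp_eth_ipv4 arp_spec = {
	.hdr.arp_data.arp_tip = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
};
static const struct rte_flow_item_arp_eth_ipv4 arp_mask = {
	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
};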


* [PATCH v3 7/8] doc: fix description of L2TPV2 flow item
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-01-24  9:02   ` [PATCH v3 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-01-24  9:02   ` Ferruh Yigit
  2023-01-24  9:03   ` [PATCH v3 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:02 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Wenjun Wu, Jie Wang, Andrew Rybchenko,
	Ferruh Yigit
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol definition
from the directory lib/net/, so it is reflected in the guide.

The section title underlining is also fixed to match the title length.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``: header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.25.1


* [PATCH v3 8/8] net: mark all big endian types
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-01-24  9:02   ` [PATCH v3 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-01-24  9:03   ` Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:03 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16 and 32-bit fields.
This was correct but did not convey the big-endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a215daa83640..99f8340f8274 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(0xffff),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..151e6c641fc5 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /** format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /** ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /** format of protocol address */
+	uint8_t    arp_hlen;     /** length of hardware address */
+	uint8_t    arp_plen;     /** length of protocol address */
+	rte_be16_t arp_opcode;   /** ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /** request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /** response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /** request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /** response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /** response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
 		};
 		rte_be16_t c_rsvd0_ver;
 	};
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.25.1
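
The practical benefit of the annotations is that byte-order checkers
can flag a missing conversion. A minimal sketch, assuming a
sparse-style checker (as in the OVS CI run discussed later in this
thread):

#include <rte_arp.h>
#include <rte_byteorder.h>

static void
fill_opcode(struct rte_arp_hdr *arp)
{
	/* Wrong: stores a host-order constant into a big-endian field;
	 * detectable now that arp_opcode is typed rte_be16_t. */
	arp->arp_opcode = RTE_ARP_OP_REQUEST;

	/* Correct: explicit conversion to big endian. */
	arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
}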


* Re: [PATCH v2 0/8] start cleanup of rte_flow_item_*
  2023-01-22 10:52   ` [PATCH v2 0/8] start cleanup of rte_flow_item_* David Marchand
@ 2023-01-24  9:07     ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-24  9:07 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon, Andrew Rybchenko; +Cc: dev

On 1/22/2023 10:52 AM, David Marchand wrote:
> Hi Ferruh, Thomas,
> 
> On Fri, Jan 20, 2023 at 6:19 PM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>
>> There was a plan to have structures from lib/net/ at the beginning
>> of corresponding flow item structures.
>> Unfortunately this plan has not been followed up so far.
>> This series is a step to make the most used items,
>> compliant with the inheritance design explained above.
>> The old API is kept in anonymous union for compatibility,
>> but the code in drivers and apps is updated to use the new API.
>>
>>
>> v2: (by Ferruh)
>>  * Rebased on latest next-net for v23.03
>>  * 'struct rte_gre_hdr' endianness annotation added to protocol field
>>  * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
>>  * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
>>    field and updated drivers accordingly
>>  * more driver code updated for rte_flow_item_gre
>>  * more driver code updated for rte_flow_item_gtp
>>
>>
>> Cc: David Marchand <david.marchand@redhat.com>
> 
> Note: it is relatively easy to run OVS checks; you only need a GitHub
> fork of ovs with a dpdk-latest branch, plus a small GitHub yml update to
> point at a dpdk repo and branch of yours (see the last commit in my repo
> below).
> 
> I ran this series in my dpdk-latest (rebased) OVS branch
> https://github.com/david-marchand/ovs/commits/dpdk-latest, through
> GHA.
> 
> Sparse spotted an issue on rte_flow.h header, following HIGIG2 update.
> https://github.com/david-marchand/ovs/actions/runs/3979243283/jobs/6821543439#step:12:2592
> 
> 2023-01-22T10:31:37.5911785Z ../../lib/ofp-packet.c: note: in included
> file (through ../../lib/netdev-dpdk.h, ../../lib/dp-packet.h):
> 2023-01-22T10:31:37.5918848Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
> error: incorrect type in initializer (different base types)
> 2023-01-22T10:31:37.5919574Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
> expected restricted ovs_be16 [usertype] classification
> 2023-01-22T10:31:37.5920131Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:645:43:
> got int
> 2023-01-22T10:31:37.5920720Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
> error: incorrect type in initializer (different base types)
> 2023-01-22T10:31:37.5921341Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
> expected restricted ovs_be16 [usertype] vid
> 2023-01-22T10:31:37.5921866Z
> /home/runner/work/ovs/ovs/dpdk-dir/build/include/rte_flow.h:646:32:
> got int
> 2023-01-22T10:31:37.6042168Z make[2]: *** [Makefile:4681:
> lib/ofp-packet.lo] Error 1
> 2023-01-22T10:31:37.6042717Z make[2]: *** Waiting for unfinished jobs....
> 
> This should be fixed with:
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a215daa836..99f8340f82 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
>  static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
>         .hdr = {
>                 .ppt1 = {
> -                       .classification = 0xffff,
> -                       .vid = 0xfff,
> +                       .classification = RTE_BE16(0xffff),
> +                       .vid = RTE_BE16(0xfff),
>                 },
>         },
>  };
> 
> However, looking at the existing code, and though I don't know HIGIG2, it
> is a bit strange to use a 12-bit mask for vid.
> 
> 
> With this fix, OVS sparse check passes:
> https://github.com/david-marchand/ovs/actions/runs/3979288868
> 
> 

Thanks David, I fixed this as you suggested in v3:
https://patches.dpdk.org/project/dpdk/list/?series=26632

@Thomas, @Andrew, can we get this set into this release? What do you think?


* [PATCH v4 0/8] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (9 preceding siblings ...)
  2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
@ 2023-01-26 13:17 ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (7 more replies)
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                   ` (2 subsequent siblings)
  13 siblings, 8 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items,
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.
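
To illustrate the compatibility scheme (a sketch, not code from the
series): the old field names and the new hdr struct share one anonymous
union, so both spellings read and write the same bytes.

#include <assert.h>
#include <rte_flow.h>

static void
check_gre_alias(void)
{
	struct rte_flow_item_gre item = {
		.hdr.proto = RTE_BE16(0x0800),
	};

	/* The deprecated field aliases the new header member. */
	assert(item.protocol == item.hdr.proto);
}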


v4:
 * Fix build error for RHEL7 (gcc 4.8.5) caused by nested struct
   initialization.

v3:
 * Updated Higig2 protocol flow item assignment taking into account endianness
   annotations.

v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp


Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 180 +++++++++++-----------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |   6 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 +++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  48 +++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 184 ++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c       |  46 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  21 +--
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 +++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 121 ++++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   7 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 987 insertions(+), 952 deletions(-)

-- 
2.25.1


* [PATCH v4 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 2/8] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet header structures (including VLAN) are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.
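
For reference, the compatibility scheme keeps the legacy field names
next to the embedded protocol header in an anonymous union. A
simplified sketch of the resulting Ethernet item (based on
lib/ethdev/rte_flow.h after this patch, trimmed for brevity):

    #include <rte_ether.h>

    struct rte_flow_item_eth {
    	union {
    		struct {
    			/* legacy names, kept only for compatibility */
    			struct rte_ether_addr dst;
    			struct rte_ether_addr src;
    			rte_be16_t type;
    		};
    		struct rte_ether_hdr hdr; /* preferred access path */
    	};
    	uint32_t has_vlan:1; /* at least one VLAN in the packet header */
    	uint32_t reserved:31;
    };

Thanks to the union, eth.type and eth.hdr.ether_type alias the same
bytes, which is why drivers and applications can migrate incrementally.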

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type field``.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
 
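As a usage illustration of the updated documentation above, a minimal
sketch of an ETH pattern written against the new hdr fields (flow
attributes, actions and error handling omitted; assumes the standard
rte_flow and rte_ether headers):

    /* Match IPv6 packets by EtherType via the embedded header. */
    struct rte_flow_item_eth eth_spec = {
    	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV6),
    };
    struct rte_flow_item_eth eth_mask = {
    	.hdr.ether_type = RTE_BE16(0xffff),
    };
    struct rte_flow_item pattern[] = {
    	{
    		.type = RTE_FLOW_ITEM_TYPE_ETH,
    		.spec = &eth_spec,
    		.mask = &eth_mask,
    	},
    	{ .type = RTE_FLOW_ITEM_TYPE_END },
    };
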
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e18ac344ef8e..53b10b51d81a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,6 +58,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In future (target is DPDK 23.11), the protocol header fields will be cleaned
+  and only protocol header struct will remain.
 
   These items are not compliant (not including struct from lib/net/):
 
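For applications preparing for that removal, the rename is mechanical;
a before/after sketch using the field names adopted throughout this
series (the TCI value is an arbitrary illustration):

    /* before: legacy flow item fields (scheduled for removal) */
    eth_spec.type        = RTE_BE16(RTE_ETHER_TYPE_IPV4);
    vlan_spec.tci        = RTE_BE16(0x0123);
    vlan_spec.inner_type = RTE_BE16(RTE_ETHER_TYPE_IPV4);

    /* after: the embedded protocol header struct */
    eth_spec.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4);
    vlan_spec.hdr.vlan_tci  = RTE_BE16(0x0123);
    vlan_spec.hdr.eth_proto = RTE_BE16(RTE_ETHER_TYPE_IPV4);
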
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index e0364ef015e0..3a43b7f3ef6f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If protocol is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
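
A note on the conversion pattern visible in the i40e hunks above: the
legacy item fields give way to the protocol header embedded in the
item. A minimal sketch of the post-series accessor style (the function
name and return convention are illustrative, not part of i40e):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    /* Read the Ethernet type and VLAN ID through the embedded headers. */
    static int
    example_parse_eth_vlan(const struct rte_flow_item_eth *eth_spec,
                           const struct rte_flow_item_eth *eth_mask,
                           const struct rte_flow_item_vlan *vlan_spec)
    {
        uint16_t ether_type, vid;

        /* A full ether-type match now checks hdr.ether_type. */
        if (eth_mask->hdr.ether_type != RTE_BE16(0xffff))
            return -1;
        ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
        /* The TCI lives in the embedded rte_vlan_hdr. */
        vid = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) & 0x0fff;
        return ether_type ? (int)vid : -1;
    }
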
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
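
The queue-region check above insists that the top three TCI bits (the
user priority, PCP) are fully masked before reading the same bits from
the spec. A sketch of that bit layout (the helper name is illustrative):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    /* PCP occupies TCI bits 15..13; the VID occupies bits 11..0. */
    static inline uint8_t
    example_vlan_pcp(const struct rte_flow_item_vlan *vlan)
    {
        return rte_be_to_cpu_16(vlan->hdr.vlan_tci) >> 13;
    }
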
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
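
The iavf FDIR parser derives its input set from which mask fields are
non-zero. A condensed sketch of that classification (the bit values
merely stand in for the real IAVF_INSET_* constants):

    #include <rte_flow.h>
    #include <rte_ether.h>

    static uint64_t
    example_eth_input_set(const struct rte_flow_item_eth *mask)
    {
        uint64_t set = 0;

        if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
            set |= 1ULL << 0;    /* stands in for IAVF_INSET_DMAC */
        else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr))
            set |= 1ULL << 1;    /* stands in for IAVF_INSET_SMAC */
        if (mask->hdr.ether_type)
            set |= 1ULL << 2;    /* stands in for IAVF_INSET_ETHERTYPE */
        return set;
    }
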
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
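
Since the flow item now embeds struct rte_ether_hdr, spec and packet
header share the same rte_ether_addr layout, so the copies above could
equally be plain struct assignments. A sketch under that assumption
(the function name is illustrative):

    #include <rte_flow.h>
    #include <rte_ether.h>

    static void
    example_item_to_hdr(const struct rte_flow_item_eth *item,
                        struct rte_ether_hdr *eth)
    {
        eth->src_addr = item->hdr.src_addr;
        eth->dst_addr = item->hdr.dst_addr;
        eth->ether_type = item->hdr.ether_type;
    }
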
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
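
The switch-filter loop above copies MAC bytes into the lookup entry
only where the corresponding mask byte is non-zero. A standalone
sketch of that per-byte masking (all names are illustrative):

    #include <rte_flow.h>
    #include <rte_ether.h>

    static unsigned int
    example_masked_mac_copy(uint8_t val[RTE_ETHER_ADDR_LEN],
                            uint8_t msk[RTE_ETHER_ADDR_LEN],
                            const struct rte_flow_item_eth *spec,
                            const struct rte_flow_item_eth *mask)
    {
        unsigned int j, n = 0;

        for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
            if (!mask->hdr.dst_addr.addr_bytes[j])
                continue;    /* unmasked bytes stay untouched */
            val[j] = spec->hdr.dst_addr.addr_bytes[j];
            msk[j] = mask->hdr.dst_addr.addr_bytes[j];
            n++;
        }
        return n;
    }
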
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
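
The ipn3ke QinQ parser packs two 12-bit VIDs into three key bytes:
outer VID into key[0] plus the high nibble of key[1], inner VID into
the low nibble of key[1] plus key[2]. An equivalent sketch (the helper
name is illustrative):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static void
    example_pack_qinq(uint8_t key[3],
                      const struct rte_flow_item_vlan *outer,
                      const struct rte_flow_item_vlan *inner)
    {
        uint16_t o = rte_be_to_cpu_16(outer->hdr.vlan_tci) & 0x0fff;
        uint16_t i = rte_be_to_cpu_16(inner->hdr.vlan_tci) & 0x0fff;

        key[0] = o >> 4;
        key[1] = (uint8_t)(((o & 0x00f) << 4) | (i >> 8));
        key[2] = i & 0x0ff;
    }
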
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
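
The 0xEFFF constant above clears bit 12 of the TCI mask, i.e. the
DEI/CFI bit, while leaving PCP and VID matchable. As a sketch (the
function name is illustrative):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static rte_be16_t
    example_fdir_vlan_mask(const struct rte_flow_item_vlan *mask)
    {
        /* Keep PCP (bits 15..13) and VID (11..0), drop DEI/CFI (12). */
        return mask->hdr.vlan_tci & rte_cpu_to_be_16(0xEFFF);
    }
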
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
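
As the mlx4 tables above show, the embedded headers remain usable in
designated initializers, so static specs and masks keep their shape.
A minimal sketch (the variable names are illustrative):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static const struct rte_flow_item_eth example_dmac_mask = {
        .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
    };
    static const struct rte_flow_item_vlan example_vid_mask = {
        .hdr.vlan_tci = RTE_BE16(0x0fff),
    };
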
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
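
The mlx5dr definer generates its field setters from a single X-macro
list, so the rename only touches the list entries. A self-contained
sketch of the same technique (all names are illustrative):

    #include <rte_flow.h>
    #include <string.h>

    #define EXAMPLE_FIELDS \
        X(eth_type, hdr.ether_type, rte_flow_item_eth) \
        X(vlan_tci, hdr.vlan_tci,   rte_flow_item_vlan)

    /* Expand one setter per (name, field, item type) tuple. */
    #define X(name, field, item_type) \
    static void \
    example_set_##name(uint8_t *tag, const void *item) \
    { \
        const struct item_type *v = item; \
        memcpy(tag, &v->field, sizeof(v->field)); \
    }
    EXAMPLE_FIELDS
    #undef X
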
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
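
The validation paths above combine spec and mask before interpreting
the ether type, so unmasked spec bits are never trusted. A sketch of
that combination (the function name is illustrative):

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static uint16_t
    example_effective_ether_type(const struct rte_flow_item *item)
    {
        const struct rte_flow_item_eth *spec = item->spec;
        const struct rte_flow_item_eth *mask = item->mask;

        if (spec == NULL || mask == NULL)
            return 0;
        return rte_be_to_cpu_16(spec->hdr.ether_type &
                                mask->hdr.ether_type);
    }
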
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ff2e21c817b4..bd3a8d2a3b2f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 2/8] net: add smaller fields for VXLAN
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers included reserved fields packed
together with other fields in big uint32_t struct members.

More precise definitions are added in a union with the old ones.

The new struct members are smaller in size and have shorter names.
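
As a minimal sketch (the flag and VNI values below are illustrative,
not part of this patch), the byte-oriented members let a header be
filled without shifting and masking the 32-bit words:

    #include <rte_vxlan.h>

    /* Illustrative only: set the I flag and a 24-bit VNI directly,
     * instead of packing them into the vx_flags/vx_vni words. */
    struct rte_vxlan_hdr vxh = {
        .flags = 0x08,               /* I flag, per RFC 7348 */
        .vni = { 0x12, 0x34, 0x56 }, /* example VNI 0x123456 */
    };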

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 2/8] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 4/8] ethdev: use GRE " Ferruh Yigit
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping the old field names.

Apps and drivers now use the VXLAN headers (including VXLAN-GPE)
instead of the redundant fields in the flow items.
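
A minimal usage sketch (the VNI value is chosen arbitrarily for
illustration); the old names remain valid through the unnamed union,
so existing code keeps compiling:

    #include <rte_flow.h>

    /* Illustrative only: VNI-only VXLAN match via the new hdr path. */
    struct rte_flow_item_vxlan vxlan_spec = {
        .hdr.vni = { 0x00, 0x00, 0x2a }, /* example VNI 42 */
    };
    struct rte_flow_item_vxlan vxlan_mask = {
        .hdr.vni = "\xff\xff\xff", /* default mask: VNI only */
    };
    /* vxlan_spec.vni and vxlan_spec.hdr.vni alias the same bytes. */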

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 53b10b51d81a..638051789d19 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -85,7 +85,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-01-26 13:17   ` [PATCH v4 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 5/8] ethdev: use GTP " Ferruh Yigit
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union,
keeping the old field names.

Apps and drivers now use the GRE header struct members
instead of the redundant fields in the flow items.
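
For example (mirroring the items_gen.c hunk below), a GRE match on
the encapsulated protocol now reads as follows; the old spelling
gre_spec.protocol would still compile via the unnamed union:

    #include <rte_flow.h>
    #include <rte_ether.h>
    #include <rte_byteorder.h>

    /* Illustrative only: match GRE carrying transparent
     * Ethernet bridging (TEB). */
    struct rte_flow_item_gre gre_spec = {
        .hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
    };
    struct rte_flow_item_gre gre_mask = {
        .hdr.proto = RTE_BE16(0xffff), /* exact match on protocol */
    };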

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              | 14 +++++------
 doc/guides/prog_guide/rte_flow.rst       |  6 +----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++-----
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 +++----
 drivers/net/mlx5/mlx5_flow.c             | 22 ++++++++---------
 drivers/net/mlx5/mlx5_flow_dv.c          | 30 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 +++----
 drivers/net/nfp/nfp_flow.c               |  9 +++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++------
 lib/net/rte_gre.h                        |  5 ++++
 13 files changed, 83 insertions(+), 72 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     c_rsvd0_ver)),
+					     hdr.c_rsvd0_ver)),
 	},
 	[ITEM_GRE_C_BIT] = {
 		.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x80\x00\x00\x00")),
 	},
 	[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
 		.help = "sequence number bit (S)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x10\x00\x00\x00")),
 	},
 	[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
 		.help = "key bit (K)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x20\x00\x00\x00")),
 	},
 	[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..603e1b866be3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``: header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches a GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
   item. The bit flags need be set with GRE item by application. When the items
   present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 638051789d19..80bf7209065a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3fe..c1e231ce8c49 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
+			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
+			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e6065097..8a6d44da4875 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..3a438f2c9d12 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
 	X(SET,		tag,			v->data,		rte_flow_item_tag) \
 	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
-	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
-	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
+	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
+	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
 	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
 	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
 	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
@@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->c_rsvd0_ver) {
+	if (m->hdr.c_rsvd0_ver) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
@@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
 	}
 
-	if (m->protocol) {
+	if (m->hdr.proto) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ff08a629e2c6..7b19c5f03f5d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..2b9c2ba6a4b5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8984,7 +8984,7 @@ static void
 flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 			   uint64_t pattern_flags, uint32_t key_type)
 {
-	static const struct rte_flow_item_gre empty_gre = {0,};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	const struct rte_flow_item_gre *gre_m = item->mask;
 	const struct rte_flow_item_gre *gre_v = item->spec;
 	void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
@@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 		gre_v = gre_m;
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		gre_m = gre_v;
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
 		 gre_crks_rsvd0_ver_v.c_present &
 		 gre_crks_rsvd0_ver_m.c_present);
@@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9072,7 +9072,7 @@ flow_dv_translate_item_gre_option(void *key,
 	const struct rte_flow_item_gre_opt *option_v = item->spec;
 	const struct rte_flow_item_gre *gre_m = gre_item->mask;
 	const struct rte_flow_item_gre *gre_v = gre_item->spec;
-	static const struct rte_flow_item_gre empty_gre = {0};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	struct rte_flow_item gre_key_item;
 	uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v;
 	uint16_t protocol_m, protocol_v;
@@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
+	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 4ef4f3044515..956df2274b38 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bd3a8d2a3b2f..0994fdeeb49f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
 		.mask_support = &(const struct rte_flow_item_gre){
-			.c_rsvd0_ver = RTE_BE16(0xa000),
-			.protocol = RTE_BE16(0xffff),
+			.hdr.c = 1,
+			.hdr.k = 1,
+			.hdr.proto = RTE_BE16(0xffff),
 		},
 		.mask_default = &rte_flow_item_gre_mask,
 		.mask_sz = sizeof(struct rte_flow_item_gre),
@@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 	memset(set_tun, 0, act_set_size);
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
@@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv6->hdr.hop_limits, tos);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..3ae89e367c16 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..210b81c99018 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -28,6 +28,8 @@ extern "C" {
  */
 __extension__
 struct rte_gre_hdr {
+	union {
+		struct {
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t res2:4; /**< Reserved */
 	uint16_t s:1;    /**< Sequence Number Present bit */
@@ -45,6 +47,9 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
+		};
+		rte_be16_t c_rsvd0_ver;
+	};
 	uint16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 5/8] ethdev: use GTP protocol struct for flow matching
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-01-26 13:17   ` [PATCH v4 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 6/8] ethdev: use ARP " Ferruh Yigit
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
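
For illustration only (not part of this patch), a minimal sketch of
matching on a tunnel endpoint identifier through the new hdr member;
the TEID value is an arbitrary example:

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static const struct rte_flow_item_gtp gtp_spec = {
        .hdr.teid = RTE_BE32(0x12345), /* was .teid before this patch */
    };
    static const struct rte_flow_item_gtp gtp_mask = {
        .hdr.teid = RTE_BE32(UINT32_MAX),
    };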

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 20 ++++++++---------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 78 insertions(+), 75 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``: header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 80bf7209065a..b89450b239ef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 3a438f2c9d12..127cebcf3e11 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2b9c2ba6a4b5..bdd56cf0f9ae 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -9320,7 +9320,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 				 const uint64_t pattern_flags,
 				 uint32_t key_type)
 {
-	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, };
+	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {{{0}}};
 	const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask;
 	const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec;
 	/* The item was validated to be on the outer side */
@@ -10358,11 +10358,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3ae89e367c16..85ca73d1dc04 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1149,23 +1149,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-01-26 13:17   ` [PATCH v4 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.
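
For illustration only (not part of this patch), a minimal sketch of
the old-to-new field mapping (.sha/.spa/.tha/.tpa now alias
hdr.arp_data.arp_{sha,sip,tha,tip}); the address is an arbitrary
example:

    #include <rte_flow.h>
    #include <rte_byteorder.h>
    #include <rte_ip.h>

    static const struct rte_flow_item_arp_eth_ipv4 arp_spec = {
        .hdr.arp_data.arp_sip = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
    };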

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``: header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b89450b239ef..8e3683990117 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -64,7 +64,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 7/8] doc: fix description of L2TPV2 flow item
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-01-26 13:17   ` [PATCH v4 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  2023-01-26 13:17   ` [PATCH v4 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Jie Wang, Ferruh Yigit,
	Andrew Rybchenko, Wenjun Wu
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol definition
from the directory lib/net/, so the guide is updated to reflect it.

Section title underlining is also fixed for this item and the following PPP item.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``: header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v4 8/8] net: mark all big endian types
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-01-26 13:17   ` [PATCH v4 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-01-26 13:17   ` Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 13:17 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16 and 32-bit fields.
It was correct but did not convey the big-endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.
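
For illustration only (not part of this patch), the annotation keeps
the same wire layout and only documents intent: constants can use the
RTE_BE16/RTE_BE32 macros and runtime values an explicit conversion,
e.g. with the ARP header:

    #include <rte_arp.h>
    #include <rte_byteorder.h>

    static const struct rte_arp_hdr arp_req = {
        .arp_hardware = RTE_BE16(RTE_ARP_HRD_ETHER),
        .arp_opcode = RTE_BE16(RTE_ARP_OP_REQUEST),
    };
    /* runtime value: hdr->arp_opcode = rte_cpu_to_be_16(opcode); */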

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a215daa83640..99f8340f8274 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(0xffff),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..151e6c641fc5 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /**< format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /**< ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /**< format of protocol address */
+	uint8_t    arp_hlen;     /**< length of hardware address */
+	uint8_t    arp_plen;     /**< length of protocol address */
+	rte_be16_t arp_opcode;   /**< ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /**< request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /**< response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /**< request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /**< response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /**< request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /**< response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
 		};
 		rte_be16_t c_rsvd0_ver;
 	};
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 0/8] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (10 preceding siblings ...)
  2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-01-26 16:18 ` Ferruh Yigit
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (7 more replies)
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
  13 siblings, 8 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:18 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items,
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.
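
For illustration only (not part of the series), a minimal sketch with
the Ethernet item showing that both spellings stay valid; the
EtherType value is an arbitrary example:

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static const struct rte_flow_item_eth eth_new = {
        .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4), /* new API */
    };
    static const struct rte_flow_item_eth eth_old = {
        .type = RTE_BE16(RTE_ETHER_TYPE_IPV4), /* old name, same storage */
    };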

v5:
 * Fix more RHEL7 build error

v4:
 * Fix build error for RHEL7 (gcc 4.8.5) caused by nested struct
   initialization.

v3:
 * Updated Higig2 protocol flow item assignment taking into account endianness
   annotations.

v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly (see the sketch after this list)
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp
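
For illustration only (not part of the series), the combined field
mentioned in the v2 notes above and the bit-fields are two views of
the same 16 bits, e.g. for the GRE K bit:

    #include <rte_gre.h>
    #include <rte_byteorder.h>

    static const struct rte_gre_hdr gre_by_bit = { .k = 1 };
    static const struct rte_gre_hdr gre_by_word = {
        .c_rsvd0_ver = RTE_BE16(0x2000), /* same K bit, network order */
    };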

Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 180 +++++++++++-----------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |   6 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 +++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  48 +++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 184 ++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c       |  48 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  21 +--
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 +++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 121 ++++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   7 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 988 insertions(+), 953 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-01-26 16:18   ` Ferruh Yigit
  2023-01-27 14:33     ` Niklas Söderlund
                       ` (2 more replies)
  2023-01-26 16:18   ` [PATCH v5 2/8] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (6 subsequent siblings)
  7 siblings, 3 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet and VLAN header structures are now used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation notices.
Some protocols are not even defined in lib/net/ yet.
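
Concretely, the migration pattern looks roughly like this (a simplified
sketch, not the exact lib/ethdev/rte_flow.h definition):

	#include <rte_ether.h>

	/* The legacy field names stay reachable through an anonymous
	 * union (a C11 feature), so existing code keeps compiling,
	 * while new code uses the lib/net/ protocol struct via hdr. */
	struct rte_flow_item_eth_sketch {
		union {
			struct { /* deprecated, kept for compatibility */
				struct rte_ether_addr dst;
				struct rte_ether_addr src;
				rte_be16_t type;
			};
			struct rte_ether_hdr hdr; /* preferred access path */
		};
		uint32_t has_vlan:1; /* rte_flow meta bit, not part of the header */
		uint32_t reserved:31;
	};

A driver that previously read eth_spec->type now reads
eth_spec->hdr.ether_type, as the diff below shows throughout.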

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type field``.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e18ac344ef8e..53b10b51d81a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,6 +58,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In the future (target: DPDK 23.11), the protocol header fields will be
+  cleaned up and only the protocol header struct will remain.
 
   These items are not compliant (not including struct from lib/net/):
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index e0364ef015e0..3a43b7f3ef6f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If protocol is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
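
For readers tracing the mechanical rename above: both spellings address
the same bytes, because the updated item wraps the legacy fields and the
lib/net/ header in an anonymous union. A rough sketch of the resulting
layout (see lib/ethdev/rte_flow.h in this series for the authoritative
definition):

	struct rte_flow_item_eth {
		union {
			struct {
				/* Legacy names, kept for compatibility. */
				struct rte_ether_addr dst;
				struct rte_ether_addr src;
				rte_be16_t type;
			};
			/* New access path: dst_addr, src_addr, ether_type. */
			struct rte_ether_hdr hdr;
		};
		uint32_t has_vlan:1;
		uint32_t reserved:31;
	};

So eth_mask->dst and eth_mask->hdr.dst_addr are strictly equivalent;
only the spelling changes, which is why every hunk here is a plain
substitution.
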
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
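
On the two hunks above: the >> 13 extracts the 3-bit PCP (user
priority) from the 802.1Q TCI, whose layout is PCP (bits 15-13),
DEI/CFI (bit 12), VID (bits 11-0), so the "!= 7" mask check simply
requires all three priority bits to be matched. A minimal helper
showing the same extraction with the renamed field:

	#include <rte_byteorder.h>
	#include <rte_flow.h>

	/* TCI = PCP(3) | DEI(1) | VID(12); priority is the top 3 bits. */
	static inline uint8_t
	vlan_item_pcp(const struct rte_flow_item_vlan *v)
	{
		return rte_be_to_cpu_16(v->hdr.vlan_tci) >> 13;
	}
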
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
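
The rejected-mask branch above is an instance of an idiom that recurs
throughout this series: a zero mask means "do not match this field",
and anything other than all-ones is an unsupported partial match. A
sketch of the common check under the new field name (the helper name
is illustrative, not part of the patch):

	#include <errno.h>
	#include <rte_byteorder.h>
	#include <rte_flow.h>

	/* 0 = field ignored, 1 = exact match, -EINVAL = partial mask. */
	static int
	eth_type_mask_class(const struct rte_flow_item_eth *mask)
	{
		if (mask->hdr.ether_type == 0)
			return 0;
		if (mask->hdr.ether_type != RTE_BE16(0xffff))
			return -EINVAL;
		return 1;
	}
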
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
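
Side note, not part of this series: now that the item embeds
struct rte_ether_hdr, the two memcpy() calls in parse_eth_item() could
arguably become plain struct assignments (addresses only; the
EtherType is deliberately left untouched, as in the original):

	/* Hypothetical simplification, equivalent to the memcpy pair. */
	eth->src_addr = item->hdr.src_addr;
	eth->dst_addr = item->hdr.dst_addr;
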
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
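
The byte-indexed loops earlier in this file reflect the rte_flow
contract that masks apply per bit: an address byte participates in
matching only where the corresponding mask byte is non-zero, and the
effective key is spec AND mask. In generic form:

	#include <rte_ether.h>

	/* Effective match key for one address: key[i] = spec[i] & mask[i]. */
	static void
	eth_addr_match_key(uint8_t key[RTE_ETHER_ADDR_LEN],
			   const struct rte_ether_addr *spec,
			   const struct rte_ether_addr *mask)
	{
		unsigned int i;

		for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
			key[i] = spec->addr_bytes[i] & mask->addr_bytes[i];
	}
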
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
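
For clarity on the ipn3ke QinQ hunk above: the parser packs the two
12-bit VIDs into a 3-byte key, outer VID in key[0] plus the high
nibble of key[1], inner VID in the low nibble of key[1] plus key[2].
Worked example with illustrative values:

	outer VID = 0xABC, inner VID = 0xDEF
	key[0] = 0xAB	/* outer bits 11-4 */
	key[1] = 0xCD	/* outer bits 3-0 | inner bits 11-8 */
	key[2] = 0xEF	/* inner bits 7-0 */
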
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
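
The recurring &= rte_cpu_to_be_16(0xEFFF) above clears bit 12 of the
TCI mask, the CFI/DEI bit, while keeping PCP (bits 15-13) and VID
(bits 11-0); the hardware presumably does not match on DEI. For
reference:

	0xEFFF == (uint16_t)~(1u << 12)	/* all TCI bits except CFI/DEI */
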
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
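
The mlx4 classification above relies on the least-significant bit of
the first destination-MAC byte being the IEEE 802 group (multicast)
bit: a mask of 01:00:00:00:00:00 matches exactly "all multicast", and
testing that bit in the spec separates group from unicast addresses.
rte_ether.h exposes the same test:

	#include <rte_ether.h>

	static inline int
	dmac_is_group(const struct rte_flow_item_eth *spec)
	{
		/* Same as (spec->hdr.dst_addr.addr_bytes[0] & 1) != 0 */
		return rte_is_multicast_ether_addr(&spec->hdr.dst_addr);
	}
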
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
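
The _47_16/_15_0 setter pairs in the X-macro above mirror how the
definer matches a 48-bit MAC: a 32-bit high part (bytes 0-3, address
bits 47-16) plus a 16-bit low part (bytes 4-5, bits 15-0), hence the
two separate memcmp() windows on the mask. A sketch of the split,
using memcpy() to stay alignment-safe:

	#include <stdint.h>
	#include <string.h>

	static void
	mac_split(const uint8_t mac[6], uint32_t *hi32, uint16_t *lo16)
	{
		memcpy(hi32, &mac[0], 4);	/* bits 47..16, network order */
		memcpy(lo16, &mac[4], 2);	/* bits 15..0, network order */
	}
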
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
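
One harmless nit in the hunk above: lacp_mask sets
.hdr.ether_type = 0xffff without RTE_BE16(), unlike lacp_spec.
All-ones is byte-order symmetric so the value is identical either way,
but the explicit spelling would be more consistent and keeps
endianness checkers quiet:

	.hdr.ether_type = RTE_BE16(0xffff),
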
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
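
The MLX5DV_FLOW_VLAN_*_MASK_BE tests above treat PCP and VID as
independently maskable TCI sub-fields and accept each one only when it
is fully masked. Assuming the usual 802.1Q layout, the constants
correspond to (names here are illustrative; the driver uses the
mlx5dv.h definitions):

	#define VLAN_PCP_MASK_BE RTE_BE16(0xe000)	/* bits 15-13 */
	#define VLAN_VID_MASK_BE RTE_BE16(0x0fff)	/* bits 11-0 */
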
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
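
A reminder on the spec/mask pairs in this file: an all-zero mask
compares no bits at all, so spec == mask == zeros is the canonical
"match every packet" item used for promiscuous mode, while the unicast
tables pair a full DMAC mask with a per-MAC spec as in
__flow_hw_ctrl_flows_unicast() above:

	/* Catch-all: nothing is compared. */
	static const struct rte_flow_item_eth match_all;
	/* Exact DMAC: all 48 bits compared, spec filled in at run time. */
	static const struct rte_flow_item_eth dmac_mask = {
		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
	};
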
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
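
The repeated val &= mask lines above (vlan_tag, ether_type) enforce
the ibv_flow convention that specified values lie inside the mask;
value bits outside the mask are meaningless, so clearing them keeps
the spec canonical:

	#include <rte_byteorder.h>

	/* Canonicalize: keep only the value bits the mask compares. */
	static inline rte_be16_t
	canonicalize16(rte_be16_t val, rte_be16_t mask)
	{
		return val & mask;	/* guarantees (val & mask) == val */
	}
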
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ff2e21c817b4..bd3a8d2a3b2f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 2/8] net: add smaller fields for VXLAN
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-01-26 16:18   ` Ferruh Yigit
  2023-01-26 16:18   ` [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:18 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers included reserved fields packed
together with other fields in big uint32_t struct members.

More precise definitions are added as a union with the old ones.

The new struct members are smaller in size and have shorter names.
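
As an illustration, the 24-bit VNI can now be addressed byte by byte
instead of being shifted in and out of a 32-bit big-endian word.
A minimal sketch, assuming a hypothetical helper name and a host-order
VNI argument:

	#include <string.h>
	#include <rte_vxlan.h>

	/* Hypothetical helper: fill a VXLAN header for a host-order VNI. */
	static void fill_vxlan_hdr(struct rte_vxlan_hdr *vxlan, uint32_t vni)
	{
		memset(vxlan, 0, sizeof(*vxlan));   /* clear reserved fields */
		vxlan->flags = 0x08;                /* I flag, per RFC 7348 */
		vxlan->vni[0] = (vni >> 16) & 0xff; /* network byte order */
		vxlan->vni[1] = (vni >> 8) & 0xff;
		vxlan->vni[2] = vni & 0xff;
		/* The old vx_flags/vx_vni words stay usable via the union. */
	}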

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-26 16:18   ` [PATCH v5 2/8] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-01-26 16:18   ` Ferruh Yigit
  2023-02-01 17:41     ` Ori Kam
  2023-02-02  9:52     ` Andrew Rybchenko
  2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
                     ` (4 subsequent siblings)
  7 siblings, 2 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:18 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should reuse the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping the old field names.

The VXLAN header struct members (including VXLAN-GPE) are now used in
apps and drivers instead of the redundant fields in the flow items.
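
For example, a VXLAN matching pattern that previously filled the
top-level vni field can set it through the embedded header, while the
anonymous union keeps the old spelling compiling. A minimal sketch
(the VNI value is arbitrary):

	struct rte_flow_item_vxlan vxlan_spec = {
		.hdr.flags = 0x08,               /* I flag */
		.hdr.vni = { 0x12, 0x34, 0x56 }, /* 24-bit VNI, network order */
	};
	struct rte_flow_item_vxlan vxlan_mask = {
		.hdr.vni = "\xff\xff\xff",       /* match the VNI only */
	};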

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 53b10b51d81a..638051789d19 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -85,7 +85,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-01-26 16:18   ` [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-01-26 16:19   ` Ferruh Yigit
  2023-01-27 14:34     ` Niklas Söderlund
                       ` (2 more replies)
  2023-01-26 16:19   ` [PATCH v5 5/8] ethdev: use GTP " Ferruh Yigit
                     ` (3 subsequent siblings)
  7 siblings, 3 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:19 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GRE header struct members are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
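
Since the old names are kept as an unnamed-union alias of struct
rte_gre_hdr, both spellings address the same bytes. A short migration
sketch (illustrative only, using the TEB protocol value that appears in
items_gen.c below):

  #include <rte_byteorder.h>
  #include <rte_ether.h>
  #include <rte_flow.h>

  /* Deprecated spelling: sets the 16-bit protocol type at offset 2. */
  static const struct rte_flow_item_gre gre_spec_old = {
  	.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
  };

  /* Preferred spelling: the same field through the rte_gre_hdr member. */
  static const struct rte_flow_item_gre gre_spec_new = {
  	.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
  };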
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              | 14 +++++------
 doc/guides/prog_guide/rte_flow.rst       |  6 +----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++-----
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 +++----
 drivers/net/mlx5/mlx5_flow.c             | 22 ++++++++---------
 drivers/net/mlx5/mlx5_flow_dv.c          | 30 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       | 10 ++++----
 drivers/net/nfp/nfp_flow.c               |  9 +++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++------
 lib/net/rte_gre.h                        |  5 ++++
 13 files changed, 84 insertions(+), 73 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     c_rsvd0_ver)),
+					     hdr.c_rsvd0_ver)),
 	},
 	[ITEM_GRE_C_BIT] = {
 		.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x80\x00\x00\x00")),
 	},
 	[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
 		.help = "sequence number bit (S)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x10\x00\x00\x00")),
 	},
 	[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
 		.help = "key bit (K)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x20\x00\x00\x00")),
 	},
 	[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..603e1b866be3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``:  header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches a GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
   item. The bit flags need be set with GRE item by application. When the items
   present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 638051789d19..80bf7209065a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3fe..c1e231ce8c49 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
+			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
+			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e6065097..8a6d44da4875 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..3a438f2c9d12 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
 	X(SET,		tag,			v->data,		rte_flow_item_tag) \
 	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
-	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
-	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
+	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
+	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
 	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
 	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
 	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
@@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->c_rsvd0_ver) {
+	if (m->hdr.c_rsvd0_ver) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
@@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
 	}
 
-	if (m->protocol) {
+	if (m->hdr.proto) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ff08a629e2c6..7b19c5f03f5d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..2b9c2ba6a4b5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8984,7 +8984,7 @@ static void
 flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 			   uint64_t pattern_flags, uint32_t key_type)
 {
-	static const struct rte_flow_item_gre empty_gre = {0,};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	const struct rte_flow_item_gre *gre_m = item->mask;
 	const struct rte_flow_item_gre *gre_v = item->spec;
 	void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
@@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 		gre_v = gre_m;
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		gre_m = gre_v;
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
 		 gre_crks_rsvd0_ver_v.c_present &
 		 gre_crks_rsvd0_ver_m.c_present);
@@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9072,7 +9072,7 @@ flow_dv_translate_item_gre_option(void *key,
 	const struct rte_flow_item_gre_opt *option_v = item->spec;
 	const struct rte_flow_item_gre *gre_m = gre_item->mask;
 	const struct rte_flow_item_gre *gre_v = gre_item->spec;
-	static const struct rte_flow_item_gre empty_gre = {0};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	struct rte_flow_item gre_key_item;
 	uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v;
 	uint16_t protocol_m, protocol_v;
@@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
+	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 4ef4f3044515..291369d437d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -930,7 +930,7 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		.size = size,
 	};
 #else
-	static const struct rte_flow_item_gre empty_gre = {0,};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	const struct rte_flow_item_gre *spec = item->spec;
 	const struct rte_flow_item_gre *mask = item->mask;
 	unsigned int size = sizeof(struct ibv_flow_spec_gre);
@@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bd3a8d2a3b2f..0994fdeeb49f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
 		.mask_support = &(const struct rte_flow_item_gre){
-			.c_rsvd0_ver = RTE_BE16(0xa000),
-			.protocol = RTE_BE16(0xffff),
+			.hdr.c = 1,
+			.hdr.k = 1,
+			.hdr.proto = RTE_BE16(0xffff),
 		},
 		.mask_default = &rte_flow_item_gre_mask,
 		.mask_sz = sizeof(struct rte_flow_item_gre),
@@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 	memset(set_tun, 0, act_set_size);
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
@@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv6->hdr.hop_limits, tos);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..3ae89e367c16 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..210b81c99018 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -28,6 +28,8 @@ extern "C" {
  */
 __extension__
 struct rte_gre_hdr {
+	union {
+		struct {
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t res2:4; /**< Reserved */
 	uint16_t s:1;    /**< Sequence Number Present bit */
@@ -45,6 +47,9 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
+		};
+		rte_be16_t c_rsvd0_ver;
+	};
 	uint16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
-- 
2.25.1
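
With the c_rsvd0_ver union added to rte_gre.h above, the presence bits
can be read through named bit-fields instead of masking the raw word.
A sketch of the two equivalent tests for the key-present bit, the pair
that the mlx5 hunks above replace (0x2000 is the K bit of the first
big-endian 16-bit word):

  #include <stdbool.h>
  #include <rte_byteorder.h>
  #include <rte_gre.h>

  /* The bit-field view and the raw word view alias the same storage,
   * so the two tests below always agree. */
  static inline bool
  gre_key_present(const struct rte_gre_hdr *hdr)
  {
  	bool via_bitfield = hdr->k != 0;
  	bool via_raw_word = (hdr->c_rsvd0_ver & RTE_BE16(0x2000)) != 0;

  	return via_bitfield && via_raw_word;
  }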


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 5/8] ethdev: use GTP protocol struct for flow matching
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-01-26 16:19   ` Ferruh Yigit
  2023-02-02  9:54     ` Andrew Rybchenko
  2023-01-26 16:19   ` [PATCH v5 6/8] ethdev: use ARP " Ferruh Yigit
                     ` (2 subsequent siblings)
  7 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:19 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
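
As with GRE, the renamed members alias the old ones. A minimal sketch
of a TEID-only match with the new names (the TEID value is a
placeholder, mirroring the default mask below):

  #include <stdint.h>
  #include <rte_byteorder.h>
  #include <rte_flow.h>

  /* Match only the tunnel endpoint identifier, as the default mask does. */
  static const struct rte_flow_item_gtp gtp_spec = {
  	.hdr.teid = RTE_BE32(0x12345), /* placeholder TEID */
  };
  static const struct rte_flow_item_gtp gtp_mask = {
  	.hdr.teid = RTE_BE32(UINT32_MAX),
  };

A flow item then uses these with type RTE_FLOW_ITEM_TYPE_GTP, as the
items_gen.c hunk below does.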
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 20 ++++++++---------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 78 insertions(+), 75 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 80bf7209065a..b89450b239ef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -68,7 +68,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 3a438f2c9d12..127cebcf3e11 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2b9c2ba6a4b5..bdd56cf0f9ae 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -9320,7 +9320,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 				 const uint64_t pattern_flags,
 				 uint32_t key_type)
 {
-	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, };
+	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {{{0}}};
 	const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask;
 	const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec;
 	/* The item was validated to be on the outer side */
@@ -10358,11 +10358,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3ae89e367c16..85ca73d1dc04 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1149,23 +1149,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-01-26 16:19   ` [PATCH v5 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-01-26 16:19   ` Ferruh Yigit
  2023-02-01 17:46     ` Ori Kam
  2023-02-02  9:55     ` Andrew Rybchenko
  2023-01-26 16:19   ` [PATCH v5 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
  2023-01-26 16:19   ` [PATCH v5 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 2 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:19 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
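
One difference from the other items: the old flat fields do not map
one-to-one onto top-level rte_arp_hdr members, since the addresses live
in the nested hdr.arp_data struct. A short sketch of the renamed mask
initializers, mirroring the default mask below:

  #include <stdint.h>
  #include <rte_byteorder.h>
  #include <rte_flow.h>

  /* .sha/.spa/.tha/.tpa become nested arp_data fields. */
  static const struct rte_flow_item_arp_eth_ipv4 arp_mask = {
  	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
  	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
  	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
  	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
  };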
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``:  header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b89450b239ef..8e3683990117 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -64,7 +64,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 7/8] doc: fix description of L2TPV2 flow item
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-01-26 16:19   ` [PATCH v5 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-01-26 16:19   ` Ferruh Yigit
  2023-02-02  9:56     ` Andrew Rybchenko
  2023-01-26 16:19   ` [PATCH v5 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:19 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Wenjun Wu, Jie Wang, Andrew Rybchenko,
	Ferruh Yigit
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol definition
from the directory lib/net/, so the guide is updated to reflect it.

Section title underlining is also fixed here and in the PPP section below.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``:  header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v5 8/8] net: mark all big endian types
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-01-26 16:19   ` [PATCH v5 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-01-26 16:19   ` Ferruh Yigit
  2023-02-02 10:01     ` Andrew Rybchenko
  7 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-01-26 16:19 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16-bit and 32-bit fields.
This was correct but did not convey the big-endian nature of these fields.

As with the other protocols defined in this directory,
all such fields are now explicitly marked as big endian.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
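
The rte_be16_t and rte_be32_t typedefs do not change the binary layout;
they only document byte order and can let annotation-aware checkers flag
missing conversions. A small usage sketch with the retyped ARP header:

  #include <rte_arp.h>
  #include <rte_byteorder.h>

  /* Host-order constants now visibly pass through a conversion
   * before being stored in a big-endian field. */
  static inline void
  arp_set_request(struct rte_arp_hdr *arp)
  {
  	arp->arp_hardware = rte_cpu_to_be_16(RTE_ARP_HRD_ETHER);
  	arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
  }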
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a215daa83640..99f8340f8274 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(0xffff),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..151e6c641fc5 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /** format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /** ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /** format of protocol address */
+	uint8_t    arp_hlen;     /** length of hardware address */
+	uint8_t    arp_plen;     /** length of protocol address */
+	rte_be16_t arp_opcode;   /** ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /** request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /** response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /** request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /** response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /** response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
 		};
 		rte_be16_t c_rsvd0_ver;
 	};
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-01-27 14:33     ` Niklas Söderlund
  2023-02-01 17:34     ` Ori Kam
  2023-02-02  9:51     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Niklas Söderlund @ 2023-01-27 14:33 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Andrew Rybchenko, Jiawen Wu, Jian Wang, David Marchand, dev

Hi Ferruh and Thomas,

Thanks for your work.

On 2023-01-26 16:18:57 +0000, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> The Ethernet headers (including VLAN) structures are used
> instead of the redundant fields in the flow items.
> 
> The remaining protocols to clean up are listed for future work
> in the deprecation list.
> Some protocols are not even defined in the directory net yet.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  app/test-flow-perf/items_gen.c           |   4 +-
>  app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
>  doc/guides/prog_guide/rte_flow.rst       |   7 +-
>  doc/guides/rel_notes/deprecation.rst     |   2 +
>  drivers/net/bnxt/bnxt_flow.c             |  42 +++----
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
>  drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
>  drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
>  drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
>  drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
>  drivers/net/e1000/igb_flow.c             |  14 +--
>  drivers/net/enic/enic_flow.c             |  24 ++--
>  drivers/net/enic/enic_fm_flow.c          |  16 +--
>  drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
>  drivers/net/hns3/hns3_flow.c             |  28 ++---
>  drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
>  drivers/net/i40e/i40e_hash.c             |   4 +-
>  drivers/net/iavf/iavf_fdir.c             |  10 +-
>  drivers/net/iavf/iavf_fsub.c             |  10 +-
>  drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
>  drivers/net/ice/ice_acl_filter.c         |  20 ++--
>  drivers/net/ice/ice_fdir_filter.c        |  14 +--
>  drivers/net/ice/ice_switch_filter.c      |  34 +++---
>  drivers/net/igc/igc_flow.c               |   8 +-
>  drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
>  drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
>  drivers/net/mlx4/mlx4_flow.c             |  38 +++---
>  drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
>  drivers/net/mlx5/mlx5_flow.c             |  24 ++--
>  drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
>  drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
>  drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
>  drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
>  drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
>  drivers/net/nfp/nfp_flow.c               |  12 +-

For NFP,

Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>

>  drivers/net/sfc/sfc_flow.c               |  46 ++++----
>  drivers/net/sfc/sfc_mae.c                |  38 +++---
>  drivers/net/tap/tap_flow.c               |  58 +++++-----
>  drivers/net/txgbe/txgbe_flow.c           |  28 ++---
>  39 files changed, 618 insertions(+), 619 deletions(-)
> 
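(Aside for readers following the migration: a minimal sketch, not part of
the patch, of what the union-based compatibility means in practice. The
legacy spellings stay valid for now; the hdr-based ones are the target.)

  /* Match IPv4 EtherType; needs rte_flow.h, rte_ether.h, rte_byteorder.h. */
  struct rte_flow_item_eth eth_spec = {
          /* legacy spelling still compiles: .type = RTE_BE16(0x0800) */
          .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
  };
  struct rte_flow_item_eth eth_mask = {
          .hdr.ether_type = RTE_BE16(0xffff), /* exact match on EtherType */
  };
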
> diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
> index a73de9031f54..b7f51030a119 100644
> --- a/app/test-flow-perf/items_gen.c
> +++ b/app/test-flow-perf/items_gen.c
> @@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
>  	__rte_unused struct additional_para para)
>  {
>  	static struct rte_flow_item_vlan vlan_spec = {
> -		.tci = RTE_BE16(VLAN_VALUE),
> +		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
>  	};
>  	static struct rte_flow_item_vlan vlan_mask = {
> -		.tci = RTE_BE16(0xffff),
> +		.hdr.vlan_tci = RTE_BE16(0xffff),
>  	};
>  
>  	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 88108498e0c3..694a7eb647c5 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
>  		.name = "dst",
>  		.help = "destination MAC",
>  		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
> -		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
> +		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
>  	},
>  	[ITEM_ETH_SRC] = {
>  		.name = "src",
>  		.help = "source MAC",
>  		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
> -		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
> +		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
>  	},
>  	[ITEM_ETH_TYPE] = {
>  		.name = "type",
>  		.help = "EtherType",
>  		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
> -		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
> +		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
>  	},
>  	[ITEM_ETH_HAS_VLAN] = {
>  		.name = "has_vlan",
> @@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
>  		.help = "tag control information",
>  		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
> -		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
> +		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
>  	},
>  	[ITEM_VLAN_PCP] = {
>  		.name = "pcp",
> @@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
> -						  tci, "\xe0\x00")),
> +						  hdr.vlan_tci, "\xe0\x00")),
>  	},
>  	[ITEM_VLAN_DEI] = {
>  		.name = "dei",
> @@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
> -						  tci, "\x10\x00")),
> +						  hdr.vlan_tci, "\x10\x00")),
>  	},
>  	[ITEM_VLAN_VID] = {
>  		.name = "vid",
> @@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
> -						  tci, "\x0f\xff")),
> +						  hdr.vlan_tci, "\x0f\xff")),
>  	},
>  	[ITEM_VLAN_INNER_TYPE] = {
>  		.name = "inner_type",
> @@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
> -					     inner_type)),
> +					     hdr.eth_proto)),
>  	},
>  	[ITEM_VLAN_HAS_MORE_VLAN] = {
>  		.name = "has_more_vlan",
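
The three byte-string masks above split the 16-bit 802.1Q TCI into its
sub-fields. Expressed as integer masks (a sketch; these macro names are
invented for illustration):

  #define TCI_PCP_MASK RTE_BE16(0xe000) /* "\xe0\x00": priority, top 3 bits */
  #define TCI_DEI_MASK RTE_BE16(0x1000) /* "\x10\x00": drop eligible, 1 bit */
  #define TCI_VID_MASK RTE_BE16(0x0fff) /* "\x0f\xff": VLAN ID, low 12 bits */
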
> @@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
>  				.type = RTE_FLOW_ITEM_TYPE_END,
>  			},
>  		},
> -		.item_eth.type = 0,
> +		.item_eth.hdr.ether_type = 0,
>  		.item_vlan = {
> -			.tci = vxlan_encap_conf.vlan_tci,
> -			.inner_type = 0,
> +			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
> +			.hdr.eth_proto = 0,
>  		},
>  		.item_ipv4.hdr = {
>  			.src_addr = vxlan_encap_conf.ipv4_src,
> @@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
>  		},
>  		.item_vxlan.flags = 0,
>  	};
> -	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
> +	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
>  	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
> +	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
>  	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	if (!vxlan_encap_conf.select_ipv4) {
>  		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
> @@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
>  				.type = RTE_FLOW_ITEM_TYPE_END,
>  			},
>  		},
> -		.item_eth.type = 0,
> +		.item_eth.hdr.ether_type = 0,
>  		.item_vlan = {
> -			.tci = nvgre_encap_conf.vlan_tci,
> -			.inner_type = 0,
> +			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
> +			.hdr.eth_proto = 0,
>  		},
>  		.item_ipv4.hdr = {
>  		       .src_addr = nvgre_encap_conf.ipv4_src,
> @@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
>  		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
>  		.item_nvgre.flow_id = 0,
>  	};
> -	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
> +	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
>  	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
> +	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
>  	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	if (!nvgre_encap_conf.select_ipv4) {
>  		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
> @@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_encap_data *action_encap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
>  	struct rte_flow_item_vlan vlan = {
> -		.tci = mplsoudp_encap_conf.vlan_tci,
> -		.inner_type = 0,
> +		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
> +		.hdr.eth_proto = 0,
>  	};
>  	uint8_t *header;
>  	int ret;
> @@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_encap_data->data;
>  	if (l2_encap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	else if (l2_encap_conf.select_ipv4)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  	else
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> -	memcpy(eth.dst.addr_bytes,
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +	memcpy(eth.hdr.dst_addr.addr_bytes,
>  	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth.src.addr_bytes,
> +	memcpy(eth.hdr.src_addr.addr_bytes,
>  	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (l2_encap_conf.select_vlan) {
>  		if (l2_encap_conf.select_ipv4)
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  		else
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
>  		memcpy(header, &vlan, sizeof(vlan));
>  		header += sizeof(vlan);
>  	}
> @@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_decap_data *action_decap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
>  	struct rte_flow_item_vlan vlan = {
> -		.tci = mplsoudp_encap_conf.vlan_tci,
> -		.inner_type = 0,
> +		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
> +		.hdr.eth_proto = 0,
>  	};
>  	uint8_t *header;
>  	int ret;
> @@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_decap_data->data;
>  	if (l2_decap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (l2_decap_conf.select_vlan) {
> @@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_encap_data *action_encap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
>  	struct rte_flow_item_vlan vlan = {
> -		.tci = mplsogre_encap_conf.vlan_tci,
> -		.inner_type = 0,
> +		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
> +		.hdr.eth_proto = 0,
>  	};
>  	struct rte_flow_item_ipv4 ipv4 = {
>  		.hdr =  {
> @@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_encap_data->data;
>  	if (mplsogre_encap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	else if (mplsogre_encap_conf.select_ipv4)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  	else
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> -	memcpy(eth.dst.addr_bytes,
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +	memcpy(eth.hdr.dst_addr.addr_bytes,
>  	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth.src.addr_bytes,
> +	memcpy(eth.hdr.src_addr.addr_bytes,
>  	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (mplsogre_encap_conf.select_vlan) {
>  		if (mplsogre_encap_conf.select_ipv4)
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  		else
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
>  		memcpy(header, &vlan, sizeof(vlan));
>  		header += sizeof(vlan);
>  	}
> @@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_decap_data *action_decap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> -	struct rte_flow_item_vlan vlan = {.tci = 0};
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
> +	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
>  	struct rte_flow_item_ipv4 ipv4 = {
>  		.hdr =  {
>  			.next_proto_id = IPPROTO_GRE,
> @@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_decap_data->data;
>  	if (mplsogre_decap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	else if (mplsogre_encap_conf.select_ipv4)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  	else
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> -	memcpy(eth.dst.addr_bytes,
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +	memcpy(eth.hdr.dst_addr.addr_bytes,
>  	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth.src.addr_bytes,
> +	memcpy(eth.hdr.src_addr.addr_bytes,
>  	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (mplsogre_encap_conf.select_vlan) {
>  		if (mplsogre_encap_conf.select_ipv4)
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  		else
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
>  		memcpy(header, &vlan, sizeof(vlan));
>  		header += sizeof(vlan);
>  	}
> @@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_encap_data *action_encap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
>  	struct rte_flow_item_vlan vlan = {
> -		.tci = mplsoudp_encap_conf.vlan_tci,
> -		.inner_type = 0,
> +		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
> +		.hdr.eth_proto = 0,
>  	};
>  	struct rte_flow_item_ipv4 ipv4 = {
>  		.hdr =  {
> @@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_encap_data->data;
>  	if (mplsoudp_encap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	else if (mplsoudp_encap_conf.select_ipv4)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  	else
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> -	memcpy(eth.dst.addr_bytes,
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +	memcpy(eth.hdr.dst_addr.addr_bytes,
>  	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth.src.addr_bytes,
> +	memcpy(eth.hdr.src_addr.addr_bytes,
>  	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (mplsoudp_encap_conf.select_vlan) {
>  		if (mplsoudp_encap_conf.select_ipv4)
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  		else
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
>  		memcpy(header, &vlan, sizeof(vlan));
>  		header += sizeof(vlan);
>  	}
> @@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
>  	struct buffer *out = buf;
>  	struct rte_flow_action *action;
>  	struct action_raw_decap_data *action_decap_data;
> -	struct rte_flow_item_eth eth = { .type = 0, };
> -	struct rte_flow_item_vlan vlan = {.tci = 0};
> +	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
> +	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
>  	struct rte_flow_item_ipv4 ipv4 = {
>  		.hdr =  {
>  			.next_proto_id = IPPROTO_UDP,
> @@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
>  	};
>  	header = action_decap_data->data;
>  	if (mplsoudp_decap_conf.select_vlan)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
>  	else if (mplsoudp_encap_conf.select_ipv4)
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  	else
> -		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> -	memcpy(eth.dst.addr_bytes,
> +		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +	memcpy(eth.hdr.dst_addr.addr_bytes,
>  	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth.src.addr_bytes,
> +	memcpy(eth.hdr.src_addr.addr_bytes,
>  	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
>  	memcpy(header, &eth, sizeof(eth));
>  	header += sizeof(eth);
>  	if (mplsoudp_encap_conf.select_vlan) {
>  		if (mplsoudp_encap_conf.select_ipv4)
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
>  		else
> -			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
> +			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
>  		memcpy(header, &vlan, sizeof(vlan));
>  		header += sizeof(vlan);
>  	}
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 3e6242803dc0..27c3780c4f17 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -840,9 +840,7 @@ instead of using the ``type`` field.
>  If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
>  and untagged packets will match the pattern.
>  
> -- ``dst``: destination MAC.
> -- ``src``: source MAC.
> -- ``type``: EtherType or TPID.
> +- ``hdr``: header definition (``rte_ether.h``).
>  - ``has_vlan``: packet header contains at least one VLAN.
>  - Default ``mask`` matches destination and source addresses only.
>  
> @@ -861,8 +859,7 @@ instead of using the ``inner_type`` field.
>  If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
>  then any tagged packets will match the pattern.
>  
> -- ``tci``: tag control information.
> -- ``inner_type``: inner EtherType or TPID.
> +- ``hdr``: header definition (``rte_ether.h``).
>  - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
>  - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
>  
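Note that the testpmd command tokens ("dst", "src", "type", "tci", ...) are
unchanged by this series; only the struct fields they write to move under
hdr. So an illustrative command like

  testpmd> flow create 0 ingress pattern eth type is 0x0800 / end actions drop / end

keeps working exactly as before.
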
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index e18ac344ef8e..53b10b51d81a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -58,6 +58,8 @@ Deprecation Notices
>    should start with relevant protocol header structure from lib/net/.
>    The individual protocol header fields and the protocol header struct
>    may be kept together in a union as a first migration step.
> +  In the future (target: DPDK 23.11), the legacy protocol header fields will be
> +  removed and only the protocol header struct will remain.
>  
>    These items are not compliant (not including struct from lib/net/):
>  
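The migration step described in this notice maps to a layout roughly like
the following (a sketch; lib/ethdev/rte_flow.h is the authoritative
definition):

  struct rte_flow_item_eth {
          union {
                  struct {
                          /* Legacy fields, kept for compatibility. */
                          struct rte_ether_addr dst; /* destination MAC */
                          struct rte_ether_addr src; /* source MAC */
                          rte_be16_t type;           /* EtherType or TPID */
                  };
                  struct rte_ether_hdr hdr; /* protocol struct from lib/net/ */
          };
          uint32_t has_vlan:1;
          uint32_t reserved:31;
  };
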
> diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
> index 96ef00460cf5..8f660493402c 100644
> --- a/drivers/net/bnxt/bnxt_flow.c
> +++ b/drivers/net/bnxt/bnxt_flow.c
> @@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  			 * Destination MAC address mask must not be partially
>  			 * set. Should be all 1's or all 0's.
>  			 */
> -			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
> -			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
> -			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
> +			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
> +			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  				rte_flow_error_set(error,
>  						   EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  			}
>  
>  			/* Mask is not allowed. Only exact matches are */
> -			if (eth_mask->type &&
> -			    eth_mask->type != RTE_BE16(0xffff)) {
> +			if (eth_mask->hdr.ether_type &&
> +			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
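
In other words, bnxt accepts masks all-or-nothing here; a partial mask is
rejected. Illustrative values (not from the patch):

  struct rte_flow_item_eth eth_mask = { 0 };
  eth_mask.hdr.ether_type = RTE_BE16(0xffff);     /* exact match: accepted */
  /* eth_mask.hdr.ether_type = 0;                 -- field ignored: accepted */
  /* eth_mask.hdr.ether_type = RTE_BE16(0xff00);  -- partial: -EINVAL above */
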
> @@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  				return -rte_errno;
>  			}
>  
> -			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -				dst = &eth_spec->dst;
> +			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +				dst = &eth_spec->hdr.dst_addr;
>  				if (!rte_is_valid_assigned_ether_addr(dst)) {
>  					rte_flow_error_set(error,
>  							   EINVAL,
> @@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  					return -rte_errno;
>  				}
>  				rte_memcpy(filter->dst_macaddr,
> -					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
> +					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
>  				en |= use_ntuple ?
>  					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
>  					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
> @@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  				PMD_DRV_LOG(DEBUG,
>  					    "Creating a priority flow\n");
>  			}
> -			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
> -				src = &eth_spec->src;
> +			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
> +				src = &eth_spec->hdr.src_addr;
>  				if (!rte_is_valid_assigned_ether_addr(src)) {
>  					rte_flow_error_set(error,
>  							   EINVAL,
> @@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  					return -rte_errno;
>  				}
>  				rte_memcpy(filter->src_macaddr,
> -					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
> +					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
>  				en |= use_ntuple ?
>  					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
>  					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
> @@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
>  			   * }
>  			   */
> -			if (eth_mask->type) {
> +			if (eth_mask->hdr.ether_type) {
>  				filter->ethertype =
> -					rte_be_to_cpu_16(eth_spec->type);
> +					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  				en |= en_ethertype;
>  			}
>  			if (inner)
> @@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  						   " supported");
>  				return -rte_errno;
>  			}
> -			if (vlan_mask->tci &&
> -			    vlan_mask->tci == RTE_BE16(0x0fff)) {
> +			if (vlan_mask->hdr.vlan_tci &&
> +			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
>  				/* Only the VLAN ID can be matched. */
>  				filter->l2_ovlan =
> -					rte_be_to_cpu_16(vlan_spec->tci &
> +					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
>  							 RTE_BE16(0x0fff));
>  				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
>  			} else {
> @@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  						   "VLAN mask is invalid");
>  				return -rte_errno;
>  			}
> -			if (vlan_mask->inner_type &&
> -			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
> +			if (vlan_mask->hdr.eth_proto &&
> +			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
>  						   " valid");
>  				return -rte_errno;
>  			}
> -			if (vlan_mask->inner_type) {
> +			if (vlan_mask->hdr.eth_proto) {
>  				filter->ethertype =
> -					rte_be_to_cpu_16(vlan_spec->inner_type);
> +					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
>  				en |= en_ethertype;
>  			}
>  
> diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> index 1be649a16c49..2928598ced55 100644
> --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> @@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
>  	/* Perform validations */
>  	if (eth_spec) {
>  		/* Todo: work around to avoid multicast and broadcast addr */
> -		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
> +		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
>  			return BNXT_TF_RC_PARSE_ERR;
>  
> -		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
> +		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
>  			return BNXT_TF_RC_PARSE_ERR;
>  
> -		eth_type = eth_spec->type;
> +		eth_type = eth_spec->hdr.ether_type;
>  	}
>  
>  	if (ulp_rte_prsr_fld_size_validate(params, &idx,
> @@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
>  	 * header fields
>  	 */
>  	dmac_idx = idx;
> -	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
> +	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(eth_spec, dst.addr_bytes),
> -			      ulp_deference_struct(eth_mask, dst.addr_bytes),
> +			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
> +			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
>  			      ULP_PRSR_ACT_DEFAULT);
>  
> -	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
> +	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(eth_spec, src.addr_bytes),
> -			      ulp_deference_struct(eth_mask, src.addr_bytes),
> +			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
> +			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
>  			      ULP_PRSR_ACT_DEFAULT);
>  
> -	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
> +	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(eth_spec, type),
> -			      ulp_deference_struct(eth_mask, type),
> +			      ulp_deference_struct(eth_spec, hdr.ether_type),
> +			      ulp_deference_struct(eth_mask, hdr.ether_type),
>  			      ULP_PRSR_ACT_MATCH_IGNORE);
>  
>  	/* Update the protocol hdr bitmap */
> @@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
>  	uint32_t size;
>  
>  	if (vlan_spec) {
> -		vlan_tag = ntohs(vlan_spec->tci);
> +		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
>  		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
>  		vlan_tag &= ULP_VLAN_TAG_MASK;
>  		vlan_tag = htons(vlan_tag);
> -		eth_type = vlan_spec->inner_type;
> +		eth_type = vlan_spec->hdr.eth_proto;
>  	}
>  
>  	if (vlan_mask) {
> -		vlan_tag_mask = ntohs(vlan_mask->tci);
> +		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
>  		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
>  		vlan_tag_mask &= 0xfff;
>  
> @@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
>  	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
>  	 * header fields
>  	 */
> -	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
> +	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
>  	/*
>  	 * The priority field is ignored since OVS is setting it as
>  	 * wild card match and it is not supported. This is a work
> @@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
>  			      (vlan_mask) ? &vlan_tag_mask : NULL,
>  			      ULP_PRSR_ACT_DEFAULT);
>  
> -	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
> +	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(vlan_spec, inner_type),
> -			      ulp_deference_struct(vlan_mask, inner_type),
> +			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
> +			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
>  			      ULP_PRSR_ACT_MATCH_IGNORE);
>  
>  	/* Get the outer tag and inner tag counts */
> @@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
>  	uint32_t size;
>  
>  	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
> -	size = sizeof(eth_spec->dst.addr_bytes);
> -	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
> +	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
> +	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
>  
> -	size = sizeof(eth_spec->src.addr_bytes);
> -	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
> +	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
> +	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
>  
> -	size = sizeof(eth_spec->type);
> -	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
> +	size = sizeof(eth_spec->hdr.ether_type);
> +	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
>  
>  	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
>  }
> @@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
>  			       BNXT_ULP_HDR_BIT_OI_VLAN);
>  	}
>  
> -	size = sizeof(vlan_spec->tci);
> -	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
> +	size = sizeof(vlan_spec->hdr.vlan_tci);
> +	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
>  
> -	size = sizeof(vlan_spec->inner_type);
> -	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
> +	size = sizeof(vlan_spec->hdr.eth_proto);
> +	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
>  }
>  
>  /* Function to handle the parsing of RTE Flow item ipv4 Header. */
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index e0364ef015e0..3a43b7f3ef6f 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
>   */
>  
>  static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
> -	.dst.addr_bytes = { 0 },
> -	.src.addr_bytes = { 0 },
> -	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
> +	.hdr.dst_addr.addr_bytes = { 0 },
> +	.hdr.src_addr.addr_bytes = { 0 },
> +	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
>  };
>  
>  static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
> -	.dst.addr_bytes = { 0 },
> -	.src.addr_bytes = { 0 },
> -	.type = 0xFFFF,
> +	.hdr.dst_addr.addr_bytes = { 0 },
> +	.hdr.src_addr.addr_bytes = { 0 },
> +	.hdr.ether_type = 0xFFFF,
>  };
>  
>  static struct rte_flow_item flow_item_8023ad[] = {
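
One detail in the hunk above: the mask value 0xFFFF needs no byte-order
conversion since all bits are set (it is endian-neutral), while the spec
correctly uses RTE_BE16(RTE_ETHER_TYPE_SLOW), i.e. the 802.3ad slow
protocols EtherType 0x8809.
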
> diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
> index d66672a9e6b8..f5787c247f1f 100644
> --- a/drivers/net/cxgbe/cxgbe_flow.c
> +++ b/drivers/net/cxgbe/cxgbe_flow.c
> @@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
>  		return 0;
>  
>  	/* we don't support SRC_MAC filtering*/
> -	if (!rte_is_zero_ether_addr(&spec->src) ||
> -	    (umask && !rte_is_zero_ether_addr(&umask->src)))
> +	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
> +	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
>  		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
>  					  item,
>  					  "src mac filtering not supported");
>  
> -	if (!rte_is_zero_ether_addr(&spec->dst) ||
> -	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
> +	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
> +	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
>  		CXGBE_FILL_FS(0, 0x1ff, macidx);
> -		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
> +		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
>  				     dmac);
>  	}
>  
> -	if (spec->type || (umask && umask->type))
> -		CXGBE_FILL_FS(be16_to_cpu(spec->type),
> -			      be16_to_cpu(mask->type), ethtype);
> +	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
> +		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
> +			      be16_to_cpu(mask->hdr.ether_type), ethtype);
>  
>  	return 0;
>  }
> @@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
>  	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
>  		CXGBE_FILL_FS(1, 1, ovlan_vld);
>  		if (spec) {
> -			if (spec->tci || (umask && umask->tci))
> -				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
> -					      be16_to_cpu(mask->tci), ovlan);
> +			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
> +				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
> +					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
>  			fs->mask.ethtype = 0;
>  			fs->val.ethtype = 0;
>  		}
>  	} else {
>  		CXGBE_FILL_FS(1, 1, ivlan_vld);
>  		if (spec) {
> -			if (spec->tci || (umask && umask->tci))
> -				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
> -					      be16_to_cpu(mask->tci), ivlan);
> +			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
> +				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
> +					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
>  			fs->mask.ethtype = 0;
>  			fs->val.ethtype = 0;
>  		}
>  	}
>  
> -	if (spec && (spec->inner_type || (umask && umask->inner_type)))
> -		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
> -			      be16_to_cpu(mask->inner_type), ethtype);
> +	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
> +		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
> +			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
>  
>  	return 0;
>  }
> @@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
>  	[RTE_FLOW_ITEM_TYPE_ETH] = {
>  		.fptr  = ch_rte_parsetype_eth,
>  		.dmask = &(const struct rte_flow_item_eth){
> -			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -			.type = 0xffff,
> +			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +			.hdr.ether_type = 0xffff,
>  		}
>  	},
>  
>  	[RTE_FLOW_ITEM_TYPE_VLAN] = {
>  		.fptr = ch_rte_parsetype_vlan,
>  		.dmask = &(const struct rte_flow_item_vlan){
> -			.tci = 0xffff,
> -			.inner_type = 0xffff,
> +			.hdr.vlan_tci = 0xffff,
> +			.hdr.eth_proto = 0xffff,
>  		}
>  	},
>  
> diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
> index df06c3862e7c..eec7e6065097 100644
> --- a/drivers/net/dpaa2/dpaa2_flow.c
> +++ b/drivers/net/dpaa2/dpaa2_flow.c
> @@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
>  
>  #ifndef __cplusplus
>  static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
> -	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -	.type = RTE_BE16(0xffff),
> +	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +	.hdr.ether_type = RTE_BE16(0xffff),
>  };
>  
>  static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
> -	.tci = RTE_BE16(0xffff),
> +	.hdr.vlan_tci = RTE_BE16(0xffff),
>  };
>  
>  static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
> @@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  		return -1;
>  	}
>  
> -	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
> +	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
>  		index = dpaa2_flow_extract_search(
>  				&priv->extract.qos_key_extract.dpkg,
>  				NET_PROT_ETH, NH_FLD_ETH_SA);
> @@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->qos_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_SA,
> -				&spec->src.addr_bytes,
> -				&mask->src.addr_bytes,
> +				&spec->hdr.src_addr.addr_bytes,
> +				&mask->hdr.src_addr.addr_bytes,
>  				sizeof(struct rte_ether_addr));
>  		if (ret) {
>  			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
> @@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->fs_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_SA,
> -				&spec->src.addr_bytes,
> -				&mask->src.addr_bytes,
> +				&spec->hdr.src_addr.addr_bytes,
> +				&mask->hdr.src_addr.addr_bytes,
>  				sizeof(struct rte_ether_addr));
>  		if (ret) {
>  			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
> @@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  		}
>  	}
>  
> -	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
> +	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
>  		index = dpaa2_flow_extract_search(
>  				&priv->extract.qos_key_extract.dpkg,
>  				NET_PROT_ETH, NH_FLD_ETH_DA);
> @@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->qos_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_DA,
> -				&spec->dst.addr_bytes,
> -				&mask->dst.addr_bytes,
> +				&spec->hdr.dst_addr.addr_bytes,
> +				&mask->hdr.dst_addr.addr_bytes,
>  				sizeof(struct rte_ether_addr));
>  		if (ret) {
>  			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
> @@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->fs_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_DA,
> -				&spec->dst.addr_bytes,
> -				&mask->dst.addr_bytes,
> +				&spec->hdr.dst_addr.addr_bytes,
> +				&mask->hdr.dst_addr.addr_bytes,
>  				sizeof(struct rte_ether_addr));
>  		if (ret) {
>  			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
> @@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  		}
>  	}
>  
> -	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
> +	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
>  		index = dpaa2_flow_extract_search(
>  				&priv->extract.qos_key_extract.dpkg,
>  				NET_PROT_ETH, NH_FLD_ETH_TYPE);
> @@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->qos_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_TYPE,
> -				&spec->type,
> -				&mask->type,
> +				&spec->hdr.ether_type,
> +				&mask->hdr.ether_type,
>  				sizeof(rte_be16_t));
>  		if (ret) {
>  			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
> @@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
>  				&flow->fs_rule,
>  				NET_PROT_ETH,
>  				NH_FLD_ETH_TYPE,
> -				&spec->type,
> -				&mask->type,
> +				&spec->hdr.ether_type,
> +				&mask->hdr.ether_type,
>  				sizeof(rte_be16_t));
>  		if (ret) {
>  			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
> @@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
>  		return -1;
>  	}
>  
> -	if (!mask->tci)
> +	if (!mask->hdr.vlan_tci)
>  		return 0;
>  
>  	index = dpaa2_flow_extract_search(
> @@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
>  				&flow->qos_rule,
>  				NET_PROT_VLAN,
>  				NH_FLD_VLAN_TCI,
> -				&spec->tci,
> -				&mask->tci,
> +				&spec->hdr.vlan_tci,
> +				&mask->hdr.vlan_tci,
>  				sizeof(rte_be16_t));
>  	if (ret) {
>  		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
> @@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
>  			&flow->fs_rule,
>  			NET_PROT_VLAN,
>  			NH_FLD_VLAN_TCI,
> -			&spec->tci,
> -			&mask->tci,
> +			&spec->hdr.vlan_tci,
> +			&mask->hdr.vlan_tci,
>  			sizeof(rte_be16_t));
>  	if (ret) {
>  		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
> diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
> index 7456f43f425c..2ff1a98fda7c 100644
> --- a/drivers/net/dpaa2/dpaa2_mux.c
> +++ b/drivers/net/dpaa2/dpaa2_mux.c
> @@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
>  		kg_cfg.num_extracts = 1;
>  
>  		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
> -		eth_type = rte_constant_bswap16(spec->type);
> +		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
>  		memcpy((void *)key_iova, (const void *)&eth_type,
>  							sizeof(rte_be16_t));
>  		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
> diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
> index b77531065196..ea9b290e1cb5 100644
> --- a/drivers/net/e1000/igb_flow.c
> +++ b/drivers/net/e1000/igb_flow.c
> @@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	 * Mask bits of destination MAC address must be full
>  	 * of 1 or full of 0.
>  	 */
> -	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ether address mask");
>  		return -rte_errno;
>  	}
>  
> -	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
> +	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ethertype mask");
> @@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	/* If mask bits of destination MAC address
>  	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
>  	 */
> -	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -		filter->mac_addr = eth_spec->dst;
> +	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +		filter->mac_addr = eth_spec->hdr.dst_addr;
>  		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
>  	} else {
>  		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
>  	}
> -	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
> +	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  	/* Check if the next non-void item is END. */
>  	index++;
> diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
> index cf51793cfef0..e6c9ad442ac0 100644
> --- a/drivers/net/enic/enic_flow.c
> +++ b/drivers/net/enic/enic_flow.c
> @@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
>  	if (!mask)
>  		mask = &rte_flow_item_eth_mask;
>  
> -	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
> +	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
> -	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
> +	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
>  
> -	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
> +	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
> -	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
> +	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
> -	enic_spec.ether_type = spec->type;
> -	enic_mask.ether_type = mask->type;
> +	enic_spec.ether_type = spec->hdr.ether_type;
> +	enic_mask.ether_type = mask->hdr.ether_type;
>  
>  	/* outer header */
>  	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
> @@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
>  		struct rte_vlan_hdr *vlan;
>  
>  		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
> -		vlan->eth_proto = mask->inner_type;
> +		vlan->eth_proto = mask->hdr.eth_proto;
>  		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
> -		vlan->eth_proto = spec->inner_type;
> +		vlan->eth_proto = spec->hdr.eth_proto;
>  	} else {
> -		eth_mask->ether_type = mask->inner_type;
> -		eth_val->ether_type = spec->inner_type;
> +		eth_mask->ether_type = mask->hdr.eth_proto;
> +		eth_val->ether_type = spec->hdr.eth_proto;
>  	}
>  	/* For TCI, use the vlan mask/val fields (little endian). */
> -	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
> -	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
> +	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
> +	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
>  	return 0;
>  }
>  
> diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
> index c87d3af8476c..90027dc67695 100644
> --- a/drivers/net/enic/enic_fm_flow.c
> +++ b/drivers/net/enic/enic_fm_flow.c
> @@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
>  	eth_val = (void *)&fm_data->l2.eth;
>  
>  	/*
> -	 * Outer TPID cannot be matched. If inner_type is 0, use what is
> +	 * Outer TPID cannot be matched. If protocol is 0, use what is
>  	 * in the eth header.
>  	 */
> -	if (eth_mask->ether_type && mask->inner_type)
> +	if (eth_mask->ether_type && mask->hdr.eth_proto)
>  		return -ENOTSUP;
>  
>  	/*
> @@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
>  	 * L2, regardless of vlan stripping settings. So, the inner type
>  	 * from vlan becomes the ether type of the eth header.
>  	 */
> -	if (mask->inner_type) {
> -		eth_mask->ether_type = mask->inner_type;
> -		eth_val->ether_type = spec->inner_type;
> +	if (mask->hdr.eth_proto) {
> +		eth_mask->ether_type = mask->hdr.eth_proto;
> +		eth_val->ether_type = spec->hdr.eth_proto;
>  	}
>  	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
>  	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
> -	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
> -	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
> +	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
> +	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
>  	return 0;
>  }
>  
> @@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
>  
>  		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
>  		spec = item->spec;
> -		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
> +		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
>  		item++;
>  		flow_item_skip_void(&item);
>  	}
> diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
> index 358b372e07e8..d1a564a16303 100644
> --- a/drivers/net/hinic/hinic_pmd_flow.c
> +++ b/drivers/net/hinic/hinic_pmd_flow.c
> @@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	 * Mask bits of destination MAC address must be full
>  	 * of 1 or full of 0.
>  	 */
> -	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ether address mask");
>  		return -rte_errno;
>  	}
>  
> -	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
> +	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
>  		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ethertype mask");
>  		return -rte_errno;
> @@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	 * If mask bits of destination MAC address
>  	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
>  	 */
> -	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -		filter->mac_addr = eth_spec->dst;
> +	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +		filter->mac_addr = eth_spec->hdr.dst_addr;
>  		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
>  	} else {
>  		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
>  	}
> -	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
> +	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  	/* Check if the next non-void item is END. */
>  	item = next_no_void_pattern(pattern, item);
> diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
> index a2c1589c3980..ef1832982dee 100644
> --- a/drivers/net/hns3/hns3_flow.c
> +++ b/drivers/net/hns3/hns3_flow.c
> @@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
>  
>  	if (item->mask) {
>  		eth_mask = item->mask;
> -		if (eth_mask->type) {
> +		if (eth_mask->hdr.ether_type) {
>  			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
>  			rule->key_conf.mask.ether_type =
> -			    rte_be_to_cpu_16(eth_mask->type);
> +			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
>  		}
> -		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
> +		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
>  			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
>  			memcpy(rule->key_conf.mask.src_mac,
> -			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
> +			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
>  		}
> -		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
> +		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
>  			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
>  			memcpy(rule->key_conf.mask.dst_mac,
> -			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
> +			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
>  		}
>  	}
>  
>  	eth_spec = item->spec;
> -	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
> -	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
> +	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
> +	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
> -	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
> +	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
>  	       RTE_ETHER_ADDR_LEN);
>  	return 0;
>  }
> @@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
>  
>  	if (item->mask) {
>  		vlan_mask = item->mask;
> -		if (vlan_mask->tci) {
> +		if (vlan_mask->hdr.vlan_tci) {
>  			if (rule->key_conf.vlan_num == 1) {
>  				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
>  					     1);
>  				rule->key_conf.mask.vlan_tag1 =
> -				    rte_be_to_cpu_16(vlan_mask->tci);
> +				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
>  			} else {
>  				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
>  					     1);
>  				rule->key_conf.mask.vlan_tag2 =
> -				    rte_be_to_cpu_16(vlan_mask->tci);
> +				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
>  			}
>  		}
>  	}
> @@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
>  	vlan_spec = item->spec;
>  	if (rule->key_conf.vlan_num == 1)
>  		rule->key_conf.spec.vlan_tag1 =
> -		    rte_be_to_cpu_16(vlan_spec->tci);
> +		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
>  	else
>  		rule->key_conf.spec.vlan_tag2 =
> -		    rte_be_to_cpu_16(vlan_spec->tci);
> +		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
>  	return 0;
>  }
>  
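Worth noting in the hns3 hunks above: the first VLAN item of a pattern fills
vlan_tag1 and a second one (the inner tag of a QinQ match) fills vlan_tag2,
selected through key_conf.vlan_num.
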
> diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
> index 65a826d51c17..0acbd5a061e0 100644
> --- a/drivers/net/i40e/i40e_flow.c
> +++ b/drivers/net/i40e/i40e_flow.c
> @@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
>  			 * Mask bits of destination MAC address must be full
>  			 * of 1 or full of 0.
>  			 */
> -			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
>  				return -rte_errno;
>  			}
>  
> -			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
> +			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
>  			/* If mask bits of destination MAC address
>  			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
>  			 */
> -			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -				filter->mac_addr = eth_spec->dst;
> +			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +				filter->mac_addr = eth_spec->hdr.dst_addr;
>  				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
>  			} else {
>  				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
>  			}
> -			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
> +			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
>  			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
> @@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  			}
>  
>  			if (eth_spec && eth_mask) {
> -				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
> -					rte_is_zero_ether_addr(&eth_mask->src)) {
> +				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
> +					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
>  					filter->input.flow.l2_flow.dst =
> -						eth_spec->dst;
> +						eth_spec->hdr.dst_addr;
>  					input_set |= I40E_INSET_DMAC;
> -				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
> -					rte_is_broadcast_ether_addr(&eth_mask->src)) {
> +				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
>  					filter->input.flow.l2_flow.src =
> -						eth_spec->src;
> +						eth_spec->hdr.src_addr;
>  					input_set |= I40E_INSET_SMAC;
> -				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
> -					rte_is_broadcast_ether_addr(&eth_mask->src)) {
> +				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
> +					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
>  					filter->input.flow.l2_flow.dst =
> -						eth_spec->dst;
> +						eth_spec->hdr.dst_addr;
>  					filter->input.flow.l2_flow.src =
> -						eth_spec->src;
> +						eth_spec->hdr.src_addr;
>  					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
> -				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
> +				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
>  					rte_flow_error_set(error, EINVAL,
>  						      RTE_FLOW_ERROR_TYPE_ITEM,
>  						      item,
> @@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  			}
>  			if (eth_spec && eth_mask &&
>  			next_type == RTE_FLOW_ITEM_TYPE_END) {
> -				if (eth_mask->type != RTE_BE16(0xffff)) {
> +				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
>  					rte_flow_error_set(error, EINVAL,
>  						      RTE_FLOW_ERROR_TYPE_ITEM,
>  						      item,
> @@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  					return -rte_errno;
>  				}
>  
> -				ether_type = rte_be_to_cpu_16(eth_spec->type);
> +				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
>  				    ether_type == RTE_ETHER_TYPE_IPV4 ||
> @@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  				}
>  				input_set |= I40E_INSET_LAST_ETHER_TYPE;
>  				filter->input.flow.l2_flow.ether_type =
> -					eth_spec->type;
> +					eth_spec->hdr.ether_type;
>  			}
>  
>  			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
> @@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  
>  			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
>  			if (vlan_spec && vlan_mask) {
> -				if (vlan_mask->tci !=
> +				if (vlan_mask->hdr.vlan_tci !=
>  				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
> -				    vlan_mask->tci !=
> +				    vlan_mask->hdr.vlan_tci !=
>  				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
> -				    vlan_mask->tci !=
> +				    vlan_mask->hdr.vlan_tci !=
>  				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
> -				    vlan_mask->tci !=
> +				    vlan_mask->hdr.vlan_tci !=
>  				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
>  					rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
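
For reference, the four accepted TCI masks above are, as defined in the i40e
driver headers: I40E_VLAN_TCI_MASK 0xFFFF (whole TCI), I40E_VLAN_PRI_MASK
0xE000 (priority bits), I40E_VLAN_CFI_MASK 0x1000 (CFI/DEI bit) and
I40E_VLAN_VID_MASK 0x0FFF (VLAN ID).
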
> @@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  				}
>  				input_set |= I40E_INSET_VLAN_INNER;
>  				filter->input.flow_ext.vlan_tci =
> -					vlan_spec->tci;
> +					vlan_spec->hdr.vlan_tci;
>  			}
> -			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
> -				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
> +			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
> +				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
>  					rte_flow_error_set(error, EINVAL,
>  						      RTE_FLOW_ERROR_TYPE_ITEM,
>  						      item,
> @@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  				}
>  
>  				ether_type =
> -					rte_be_to_cpu_16(vlan_spec->inner_type);
> +					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
>  
>  				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
>  				    ether_type == RTE_ETHER_TYPE_IPV6 ||
> @@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
>  				}
>  				input_set |= I40E_INSET_LAST_ETHER_TYPE;
>  				filter->input.flow.l2_flow.ether_type =
> -					vlan_spec->inner_type;
> +					vlan_spec->hdr.eth_proto;
>  			}
>  
>  			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
> @@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
>  				/* DST address of inner MAC shouldn't be masked.
>  				 * SRC address of Inner MAC should be masked.
>  				 */
> -				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
> -				    !rte_is_zero_ether_addr(&eth_mask->src) ||
> -				    eth_mask->type) {
> +				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
> +				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +				    eth_mask->hdr.ether_type) {
>  					rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
>  
>  				if (!vxlan_flag) {
>  					rte_memcpy(&filter->outer_mac,
> -						   &eth_spec->dst,
> +						   &eth_spec->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN);
>  					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
>  				} else {
>  					rte_memcpy(&filter->inner_mac,
> -						   &eth_spec->dst,
> +						   &eth_spec->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN);
>  					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
>  				}
> @@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
>  			vlan_spec = item->spec;
>  			vlan_mask = item->mask;
>  			if (!(vlan_spec && vlan_mask) ||
> -			    vlan_mask->inner_type) {
> +			    vlan_mask->hdr.eth_proto) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
>  			}
>  
>  			if (vlan_spec && vlan_mask) {
> -				if (vlan_mask->tci ==
> +				if (vlan_mask->hdr.vlan_tci ==
>  				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
>  					filter->inner_vlan =
> -					      rte_be_to_cpu_16(vlan_spec->tci) &
> +					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
>  					      I40E_VLAN_TCI_MASK;
>  				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
>  			}
> @@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
>  				/* DST address of inner MAC shouldn't be masked.
>  				 * SRC address of Inner MAC should be masked.
>  				 */
> -				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
> -				    !rte_is_zero_ether_addr(&eth_mask->src) ||
> -				    eth_mask->type) {
> +				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
> +				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +				    eth_mask->hdr.ether_type) {
>  					rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
>  
>  				if (!nvgre_flag) {
>  					rte_memcpy(&filter->outer_mac,
> -						   &eth_spec->dst,
> +						   &eth_spec->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN);
>  					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
>  				} else {
>  					rte_memcpy(&filter->inner_mac,
> -						   &eth_spec->dst,
> +						   &eth_spec->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN);
>  					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
>  				}
> @@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
>  			vlan_spec = item->spec;
>  			vlan_mask = item->mask;
>  			if (!(vlan_spec && vlan_mask) ||
> -			    vlan_mask->inner_type) {
> +			    vlan_mask->hdr.eth_proto) {
>  				rte_flow_error_set(error, EINVAL,
>  						   RTE_FLOW_ERROR_TYPE_ITEM,
>  						   item,
> @@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
>  			}
>  
>  			if (vlan_spec && vlan_mask) {
> -				if (vlan_mask->tci ==
> +				if (vlan_mask->hdr.vlan_tci ==
>  				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
>  					filter->inner_vlan =
> -					      rte_be_to_cpu_16(vlan_spec->tci) &
> +					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
>  					      I40E_VLAN_TCI_MASK;
>  				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
>  			}
> @@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
>  			vlan_mask = item->mask;
>  
>  			if (!(vlan_spec && vlan_mask) ||
> -			    vlan_mask->inner_type) {
> +			    vlan_mask->hdr.eth_proto) {
>  				rte_flow_error_set(error, EINVAL,
>  					   RTE_FLOW_ERROR_TYPE_ITEM,
>  					   item,
> @@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
>  
>  	/* Get filter specification */
>  	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
> -		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
> -		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
> +		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
> +		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
>  	} else {
>  			rte_flow_error_set(error, EINVAL,
>  					   RTE_FLOW_ERROR_TYPE_ITEM,
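
Side note for anyone tracking the mechanics of this series: every
rename in this file is a pure spelling change, since the old field
names remain reachable through an anonymous union and address the same
bytes as the new hdr members. A minimal sketch of the resulting item
layouts (trimmed; field names from lib/net/rte_ether.h, see the
rte_flow.h change in this series for the real definitions):

    struct rte_flow_item_eth {
        union {
            struct { /* legacy aliases kept for compatibility */
                struct rte_ether_addr dst;
                struct rte_ether_addr src;
                rte_be16_t type;
            };
            struct rte_ether_hdr hdr; /* dst_addr, src_addr, ether_type */
        };
        uint32_t has_vlan:1;
        uint32_t reserved:31;
    };

    struct rte_flow_item_vlan {
        union {
            struct { /* legacy aliases */
                rte_be16_t tci;
                rte_be16_t inner_type;
            };
            struct rte_vlan_hdr hdr; /* vlan_tci, eth_proto */
        };
        uint32_t has_more_vlan:1;
        uint32_t reserved:31;
    };

So eth_mask->type and eth_mask->hdr.ether_type read the same 16 bits;
none of the hunks above change behavior.
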
> diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
> index 0c848189776d..02e1457d8017 100644
> --- a/drivers/net/i40e/i40e_hash.c
> +++ b/drivers/net/i40e/i40e_hash.c
> @@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
>  	vlan_spec = pattern->spec;
>  	vlan_mask = pattern->mask;
>  	if (!vlan_spec || !vlan_mask ||
> -	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
> +	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
>  		return rte_flow_error_set(error, EINVAL,
>  					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
>  					  "Pattern error.");
> @@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
>  
>  	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
>  	rss_conf->region_queue_start = rss_act->queue[0];
> -	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
> +	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
>  	return 0;
>  }
>  
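
The >> 13 in both hunks extracts the 3-bit PCP (user priority) from
the TCI, and the mask check requires all three priority bits to be
set. For reference, the 802.1Q tag control word these checks rely on:

    /* TCI: | PCP (3 bits) | DEI (1 bit) | VID (12 bits) | */
    uint16_t tci = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
    uint8_t  pcp = tci >> 13;     /* 0..7, stored as region_priority */
    uint16_t vid = tci & 0x0fff;  /* VLAN ID, unused here */
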
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> index 8f8087392538..a6c88cb55b88 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
>  			}
>  
>  			if (eth_spec && eth_mask) {
> -				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
> +				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
>  					input_set |= IAVF_INSET_DMAC;
>  					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
>  									ETH,
>  									DST);
> -				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
> +				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
>  					input_set |= IAVF_INSET_SMAC;
>  					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
>  									ETH,
>  									SRC);
>  				}
>  
> -				if (eth_mask->type) {
> -					if (eth_mask->type != RTE_BE16(0xffff)) {
> +				if (eth_mask->hdr.ether_type) {
> +					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
>  						rte_flow_error_set(error, EINVAL,
>  							RTE_FLOW_ERROR_TYPE_ITEM,
>  							item, "Invalid type mask.");
>  						return -rte_errno;
>  					}
>  
> -					ether_type = rte_be_to_cpu_16(eth_spec->type);
> +					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
>  						ether_type == RTE_ETHER_TYPE_IPV6) {
>  						rte_flow_error_set(error, EINVAL,
> diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
> index 4082c0069f31..74e1e7099b8c 100644
> --- a/drivers/net/iavf/iavf_fsub.c
> +++ b/drivers/net/iavf/iavf_fsub.c
> @@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
>  			if (eth_spec && eth_mask) {
>  				input = &outer_input_set;
>  
> -				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
> +				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
>  					*input |= IAVF_INSET_DMAC;
>  					input_set_byte += 6;
>  				} else {
> @@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
>  					input_set_byte += 6;
>  				}
>  
> -				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
> +				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
>  					*input |= IAVF_INSET_SMAC;
>  					input_set_byte += 6;
>  				}
>  
> -				if (eth_mask->type) {
> +				if (eth_mask->hdr.ether_type) {
>  					*input |= IAVF_INSET_ETHERTYPE;
>  					input_set_byte += 2;
>  				}
> @@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
>  
>  				*input |= IAVF_INSET_VLAN_OUTER;
>  
> -				if (vlan_mask->tci)
> +				if (vlan_mask->hdr.vlan_tci)
>  					input_set_byte += 2;
>  
> -				if (vlan_mask->inner_type) {
> +				if (vlan_mask->hdr.eth_proto) {
>  					rte_flow_error_set(error, EINVAL,
>  						RTE_FLOW_ERROR_TYPE_ITEM,
>  						item,
> diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
> index 868921cac595..08a80137e5b9 100644
> --- a/drivers/net/iavf/iavf_ipsec_crypto.c
> +++ b/drivers/net/iavf/iavf_ipsec_crypto.c
> @@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
>  		struct rte_ether_hdr *eth)
>  {
>  	memcpy(eth->src_addr.addr_bytes,
> -			item->src.addr_bytes, sizeof(eth->src_addr));
> +			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
>  	memcpy(eth->dst_addr.addr_bytes,
> -			item->dst.addr_bytes, sizeof(eth->dst_addr));
> +			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
>  }
>  
>  static void
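
Unrelated to the rename itself: both sides of these copies are
struct rte_ether_addr, so the two memcpy() calls could use the
dedicated helper. Just an aside, not required for this series:

    rte_ether_addr_copy(&item->hdr.src_addr, &eth->src_addr);
    rte_ether_addr_copy(&item->hdr.dst_addr, &eth->dst_addr);
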
> diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
> index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
> --- a/drivers/net/ice/ice_acl_filter.c
> +++ b/drivers/net/ice/ice_acl_filter.c
> @@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
>  			eth_mask = item->mask;
>  
>  			if (eth_spec && eth_mask) {
> -				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
> -				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> +				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
> +				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
>  					rte_flow_error_set(error, EINVAL,
>  						RTE_FLOW_ERROR_TYPE_ITEM,
>  						item, "Invalid mac addr mask");
>  					return -rte_errno;
>  				}
>  
> -				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
> -				    !rte_is_zero_ether_addr(&eth_mask->src)) {
> +				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
> +				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
>  					input_set |= ICE_INSET_SMAC;
>  					ice_memcpy(&filter->input.ext_data.src_mac,
> -						   &eth_spec->src,
> +						   &eth_spec->hdr.src_addr,
>  						   RTE_ETHER_ADDR_LEN,
>  						   ICE_NONDMA_TO_NONDMA);
>  					ice_memcpy(&filter->input.ext_mask.src_mac,
> -						   &eth_mask->src,
> +						   &eth_mask->hdr.src_addr,
>  						   RTE_ETHER_ADDR_LEN,
>  						   ICE_NONDMA_TO_NONDMA);
>  				}
>  
> -				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
> -				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
> +				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
> +				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
>  					input_set |= ICE_INSET_DMAC;
>  					ice_memcpy(&filter->input.ext_data.dst_mac,
> -						   &eth_spec->dst,
> +						   &eth_spec->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN,
>  						   ICE_NONDMA_TO_NONDMA);
>  					ice_memcpy(&filter->input.ext_mask.dst_mac,
> -						   &eth_mask->dst,
> +						   &eth_mask->hdr.dst_addr,
>  						   RTE_ETHER_ADDR_LEN,
>  						   ICE_NONDMA_TO_NONDMA);
>  				}
> diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
> index 7914ba940731..5d297afc290e 100644
> --- a/drivers/net/ice/ice_fdir_filter.c
> +++ b/drivers/net/ice/ice_fdir_filter.c
> @@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
>  			if (!(eth_spec && eth_mask))
>  				break;
>  
> -			if (!rte_is_zero_ether_addr(&eth_mask->dst))
> +			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
>  				*input_set |= ICE_INSET_DMAC;
> -			if (!rte_is_zero_ether_addr(&eth_mask->src))
> +			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
>  				*input_set |= ICE_INSET_SMAC;
>  
>  			next_type = (item + 1)->type;
>  			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
> -			if (eth_mask->type == RTE_BE16(0xffff) &&
> +			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
>  			    next_type == RTE_FLOW_ITEM_TYPE_END) {
>  				*input_set |= ICE_INSET_ETHERTYPE;
> -				ether_type = rte_be_to_cpu_16(eth_spec->type);
> +				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
>  				    ether_type == RTE_ETHER_TYPE_IPV6) {
> @@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
>  				     &filter->input.ext_data_outer :
>  				     &filter->input.ext_data;
>  			rte_memcpy(&p_ext_data->src_mac,
> -				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
> +				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
>  			rte_memcpy(&p_ext_data->dst_mac,
> -				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
> +				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
>  			rte_memcpy(&p_ext_data->ether_type,
> -				   &eth_spec->type, sizeof(eth_spec->type));
> +				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_IPV4:
>  			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
> diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
> index 60f7934a1697..d84061340e6c 100644
> --- a/drivers/net/ice/ice_switch_filter.c
> +++ b/drivers/net/ice/ice_switch_filter.c
> @@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
>  			eth_spec = item->spec;
>  			eth_mask = item->mask;
>  			if (eth_spec && eth_mask) {
> -				const uint8_t *a = eth_mask->src.addr_bytes;
> -				const uint8_t *b = eth_mask->dst.addr_bytes;
> +				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
> +				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
>  				if (tunnel_valid)
>  					input = &inner_input_set;
>  				else
> @@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
>  						break;
>  					}
>  				}
> -				if (eth_mask->type)
> +				if (eth_mask->hdr.ether_type)
>  					*input |= ICE_INSET_ETHERTYPE;
>  				list[t].type = (tunnel_valid  == 0) ?
>  					ICE_MAC_OFOS : ICE_MAC_IL;
> @@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
>  				h = &list[t].h_u.eth_hdr;
>  				m = &list[t].m_u.eth_hdr;
>  				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> -					if (eth_mask->src.addr_bytes[j]) {
> +					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
>  						h->src_addr[j] =
> -						eth_spec->src.addr_bytes[j];
> +						eth_spec->hdr.src_addr.addr_bytes[j];
>  						m->src_addr[j] =
> -						eth_mask->src.addr_bytes[j];
> +						eth_mask->hdr.src_addr.addr_bytes[j];
>  						i = 1;
>  						input_set_byte++;
>  					}
> -					if (eth_mask->dst.addr_bytes[j]) {
> +					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
>  						h->dst_addr[j] =
> -						eth_spec->dst.addr_bytes[j];
> +						eth_spec->hdr.dst_addr.addr_bytes[j];
>  						m->dst_addr[j] =
> -						eth_mask->dst.addr_bytes[j];
> +						eth_mask->hdr.dst_addr.addr_bytes[j];
>  						i = 1;
>  						input_set_byte++;
>  					}
>  				}
>  				if (i)
>  					t++;
> -				if (eth_mask->type) {
> +				if (eth_mask->hdr.ether_type) {
>  					list[t].type = ICE_ETYPE_OL;
>  					list[t].h_u.ethertype.ethtype_id =
> -						eth_spec->type;
> +						eth_spec->hdr.ether_type;
>  					list[t].m_u.ethertype.ethtype_id =
> -						eth_mask->type;
> +						eth_mask->hdr.ether_type;
>  					input_set_byte += 2;
>  					t++;
>  				}
> @@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
>  					*input |= ICE_INSET_VLAN_INNER;
>  				}
>  
> -				if (vlan_mask->tci) {
> +				if (vlan_mask->hdr.vlan_tci) {
>  					list[t].h_u.vlan_hdr.vlan =
> -						vlan_spec->tci;
> +						vlan_spec->hdr.vlan_tci;
>  					list[t].m_u.vlan_hdr.vlan =
> -						vlan_mask->tci;
> +						vlan_mask->hdr.vlan_tci;
>  					input_set_byte += 2;
>  				}
> -				if (vlan_mask->inner_type) {
> +				if (vlan_mask->hdr.eth_proto) {
>  					rte_flow_error_set(error, EINVAL,
>  						RTE_FLOW_ERROR_TYPE_ITEM,
>  						item,
> @@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
>  				eth_mask = item->mask;
>  			else
>  				continue;
> -			if (eth_mask->type == UINT16_MAX)
> +			if (eth_mask->hdr.ether_type == UINT16_MAX)
>  				tun_type = ICE_SW_TUN_AND_NON_TUN;
>  		}
>  
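
Nit on the hunk just above: comparing an rte_be16_t against UINT16_MAX
only works because 0xffff is its own byte swap. It is correct, but the
spelling used elsewhere in this patch would be more uniform and keeps
endianness checkers quiet:

    if (eth_mask->hdr.ether_type == RTE_BE16(0xffff))
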
> diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
> index 58a6a8a539c6..b677a0d61340 100644
> --- a/drivers/net/igc/igc_flow.c
> +++ b/drivers/net/igc/igc_flow.c
> @@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
>  	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
>  
>  	/* destination and source MAC address are not supported */
> -	if (!rte_is_zero_ether_addr(&mask->src) ||
> -		!rte_is_zero_ether_addr(&mask->dst))
> +	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
> +		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
>  		return rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
>  				"Only support ether-type");
>  
>  	/* ether-type mask bits must be all 1 */
> -	if (IGC_NOT_ALL_BITS_SET(mask->type))
> +	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
>  		return rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
>  				"Ethernet type mask bits must be all 1");
> @@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
>  	ether = &filter->ethertype;
>  
>  	/* get ether-type */
> -	ether->ether_type = rte_be_to_cpu_16(spec->type);
> +	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
>  
>  	/* ether-type should not be IPv4 and IPv6 */
>  	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
> diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
> index 5b57ee9341d3..ee56d0f43d93 100644
> --- a/drivers/net/ipn3ke/ipn3ke_flow.c
> +++ b/drivers/net/ipn3ke/ipn3ke_flow.c
> @@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
>  			eth = item->spec;
>  
>  			rte_memcpy(&parser->key[0],
> -					eth->src.addr_bytes,
> +					eth->hdr.src_addr.addr_bytes,
>  					RTE_ETHER_ADDR_LEN);
>  			break;
>  
> @@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
>  			eth = item->spec;
>  
>  			rte_memcpy(parser->key,
> -					eth->src.addr_bytes,
> +					eth->hdr.src_addr.addr_bytes,
>  					RTE_ETHER_ADDR_LEN);
>  			break;
>  
> @@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
>  			if (!outer_vlan) {
>  				outer_vlan = item->spec;
>  
> -				tci = rte_be_to_cpu_16(outer_vlan->tci);
> +				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
>  				parser->key[0]  = (tci & 0xff0) >> 4;
>  				parser->key[1] |= (tci & 0x00f) << 4;
>  			} else {
>  				inner_vlan = item->spec;
>  
> -				tci = rte_be_to_cpu_16(inner_vlan->tci);
> +				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
>  				parser->key[1] |= (tci & 0xf00) >> 8;
>  				parser->key[2]  = (tci & 0x0ff);
>  			}
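
Decoding the key packing above, since it is easy to misread: only the
low 12 bits (the VID) of each TCI are consumed, and the two VIDs end
up concatenated into 24 bits across key[0..2]:

    /* key[0] = outer VID bits 11..4
     * key[1] = outer VID bits 3..0 in the high nibble,
     *          inner VID bits 11..8 in the low nibble
     * key[2] = inner VID bits 7..0
     */

PCP and DEI are dropped, which looks intentional for a QinQ
classifier key.
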
> diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> index 110ff34fcceb..a11da3dc8beb 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	 * Mask bits of destination MAC address must be full
>  	 * of 1 or full of 0.
>  	 */
> -	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ether address mask");
>  		return -rte_errno;
>  	}
>  
> -	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
> +	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ethertype mask");
> @@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	/* If mask bits of destination MAC address
>  	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
>  	 */
> -	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -		filter->mac_addr = eth_spec->dst;
> +	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +		filter->mac_addr = eth_spec->hdr.dst_addr;
>  		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
>  	} else {
>  		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
>  	}
> -	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
> +	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  	/* Check if the next non-void item is END. */
>  	item = next_no_void_pattern(pattern, item);
> @@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  			/* Get the dst MAC. */
>  			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
>  				rule->ixgbe_fdir.formatted.inner_mac[j] =
> -					eth_spec->dst.addr_bytes[j];
> +					eth_spec->hdr.dst_addr.addr_bytes[j];
>  			}
>  		}
>  
> @@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  			eth_mask = item->mask;
>  
>  			/* Ether type should be masked. */
> -			if (eth_mask->type ||
> +			if (eth_mask->hdr.ether_type ||
>  			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
>  				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  				rte_flow_error_set(error, EINVAL,
> @@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  			 * and don't support dst MAC address mask.
>  			 */
>  			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> -				if (eth_mask->src.addr_bytes[j] ||
> -					eth_mask->dst.addr_bytes[j] != 0xFF) {
> +				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
> +					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
>  					memset(rule, 0,
>  					sizeof(struct ixgbe_fdir_rule));
>  					rte_flow_error_set(error, EINVAL,
> @@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  		vlan_spec = item->spec;
>  		vlan_mask = item->mask;
>  
> -		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
> +		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
>  
> -		rule->mask.vlan_tci_mask = vlan_mask->tci;
> +		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
>  		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
>  		/* More than one tags are not supported. */
>  
> @@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	eth_mask = item->mask;
>  
>  	/* Ether type should be masked. */
> -	if (eth_mask->type) {
> +	if (eth_mask->hdr.ether_type) {
>  		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  		rte_flow_error_set(error, EINVAL,
>  			RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  
>  	/* src MAC address should be masked. */
>  	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> -		if (eth_mask->src.addr_bytes[j]) {
> +		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
>  			memset(rule, 0,
>  			       sizeof(struct ixgbe_fdir_rule));
>  			rte_flow_error_set(error, EINVAL,
> @@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	rule->mask.mac_addr_byte_mask = 0;
>  	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
>  		/* It's a per byte mask. */
> -		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> +		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
>  			rule->mask.mac_addr_byte_mask |= 0x1 << j;
> -		} else if (eth_mask->dst.addr_bytes[j]) {
> +		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
>  			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  			rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  		/* Get the dst MAC. */
>  		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
>  			rule->ixgbe_fdir.formatted.inner_mac[j] =
> -				eth_spec->dst.addr_bytes[j];
> +				eth_spec->hdr.dst_addr.addr_bytes[j];
>  		}
>  	}
>  
> @@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  		vlan_spec = item->spec;
>  		vlan_mask = item->mask;
>  
> -		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
> +		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
>  
> -		rule->mask.vlan_tci_mask = vlan_mask->tci;
> +		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
>  		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
>  		/* More than one tags are not supported. */
>  
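
On the vlan_tci_mask handling in both fdir paths above: 0xEFFF clears
bit 12, i.e. the DEI/CFI bit, so PCP and VID stay matchable while the
drop-eligible bit is always ignored:

    /* TCI bit 12 (DEI/CFI) masked off; PCP 15..13 and VID 11..0 kept */
    rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
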
> diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
> index 9d7247cf81d0..8ef9fd2db44e 100644
> --- a/drivers/net/mlx4/mlx4_flow.c
> +++ b/drivers/net/mlx4/mlx4_flow.c
> @@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
>  		uint32_t sum_dst = 0;
>  		uint32_t sum_src = 0;
>  
> -		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
> -			sum_dst += mask->dst.addr_bytes[i];
> -			sum_src += mask->src.addr_bytes[i];
> +		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
> +			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
> +			sum_src += mask->hdr.src_addr.addr_bytes[i];
>  		}
>  		if (sum_src) {
>  			msg = "mlx4 does not support source MAC matching";
>  			goto error;
>  		} else if (!sum_dst) {
>  			flow->promisc = 1;
> -		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
> -			if (!(spec->dst.addr_bytes[0] & 1)) {
> +		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
> +			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
>  				msg = "mlx4 does not support the explicit"
>  					" exclusion of all multicast traffic";
>  				goto error;
> @@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
>  		flow->promisc = 1;
>  		return 0;
>  	}
> -	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
> -	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
> +	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
> +	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
>  	/* Remove unwanted bits from values. */
>  	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
>  		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
> @@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
>  	struct ibv_flow_spec_eth *eth;
>  	const char *msg;
>  
> -	if (!mask || !mask->tci) {
> +	if (!mask || !mask->hdr.vlan_tci) {
>  		msg = "mlx4 cannot match all VLAN traffic while excluding"
>  			" non-VLAN traffic, TCI VID must be specified";
>  		goto error;
>  	}
> -	if (mask->tci != RTE_BE16(0x0fff)) {
> +	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
>  		msg = "mlx4 does not support partial TCI VID matching";
>  		goto error;
>  	}
> @@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
>  		return 0;
>  	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
>  		       sizeof(*eth));
> -	eth->val.vlan_tag = spec->tci;
> -	eth->mask.vlan_tag = mask->tci;
> +	eth->val.vlan_tag = spec->hdr.vlan_tci;
> +	eth->mask.vlan_tag = mask->hdr.vlan_tci;
>  	eth->val.vlan_tag &= eth->mask.vlan_tag;
>  	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
>  		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
> @@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
>  				       RTE_FLOW_ITEM_TYPE_IPV4),
>  		.mask_support = &(const struct rte_flow_item_eth){
>  			/* Only destination MAC can be matched. */
> -			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  		},
>  		.mask_default = &rte_flow_item_eth_mask,
>  		.mask_sz = sizeof(struct rte_flow_item_eth),
> @@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
>  		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
>  		.mask_support = &(const struct rte_flow_item_vlan){
>  			/* Only TCI VID matching is supported. */
> -			.tci = RTE_BE16(0x0fff),
> +			.hdr.vlan_tci = RTE_BE16(0x0fff),
>  		},
>  		.mask_default = &rte_flow_item_vlan_mask,
>  		.mask_sz = sizeof(struct rte_flow_item_vlan),
> @@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
>  	};
>  	struct rte_flow_item_eth eth_spec;
>  	const struct rte_flow_item_eth eth_mask = {
> -		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  	};
>  	const struct rte_flow_item_eth eth_allmulti = {
> -		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
>  	};
>  	struct rte_flow_item_vlan vlan_spec;
>  	const struct rte_flow_item_vlan vlan_mask = {
> -		.tci = RTE_BE16(0x0fff),
> +		.hdr.vlan_tci = RTE_BE16(0x0fff),
>  	};
>  	struct rte_flow_item pattern[] = {
>  		{
> @@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
>  			.type = RTE_FLOW_ACTION_TYPE_END,
>  		},
>  	};
> -	struct rte_ether_addr *rule_mac = &eth_spec.dst;
> +	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
>  	rte_be16_t *rule_vlan =
>  		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
>  		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
>  		!ETH_DEV(priv)->data->promiscuous ?
> -		&vlan_spec.tci :
> +		&vlan_spec.hdr.vlan_tci :
>  		NULL;
>  	uint16_t vlan = 0;
>  	struct rte_flow *flow;
> @@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
>  		if (i < RTE_DIM(priv->mac))
>  			mac = &priv->mac[i];
>  		else
> -			mac = &eth_mask.dst;
> +			mac = &eth_mask.hdr.dst_addr;
>  		if (rte_is_zero_ether_addr(mac))
>  			continue;
>  		/* Check if MAC flow rule is already present. */
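
Worth noting for the initializer changes in this file (and the mlx5
ones below): designators that reach through the anonymous union rely
on C11 anonymous-member access, which DPDK already requires, so there
is no new toolchain demand here. The string-literal form also still
fills exactly RTE_ETHER_ADDR_LEN bytes; the literal's terminating NUL
is simply not stored because the array is sized 6:

    /* Both spellings initialize the same bytes through the union: */
    const struct rte_flow_item_eth m = {
        .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
    };
    /* m.dst reads back as ff:ff:ff:ff:ff:ff as well. */
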
> diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
> index 6b98eb8c9666..604384a24253 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_definer.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
> @@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
>  
>  /* Xmacro used to create generic item setter from items */
>  #define LIST_OF_FIELDS_INFO \
> -	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
> -	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
> -	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
> -	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
> -	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
> -	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
> +	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
> +	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
> +	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
> +	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
> +	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
> +	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
>  	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
>  	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
>  	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
> @@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
>  		return rte_errno;
>  	}
>  
> -	if (m->type) {
> +	if (m->hdr.ether_type) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_type_set;
> @@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
>  	}
>  
>  	/* Check SMAC 47_16 */
> -	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
> +	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
> @@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
>  	}
>  
>  	/* Check SMAC 15_0 */
> -	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
> +	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
> @@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
>  	}
>  
>  	/* Check DMAC 47_16 */
> -	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
> +	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
> @@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
>  	}
>  
>  	/* Check DMAC 15_0 */
> -	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
> +	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
> @@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
>  		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
>  	}
>  
> -	if (m->tci) {
> +	if (m->hdr.vlan_tci) {
>  		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_tci_set;
>  		DR_CALC_SET(fc, eth_l2, tci, inner);
>  	}
>  
> -	if (m->inner_type) {
> +	if (m->hdr.eth_proto) {
>  		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_eth_type_set;
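
The LIST_OF_FIELDS_INFO change stays this small because of the X-macro
pattern: the field list is written once and expanded under different
definitions of X(), so the rename touches one row per field instead of
every expansion site. For anyone new to the trick, a generic,
self-contained illustration (toy names of my own, nothing from
mlx5dr):

    #include <stdio.h>

    /* The list is written once... */
    #define COLOR_LIST \
        X(RED)   \
        X(GREEN) \
        X(BLUE)

    /* ...and expanded once into an enum... */
    #define X(name) COLOR_##name,
    enum color { COLOR_LIST COLOR_COUNT };
    #undef X

    /* ...and once into a name table, guaranteed to stay in sync. */
    #define X(name) #name,
    static const char *const color_name[] = { COLOR_LIST };
    #undef X

    int main(void)
    {
        for (int i = 0; i < COLOR_COUNT; i++)
            printf("%d -> %s\n", i, color_name[i]);
        return 0;
    }
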
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index a0cf677fb099..2512d6b52db9 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
>  		return RTE_FLOW_ITEM_TYPE_VOID;
>  	switch (item->type) {
>  	case RTE_FLOW_ITEM_TYPE_ETH:
> -		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
> +		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
>  		if (!mask)
>  			return RTE_FLOW_ITEM_TYPE_VOID;
>  		ret = mlx5_ethertype_to_item_type(spec, mask, false);
>  		break;
>  	case RTE_FLOW_ITEM_TYPE_VLAN:
> -		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
> +		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
>  		if (!mask)
>  			return RTE_FLOW_ITEM_TYPE_VOID;
>  		ret = mlx5_ethertype_to_item_type(spec, mask, false);
> @@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
>  {
>  	const struct rte_flow_item_eth *mask = item->mask;
>  	const struct rte_flow_item_eth nic_mask = {
> -		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -		.type = RTE_BE16(0xffff),
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.ether_type = RTE_BE16(0xffff),
>  		.has_vlan = ext_vlan_sup ? 1 : 0,
>  	};
>  	int ret;
> @@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
>  	const struct rte_flow_item_vlan *spec = item->spec;
>  	const struct rte_flow_item_vlan *mask = item->mask;
>  	const struct rte_flow_item_vlan nic_mask = {
> -		.tci = RTE_BE16(UINT16_MAX),
> -		.inner_type = RTE_BE16(UINT16_MAX),
> +		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
> +		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
>  	};
>  	uint16_t vlan_tag = 0;
>  	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> @@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
>  					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
>  	if (ret)
>  		return ret;
> -	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
> +	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
>  		struct mlx5_priv *priv = dev->data->dev_private;
>  
>  		if (priv->vmwa_context) {
> @@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
>  		}
>  	}
>  	if (spec) {
> -		vlan_tag = spec->tci;
> -		vlan_tag &= mask->tci;
> +		vlan_tag = spec->hdr.vlan_tci;
> +		vlan_tag &= mask->hdr.vlan_tci;
>  	}
>  	/*
>  	 * From verbs perspective an empty VLAN is equivalent
> @@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
>  	 * a multicast dst mac causes kernel to give low priority to this flow.
>  	 */
>  	static const struct rte_flow_item_eth lacp_spec = {
> -		.type = RTE_BE16(0x8809),
> +		.hdr.ether_type = RTE_BE16(0x8809),
>  	};
>  	static const struct rte_flow_item_eth lacp_mask = {
> -		.type = 0xffff,
> +		.hdr.ether_type = 0xffff,
>  	};
>  	const struct rte_flow_attr attr = {
>  		.ingress = 1,
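
Two small observations on the LACP hunk: 0x8809 is the IEEE 802.3
Slow Protocols EtherType (LACP/marker frames), and the mask keeps the
pre-existing bare 0xffff rather than RTE_BE16(0xffff). Harmless, since
0xffff is byte-order invariant, but wrapping it would match the spec
line and avoid endian-annotation warnings:

    static const struct rte_flow_item_eth lacp_mask = {
        .hdr.ether_type = RTE_BE16(0xffff),
    };
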
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 62c38b87a1f0..ff915183b7cc 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
>  	memset(&eth, 0, sizeof(eth));
>  	memset(&eth_mask, 0, sizeof(eth_mask));
>  	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
> -		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
> -		       sizeof(eth.src.addr_bytes));
> -		memcpy(&eth_mask.src.addr_bytes,
> -		       &rte_flow_item_eth_mask.src.addr_bytes,
> -		       sizeof(eth_mask.src.addr_bytes));
> +		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
> +		       sizeof(eth.hdr.src_addr.addr_bytes));
> +		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
> +		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
> +		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
>  	} else {
> -		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
> -		       sizeof(eth.dst.addr_bytes));
> -		memcpy(&eth_mask.dst.addr_bytes,
> -		       &rte_flow_item_eth_mask.dst.addr_bytes,
> -		       sizeof(eth_mask.dst.addr_bytes));
> +		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
> +		       sizeof(eth.hdr.dst_addr.addr_bytes));
> +		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
> +		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
> +		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
>  	}
>  	item.spec = &eth;
>  	item.mask = &eth_mask;
> @@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
>  {
>  	const struct rte_flow_item_vlan *mask = item->mask;
>  	const struct rte_flow_item_vlan nic_mask = {
> -		.tci = RTE_BE16(UINT16_MAX),
> -		.inner_type = RTE_BE16(UINT16_MAX),
> +		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
> +		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
>  		.has_more_vlan = 1,
>  	};
>  	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> @@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
>  					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
>  	if (ret)
>  		return ret;
> -	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
> +	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
>  		struct mlx5_priv *priv = dev->data->dev_private;
>  
>  		if (priv->vmwa_context) {
> @@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
>  				  struct rte_vlan_hdr *vlan)
>  {
>  	const struct rte_flow_item_vlan nic_mask = {
> -		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
> +		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
>  				MLX5DV_FLOW_VLAN_VID_MASK),
> -		.inner_type = RTE_BE16(0xffff),
> +		.hdr.eth_proto = RTE_BE16(0xffff),
>  	};
>  
>  	if (items == NULL)
> @@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
>  		if (!vlan_m)
>  			vlan_m = &nic_mask;
>  		/* Only full match values are accepted */
> -		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
> +		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
>  		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
>  			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
>  			vlan->vlan_tci |=
> -				rte_be_to_cpu_16(vlan_v->tci &
> +				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
>  						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
>  		}
> -		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
> +		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
>  		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
>  			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
>  			vlan->vlan_tci |=
> -				rte_be_to_cpu_16(vlan_v->tci &
> +				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
>  						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
>  		}
> -		if (vlan_m->inner_type == nic_mask.inner_type)
> -			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
> -							   vlan_m->inner_type);
> +		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
> +			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
> +							   vlan_m->hdr.eth_proto);
>  	}
>  }
>  
> @@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
>  					  "push vlan action for VF representor "
>  					  "not supported on NIC table");
>  	if (vlan_m &&
> -	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
> -	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
> +	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
> +	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
>  		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
>  	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
>  	    !(mlx5_flow_find_action
> @@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
>  					  "push VLAN action cannot figure out "
>  					  "PCP value");
>  	if (vlan_m &&
> -	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
> -	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
> +	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
> +	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
>  		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
>  	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
>  	    !(mlx5_flow_find_action
> @@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
>  			if (items->mask != NULL && items->spec != NULL) {
>  				ether_type =
>  					((const struct rte_flow_item_eth *)
> -					 items->spec)->type;
> +					 items->spec)->hdr.ether_type;
>  				ether_type &=
>  					((const struct rte_flow_item_eth *)
> -					 items->mask)->type;
> +					 items->mask)->hdr.ether_type;
>  				ether_type = rte_be_to_cpu_16(ether_type);
>  			} else {
>  				ether_type = 0;
> @@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
>  			if (items->mask != NULL && items->spec != NULL) {
>  				ether_type =
>  					((const struct rte_flow_item_vlan *)
> -					 items->spec)->inner_type;
> +					 items->spec)->hdr.eth_proto;
>  				ether_type &=
>  					((const struct rte_flow_item_vlan *)
> -					 items->mask)->inner_type;
> +					 items->mask)->hdr.eth_proto;
>  				ether_type = rte_be_to_cpu_16(ether_type);
>  			} else {
>  				ether_type = 0;
> @@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
>  	const struct rte_flow_item_eth *eth_m;
>  	const struct rte_flow_item_eth *eth_v;
>  	const struct rte_flow_item_eth nic_mask = {
> -		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -		.type = RTE_BE16(0xffff),
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.ether_type = RTE_BE16(0xffff),
>  		.has_vlan = 0,
>  	};
>  	void *hdrs_v;
> @@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
>  		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
>  	/* The value must be in the range of the mask. */
>  	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
> -	for (i = 0; i < sizeof(eth_m->dst); ++i)
> -		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
> +	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
> +		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
>  	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
>  	/* The value must be in the range of the mask. */
> -	for (i = 0; i < sizeof(eth_m->dst); ++i)
> -		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
> +	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
> +		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
>  	/*
>  	 * HW supports match on one Ethertype, the Ethertype following the last
>  	 * VLAN tag of the packet (see PRM).
> @@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
>  	 * ethertype, and use ip_version field instead.
>  	 * eCPRI over Ether layer will use type value 0xAEFE.
>  	 */
> -	if (eth_m->type == 0xFFFF) {
> -		rte_be16_t type = eth_v->type;
> +	if (eth_m->hdr.ether_type == 0xFFFF) {
> +		rte_be16_t type = eth_v->hdr.ether_type;
>  
>  		/*
>  		 * When set the matcher mask, refer to the original spec
> @@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
>  		 */
>  		if (key_type == MLX5_SET_MATCHER_SW_M) {
>  			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
> -			type = eth_vv->type;
> +			type = eth_vv->hdr.ether_type;
>  		}
>  		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
>  		switch (type) {
> @@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
>  			return;
>  	}
>  	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
> -	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
> +	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
>  }
>  
>  /**
> @@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
>  		 * and pre-validated.
>  		 */
>  		if (vlan_vv)
> -			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
> +			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
>  	}
>  	/*
>  	 * When VLAN item exists in flow, mark packet as tagged,
> @@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
>  		return;
>  	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
>  			 &rte_flow_item_vlan_mask);
> -	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
> +	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
>  	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
>  	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
>  	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
> @@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
>  	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
>  	 * ethertype, and use ip_version field instead.
>  	 */
> -	if (vlan_m->inner_type == 0xFFFF) {
> -		rte_be16_t inner_type = vlan_v->inner_type;
> +	if (vlan_m->hdr.eth_proto == 0xFFFF) {
> +		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
>  
>  		/*
>  		 * When set the matcher mask, refer to the original spec
>  		 * value.
>  		 */
>  		if (key_type == MLX5_SET_MATCHER_SW_M)
> -			inner_type = vlan_vv->inner_type;
> +			inner_type = vlan_vv->hdr.eth_proto;
>  		switch (inner_type) {
>  		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
>  			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
> @@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
>  		return;
>  	}
>  	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
> -		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
> +		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
>  }
>  
>  /**
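
A pre-existing nit that the rename makes more visible in
flow_dv_translate_item_eth() above: the second copy loop bounds on
sizeof(eth_m->hdr.dst_addr) while it copies src_addr bytes. Both
fields are RTE_ETHER_ADDR_LEN, so behavior is identical, but bounding
on the field actually copied would read better:

    for (i = 0; i < sizeof(eth_m->hdr.src_addr); ++i)
        l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] &
               eth_v->hdr.src_addr.addr_bytes[i];
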
> diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
> index a3c8056515da..b8f96839c8bf 100644
> --- a/drivers/net/mlx5/mlx5_flow_hw.c
> +++ b/drivers/net/mlx5/mlx5_flow_hw.c
> @@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
>  
>  /* Ethernet item spec for promiscuous mode. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
> -	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  /* Ethernet item mask for promiscuous mode. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
> -	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /* Ethernet item spec for all multicast mode. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
> -	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  /* Ethernet item mask for all multicast mode. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
> -	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /* Ethernet item spec for IPv4 multicast traffic. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
> -	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  /* Ethernet item mask for IPv4 multicast traffic. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
> -	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /* Ethernet item spec for IPv6 multicast traffic. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
> -	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  /* Ethernet item mask for IPv6 multicast traffic. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
> -	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /* Ethernet item mask for unicast traffic. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
> -	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /* Ethernet item spec for broadcast. */
>  static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
> -	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -	.type = 0,
> +	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +	.hdr.ether_type = 0,
>  };
>  
>  /**
> @@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
>  		.egress = 1,
>  	};
>  	struct rte_flow_item_eth promisc = {
> -		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -		.type = 0,
> +		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +		.hdr.ether_type = 0,
>  	};
>  	struct rte_flow_item eth_all[] = {
>  		[0] = {
> @@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct rte_flow_item_eth promisc = {
> -		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -		.type = 0,
> +		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +		.hdr.ether_type = 0,
>  	};
>  	struct rte_flow_item eth_all[] = {
>  		[0] = {
> @@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
>  	for (i = 0; i < priv->vlan_filter_n; ++i) {
>  		uint16_t vlan = priv->vlan_filter[i];
>  		struct rte_flow_item_vlan vlan_spec = {
> -			.tci = rte_cpu_to_be_16(vlan),
> +			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
>  		};
>  
>  		items[1].spec = &vlan_spec;
> @@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
>  
>  		if (!memcmp(mac, &cmp, sizeof(*mac)))
>  			continue;
> -		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
>  		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
>  			return -rte_errno;
>  	}
> @@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
>  
>  		if (!memcmp(mac, &cmp, sizeof(*mac)))
>  			continue;
> -		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
>  		for (j = 0; j < priv->vlan_filter_n; ++j) {
>  			uint16_t vlan = priv->vlan_filter[j];
>  			struct rte_flow_item_vlan vlan_spec = {
> -				.tci = rte_cpu_to_be_16(vlan),
> +				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
>  			};
>  
>  			items[1].spec = &vlan_spec;
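
Not blocking: the static const spec/mask tables in this file spell out
every all-zero field (.hdr.src_addr, .hdr.ether_type = 0, ...).
Objects with static storage are zero-initialized anyway, so those
lines are documentation rather than necessity; the promiscuous mask,
for instance, could shrink to:

    static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = { 0 };

Keeping the explicit form for symmetry with the non-zero entries is
also perfectly reasonable.
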
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index 28ea28bfbe02..1902b97ec6d4 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
>  	if (spec) {
>  		unsigned int i;
>  
> -		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
> +		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
>  			RTE_ETHER_ADDR_LEN);
> -		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
> +		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
>  			RTE_ETHER_ADDR_LEN);
> -		eth.val.ether_type = spec->type;
> -		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
> +		eth.val.ether_type = spec->hdr.ether_type;
> +		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
>  			RTE_ETHER_ADDR_LEN);
> -		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
> +		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
>  			RTE_ETHER_ADDR_LEN);
> -		eth.mask.ether_type = mask->type;
> +		eth.mask.ether_type = mask->hdr.ether_type;
>  		/* Remove unwanted bits from values. */
>  		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
>  			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
> @@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
>  	if (!mask)
>  		mask = &rte_flow_item_vlan_mask;
>  	if (spec) {
> -		eth.val.vlan_tag = spec->tci;
> -		eth.mask.vlan_tag = mask->tci;
> +		eth.val.vlan_tag = spec->hdr.vlan_tci;
> +		eth.mask.vlan_tag = mask->hdr.vlan_tci;
>  		eth.val.vlan_tag &= eth.mask.vlan_tag;
> -		eth.val.ether_type = spec->inner_type;
> -		eth.mask.ether_type = mask->inner_type;
> +		eth.val.ether_type = spec->hdr.eth_proto;
> +		eth.mask.ether_type = mask->hdr.eth_proto;
>  		eth.val.ether_type &= eth.mask.ether_type;
>  	}
>  	if (!(item_flags & l2m))
> @@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
>  		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
>  	if (!tunnel)
>  		dev_flow->handle->vf_vlan.tag =
> -			rte_be_to_cpu_16(spec->tci) & 0x0fff;
> +			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
>  }
>  
>  /**
> @@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
>  			if (items->mask != NULL && items->spec != NULL) {
>  				ether_type =
>  					((const struct rte_flow_item_eth *)
> -					 items->spec)->type;
> +					 items->spec)->hdr.ether_type;
>  				ether_type &=
>  					((const struct rte_flow_item_eth *)
> -					 items->mask)->type;
> +					 items->mask)->hdr.ether_type;
>  				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
>  					is_empty_vlan = true;
>  				ether_type = rte_be_to_cpu_16(ether_type);
> @@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
>  			if (items->mask != NULL && items->spec != NULL) {
>  				ether_type =
>  					((const struct rte_flow_item_vlan *)
> -					 items->spec)->inner_type;
> +					 items->spec)->hdr.eth_proto;
>  				ether_type &=
>  					((const struct rte_flow_item_vlan *)
> -					 items->mask)->inner_type;
> +					 items->mask)->hdr.eth_proto;
>  				ether_type = rte_be_to_cpu_16(ether_type);
>  			} else {
>  				ether_type = 0;
> diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
> index f54443ed1ac4..3457bf65d3e1 100644
> --- a/drivers/net/mlx5/mlx5_trigger.c
> +++ b/drivers/net/mlx5/mlx5_trigger.c
> @@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct rte_flow_item_eth bcast = {
> -		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  	};
>  	struct rte_flow_item_eth ipv6_multi_spec = {
> -		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
> +		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
>  	};
>  	struct rte_flow_item_eth ipv6_multi_mask = {
> -		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
>  	};
>  	struct rte_flow_item_eth unicast = {
> -		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
>  	};
>  	struct rte_flow_item_eth unicast_mask = {
> -		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  	};
>  	const unsigned int vlan_filter_n = priv->vlan_filter_n;
>  	const struct rte_ether_addr cmp = {
> @@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
>  		return 0;
>  	if (dev->data->promiscuous) {
>  		struct rte_flow_item_eth promisc = {
> -			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -			.type = 0,
> +			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +			.hdr.ether_type = 0,
>  		};
>  
>  		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
> @@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
>  	}
>  	if (dev->data->all_multicast) {
>  		struct rte_flow_item_eth multicast = {
> -			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> -			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> -			.type = 0,
> +			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
> +			.hdr.ether_type = 0,
>  		};
>  
>  		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
> @@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
>  			uint16_t vlan = priv->vlan_filter[i];
>  
>  			struct rte_flow_item_vlan vlan_spec = {
> -				.tci = rte_cpu_to_be_16(vlan),
> +				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
>  			};
>  			struct rte_flow_item_vlan vlan_mask =
>  				rte_flow_item_vlan_mask;
> @@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
>  
>  		if (!memcmp(mac, &cmp, sizeof(*mac)))
>  			continue;
> -		memcpy(&unicast.dst.addr_bytes,
> +		memcpy(&unicast.hdr.dst_addr.addr_bytes,
>  		       mac->addr_bytes,
>  		       RTE_ETHER_ADDR_LEN);
>  		for (j = 0; j != vlan_filter_n; ++j) {
>  			uint16_t vlan = priv->vlan_filter[j];
>  
>  			struct rte_flow_item_vlan vlan_spec = {
> -				.tci = rte_cpu_to_be_16(vlan),
> +				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
>  			};
>  			struct rte_flow_item_vlan vlan_mask =
>  				rte_flow_item_vlan_mask;
> diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
> index 99695b91c496..e74a5f83f55b 100644
> --- a/drivers/net/mvpp2/mrvl_flow.c
> +++ b/drivers/net/mvpp2/mrvl_flow.c
> @@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
>  	const uint8_t *k, *m;
>  
>  	if (parse_dst) {
> -		k = spec->dst.addr_bytes;
> -		m = mask->dst.addr_bytes;
> +		k = spec->hdr.dst_addr.addr_bytes;
> +		m = mask->hdr.dst_addr.addr_bytes;
>  
>  		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
>  			MV_NET_ETH_F_DA;
>  	} else {
> -		k = spec->src.addr_bytes;
> -		m = mask->src.addr_bytes;
> +		k = spec->hdr.src_addr.addr_bytes;
> +		m = mask->hdr.src_addr.addr_bytes;
>  
>  		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
>  			MV_NET_ETH_F_SA;
> @@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
>  	mrvl_alloc_key_mask(key_field);
>  	key_field->size = 2;
>  
> -	k = rte_be_to_cpu_16(spec->type);
> +	k = rte_be_to_cpu_16(spec->hdr.ether_type);
>  	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
>  
>  	flow->table_key.proto_field[flow->rule.num_fields].proto =
> @@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
>  	mrvl_alloc_key_mask(key_field);
>  	key_field->size = 2;
>  
> -	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
> +	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
>  	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
>  
>  	flow->table_key.proto_field[flow->rule.num_fields].proto =
> @@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
>  	mrvl_alloc_key_mask(key_field);
>  	key_field->size = 1;
>  
> -	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
> +	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
>  	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
>  
>  	flow->table_key.proto_field[flow->rule.num_fields].proto =
> @@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
>  
>  	memset(&zero, 0, sizeof(zero));
>  
> -	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
> +	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
>  		ret = mrvl_parse_dmac(spec, mask, flow);
>  		if (ret)
>  			goto out;
>  	}
>  
> -	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
> +	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
>  		ret = mrvl_parse_smac(spec, mask, flow);
>  		if (ret)
>  			goto out;
>  	}
>  
> -	if (mask->type) {
> +	if (mask->hdr.ether_type) {
>  		MRVL_LOG(WARNING, "eth type mask is ignored");
>  		ret = mrvl_parse_type(spec, mask, flow);
>  		if (ret)
> @@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
>  	if (ret)
>  		return ret;
>  
> -	m = rte_be_to_cpu_16(mask->tci);
> +	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
>  	if (m & MRVL_VLAN_ID_MASK) {
>  		MRVL_LOG(WARNING, "vlan id mask is ignored");
>  		ret = mrvl_parse_vlan_id(spec, mask, flow);
> @@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
>  			goto out;
>  	}
>  
> -	if (mask->inner_type) {
> +	if (mask->hdr.eth_proto) {
>  		struct rte_flow_item_eth spec_eth = {
> -			.type = spec->inner_type,
> +			.hdr.ether_type = spec->hdr.eth_proto,
>  		};
>  		struct rte_flow_item_eth mask_eth = {
> -			.type = mask->inner_type,
> +			.hdr.ether_type = mask->hdr.eth_proto,
>  		};
>  
>  		/* TPID is not supported so if ETH_TYPE was selected,
> diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
> index ff2e21c817b4..bd3a8d2a3b2f 100644
> --- a/drivers/net/nfp/nfp_flow.c
> +++ b/drivers/net/nfp/nfp_flow.c
> @@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
>  	eth = (void *)*mbuf_off;
>  
>  	if (is_mask) {
> -		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
> -		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
>  	} else {
> -		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
> -		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
> +		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
>  	}
>  
>  	eth->mpls_lse = 0;
> @@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
>  	mask = item->mask ? item->mask : proc->mask_default;
>  	if (is_mask) {
>  		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
> -		meta_tci->tci |= mask->tci;
> +		meta_tci->tci |= mask->hdr.vlan_tci;
>  	} else {
>  		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
> -		meta_tci->tci |= spec->tci;
> +		meta_tci->tci |= spec->hdr.vlan_tci;
>  	}
>  
>  	return 0;
> diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
> index fb59abd0b563..f098edc6eb33 100644
> --- a/drivers/net/sfc/sfc_flow.c
> +++ b/drivers/net/sfc/sfc_flow.c
> @@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
>  	const struct rte_flow_item_eth *spec = NULL;
>  	const struct rte_flow_item_eth *mask = NULL;
>  	const struct rte_flow_item_eth supp_mask = {
> -		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
> -		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
> -		.type = 0xffff,
> +		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
> +		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
> +		.hdr.ether_type = 0xffff,
>  	};
>  	const struct rte_flow_item_eth ifrm_supp_mask = {
> -		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
> +		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
>  	};
>  	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
>  		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
> @@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
>  	if (spec == NULL)
>  		return 0;
>  
> -	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
> +	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
>  		efx_spec->efs_match_flags |= is_ifrm ?
>  			EFX_FILTER_MATCH_IFRM_LOC_MAC :
>  			EFX_FILTER_MATCH_LOC_MAC;
> -		rte_memcpy(loc_mac, spec->dst.addr_bytes,
> +		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
>  			   EFX_MAC_ADDR_LEN);
> -	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
> +	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
>  			  EFX_MAC_ADDR_LEN) == 0) {
> -		if (rte_is_unicast_ether_addr(&spec->dst))
> +		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
>  			efx_spec->efs_match_flags |= is_ifrm ?
>  				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
>  				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
> @@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
>  			efx_spec->efs_match_flags |= is_ifrm ?
>  				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
>  				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
> -	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
> +	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
>  		goto fail_bad_mask;
>  	}
>  
> @@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
>  	 * ethertype masks are equal to zero in inner frame,
>  	 * so these fields are filled in only for the outer frame
>  	 */
> -	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
> +	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
>  		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
> -		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
> +		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
>  			   EFX_MAC_ADDR_LEN);
> -	} else if (!rte_is_zero_ether_addr(&mask->src)) {
> +	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
>  		goto fail_bad_mask;
>  	}
>  
> @@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
>  	 * Ether type is in big-endian byte order in item and
>  	 * in little-endian in efx_spec, so byte swap is used
>  	 */
> -	if (mask->type == supp_mask.type) {
> +	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
>  		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
> -		efx_spec->efs_ether_type = rte_bswap16(spec->type);
> -	} else if (mask->type != 0) {
> +		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
> +	} else if (mask->hdr.ether_type != 0) {
>  		goto fail_bad_mask;
>  	}
>  
> @@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
>  	const struct rte_flow_item_vlan *spec = NULL;
>  	const struct rte_flow_item_vlan *mask = NULL;
>  	const struct rte_flow_item_vlan supp_mask = {
> -		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
> -		.inner_type = RTE_BE16(0xffff),
> +		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
> +		.hdr.eth_proto = RTE_BE16(0xffff),
>  	};
>  
>  	rc = sfc_flow_parse_init(item,
> @@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
>  	 * If two VLAN items are included, the first matches
>  	 * the outer tag and the next matches the inner tag.
>  	 */
> -	if (mask->tci == supp_mask.tci) {
> +	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
>  		/* Apply mask to keep VID only */
> -		vid = rte_bswap16(spec->tci & mask->tci);
> +		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
>  
>  		if (!(efx_spec->efs_match_flags &
>  		      EFX_FILTER_MATCH_OUTER_VID)) {
> @@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
>  				   "VLAN TPID matching is not supported");
>  		return -rte_errno;
>  	}
> -	if (mask->inner_type == supp_mask.inner_type) {
> +	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
>  		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
> -		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
> -	} else if (mask->inner_type) {
> +		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
> +	} else if (mask->hdr.eth_proto) {
>  		rte_flow_error_set(error, EINVAL,
>  				   RTE_FLOW_ERROR_TYPE_ITEM, item,
> -				   "Bad mask for VLAN inner_type");
> +				   "Bad mask for VLAN inner type");
>  		return -rte_errno;
>  	}
>  
> diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
> index 421bb6da9582..710d04be13af 100644
> --- a/drivers/net/sfc/sfc_mae.c
> +++ b/drivers/net/sfc/sfc_mae.c
> @@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
>  		 * The field is handled by sfc_mae_rule_process_pattern_data().
>  		 */
>  		SFC_MAE_FIELD_HANDLING_DEFERRED,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
> -		offsetof(struct rte_flow_item_eth, type),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
> +		offsetof(struct rte_flow_item_eth, hdr.ether_type),
>  	},
>  	{
>  		EFX_MAE_FIELD_ETH_DADDR_BE,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
> -		offsetof(struct rte_flow_item_eth, dst),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
> +		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
>  	},
>  	{
>  		EFX_MAE_FIELD_ETH_SADDR_BE,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
> -		offsetof(struct rte_flow_item_eth, src),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
> +		offsetof(struct rte_flow_item_eth, hdr.src_addr),
>  	},
>  };
>  
> @@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
>  		 * sfc_mae_rule_process_pattern_data() will consider them
>  		 * altogether when the rest of the items have been parsed.
>  		 */
> -		ethertypes[0].value = item_spec->type;
> -		ethertypes[0].mask = item_mask->type;
> +		ethertypes[0].value = item_spec->hdr.ether_type;
> +		ethertypes[0].mask = item_mask->hdr.ether_type;
>  		if (item_mask->has_vlan) {
>  			pdata->has_ovlan_mask = B_TRUE;
>  			if (item_spec->has_vlan)
> @@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
>  	/* Outermost tag */
>  	{
>  		EFX_MAE_FIELD_VLAN0_TCI_BE,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
> -		offsetof(struct rte_flow_item_vlan, tci),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
> +		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
>  	},
>  	{
>  		/*
> @@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
>  		 * The field is handled by sfc_mae_rule_process_pattern_data().
>  		 */
>  		SFC_MAE_FIELD_HANDLING_DEFERRED,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
> -		offsetof(struct rte_flow_item_vlan, inner_type),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
> +		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
>  	},
>  
>  	/* Innermost tag */
>  	{
>  		EFX_MAE_FIELD_VLAN1_TCI_BE,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
> -		offsetof(struct rte_flow_item_vlan, tci),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
> +		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
>  	},
>  	{
>  		/*
> @@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
>  		 * The field is handled by sfc_mae_rule_process_pattern_data().
>  		 */
>  		SFC_MAE_FIELD_HANDLING_DEFERRED,
> -		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
> -		offsetof(struct rte_flow_item_vlan, inner_type),
> +		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
> +		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
>  	},
>  };
>  
> @@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
>  		 * sfc_mae_rule_process_pattern_data() will consider them
>  		 * altogether when the rest of the items have been parsed.
>  		 */
> -		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
> -		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
> -		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
> +		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
> +		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
> +		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
>  		if (item_mask->has_more_vlan) {
>  			if (pdata->nb_vlan_tags ==
>  			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
> diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
> index efe66fe0593d..ed4d42f92f9f 100644
> --- a/drivers/net/tap/tap_flow.c
> +++ b/drivers/net/tap/tap_flow.c
> @@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
>  			RTE_FLOW_ITEM_TYPE_IPV4,
>  			RTE_FLOW_ITEM_TYPE_IPV6),
>  		.mask = &(const struct rte_flow_item_eth){
> -			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> -			.type = -1,
> +			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +			.hdr.ether_type = -1,
>  		},
>  		.mask_sz = sizeof(struct rte_flow_item_eth),
>  		.default_mask = &rte_flow_item_eth_mask,
> @@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
>  		.mask = &(const struct rte_flow_item_vlan){
>  			/* DEI matching is not supported */
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -			.tci = 0xffef,
> +			.hdr.vlan_tci = 0xffef,
>  #else
> -			.tci = 0xefff,
> +			.hdr.vlan_tci = 0xefff,
>  #endif
> -			.inner_type = -1,
> +			.hdr.eth_proto = -1,
>  		},
>  		.mask_sz = sizeof(struct rte_flow_item_vlan),
>  		.default_mask = &rte_flow_item_vlan_mask,
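
A note for readers on the two spellings above: the VLAN TCI packs
PCP (3 bits), DEI (1 bit) and VID (12 bits), and the field holds network
byte order. Clearing only the DEI bit yields 0xefff as a big-endian value,
which must be written as the byte-swapped constant 0xffef on little-endian
hosts.
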
> @@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
>  		.items[0] = {
>  			.type = RTE_FLOW_ITEM_TYPE_ETH,
>  			.mask =  &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  			},
>  		},
>  		.items[1] = {
> @@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
>  		.items[0] = {
>  			.type = RTE_FLOW_ITEM_TYPE_ETH,
>  			.mask =  &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  			},
>  			.spec = &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
> +				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
>  			},
>  		},
>  		.items[1] = {
> @@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
>  		.items[0] = {
>  			.type = RTE_FLOW_ITEM_TYPE_ETH,
>  			.mask =  &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
> +				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
>  			},
>  			.spec = &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
> +				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
>  			},
>  		},
>  		.items[1] = {
> @@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
>  		.items[0] = {
>  			.type = RTE_FLOW_ITEM_TYPE_ETH,
>  			.mask =  &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
>  			},
>  			.spec = &(const struct rte_flow_item_eth){
> -				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
> +				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
>  			},
>  		},
>  		.items[1] = {
> @@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
>  	if (!mask)
>  		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
>  	/* TC does not support eth_type masking. Only accept if exact match. */
> -	if (mask->type && mask->type != 0xffff)
> +	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
>  		return -1;
>  	if (!spec)
>  		return 0;
>  	/* store eth_type for consistency if ipv4/6 pattern item comes next */
> -	if (spec->type & mask->type)
> -		info->eth_type = spec->type;
> +	if (spec->hdr.ether_type & mask->hdr.ether_type)
> +		info->eth_type = spec->hdr.ether_type;
>  	if (!flow)
>  		return 0;
>  	msg = &flow->msg;
> -	if (!rte_is_zero_ether_addr(&mask->dst)) {
> +	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
>  		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
>  			RTE_ETHER_ADDR_LEN,
> -			   &spec->dst.addr_bytes);
> +			   &spec->hdr.dst_addr.addr_bytes);
>  		tap_nlattr_add(&msg->nh,
>  			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
> -			   &mask->dst.addr_bytes);
> +			   &mask->hdr.dst_addr.addr_bytes);
>  	}
> -	if (!rte_is_zero_ether_addr(&mask->src)) {
> +	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
>  		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
>  			RTE_ETHER_ADDR_LEN,
> -			&spec->src.addr_bytes);
> +			&spec->hdr.src_addr.addr_bytes);
>  		tap_nlattr_add(&msg->nh,
>  			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
> -			   &mask->src.addr_bytes);
> +			   &mask->hdr.src_addr.addr_bytes);
>  	}
>  	return 0;
>  }
> @@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
>  	if (info->vlan)
>  		return -1;
>  	info->vlan = 1;
> -	if (mask->inner_type) {
> +	if (mask->hdr.eth_proto) {
>  		/* TC does not support partial eth_type masking */
> -		if (mask->inner_type != RTE_BE16(0xffff))
> +		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
>  			return -1;
> -		info->eth_type = spec->inner_type;
> +		info->eth_type = spec->hdr.eth_proto;
>  	}
>  	if (!flow)
>  		return 0;
> @@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
>  #define VLAN_ID(tci) ((tci) & 0xfff)
>  	if (!spec)
>  		return 0;
> -	if (spec->tci) {
> -		uint16_t tci = ntohs(spec->tci) & mask->tci;
> +	if (spec->hdr.vlan_tci) {
> +		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
>  		uint16_t prio = VLAN_PRIO(tci);
>  		uint8_t vid = VLAN_ID(tci);
>  
> @@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
>  	};
>  	struct rte_flow_item *items = implicit_rte_flows[idx].items;
>  	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
> -	struct rte_flow_item_eth eth_local = { .type = 0 };
> +	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
>  	unsigned int if_index = pmd->remote_if_index;
>  	struct rte_flow *remote_flow = NULL;
>  	struct nlmsg *msg = NULL;
> @@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
>  		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
>  		 * known at compile time.
>  		 */
> -		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
> +		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
>  		items = items_local;
>  	}
>  	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
> diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
> index 7b18dca7e8d2..7ef52d0b0fcd 100644
> --- a/drivers/net/txgbe/txgbe_flow.c
> +++ b/drivers/net/txgbe/txgbe_flow.c
> @@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	 * Mask bits of destination MAC address must be full
>  	 * of 1 or full of 0.
>  	 */
> -	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
> -	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
> -	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
> +	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
> +	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
> +	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ether address mask");
>  		return -rte_errno;
>  	}
>  
> -	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
> +	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
>  		rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Invalid ethertype mask");
> @@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
>  	/* If mask bits of destination MAC address
>  	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
>  	 */
> -	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
> -		filter->mac_addr = eth_spec->dst;
> +	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
> +		filter->mac_addr = eth_spec->hdr.dst_addr;
>  		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
>  	} else {
>  		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
>  	}
> -	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
> +	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
>  
>  	/* Check if the next non-void item is END. */
>  	item = next_no_void_pattern(pattern, item);
> @@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
>  			eth_mask = item->mask;
>  
>  			/* Ether type should be masked. */
> -			if (eth_mask->type ||
> +			if (eth_mask->hdr.ether_type ||
>  			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
>  				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
>  				rte_flow_error_set(error, EINVAL,
> @@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
>  			 * and don't support dst MAC address mask.
>  			 */
>  			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> -				if (eth_mask->src.addr_bytes[j] ||
> -					eth_mask->dst.addr_bytes[j] != 0xFF) {
> +				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
> +					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
>  					memset(rule, 0,
>  					sizeof(struct txgbe_fdir_rule));
>  					rte_flow_error_set(error, EINVAL,
> @@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	eth_mask = item->mask;
>  
>  	/* Ether type should be masked. */
> -	if (eth_mask->type) {
> +	if (eth_mask->hdr.ether_type) {
>  		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
>  		rte_flow_error_set(error, EINVAL,
>  			RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  
>  	/* src MAC address should be masked. */
>  	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> -		if (eth_mask->src.addr_bytes[j]) {
> +		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
>  			memset(rule, 0,
>  			       sizeof(struct txgbe_fdir_rule));
>  			rte_flow_error_set(error, EINVAL,
> @@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	rule->mask.mac_addr_byte_mask = 0;
>  	for (j = 0; j < ETH_ADDR_LEN; j++) {
>  		/* It's a per byte mask. */
> -		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> +		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
>  			rule->mask.mac_addr_byte_mask |= 0x1 << j;
> -		} else if (eth_mask->dst.addr_bytes[j]) {
> +		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
>  			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
>  			rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
> -- 
> 2.25.1
> 

-- 
Kind Regards,
Niklas Söderlund

^ permalink raw reply	[flat|nested] 90+ messages in thread
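
A note for readers following the conversion: the hunks above are one
mechanical rename on struct rte_flow_item_eth and struct rte_flow_item_vlan.
A minimal before/after sketch (illustrative only, not taken from the patch),
relying on the compatibility union that keeps the old names valid:

	#include <rte_flow.h>

	/* Old spelling, still accepted through the anonymous union. */
	struct rte_flow_item_eth eth_old = {
		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
		.type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_vlan vlan_old = {
		.tci = rte_cpu_to_be_16(100),
	};

	/* New spelling, going through the embedded lib/net/ headers
	 * (struct rte_ether_hdr and struct rte_vlan_hdr). */
	struct rte_flow_item_eth eth_new = {
		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_vlan vlan_new = {
		.hdr.vlan_tci = rte_cpu_to_be_16(100),
	};

Both pairs initialize the same bytes, so flow specs written with either set
of names are interchangeable.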

* Re: [PATCH v5 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-01-27 14:34     ` Niklas Söderlund
  2023-02-01 17:44     ` Ori Kam
  2023-02-02  9:53     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Niklas Söderlund @ 2023-01-27 14:34 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He, Andrew Rybchenko,
	Olivier Matz, David Marchand, dev

Hi Ferruh and Thomas,

Thanks for your work.

On 2023-01-26 16:19:00 +0000, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The GRE header struct members are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  app/test-flow-perf/items_gen.c           |  4 ++--
>  app/test-pmd/cmdline_flow.c              | 14 +++++------
>  doc/guides/prog_guide/rte_flow.rst       |  6 +----
>  doc/guides/rel_notes/deprecation.rst     |  1 -
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++-----
>  drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++-----
>  drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 +++----
>  drivers/net/mlx5/mlx5_flow.c             | 22 ++++++++---------
>  drivers/net/mlx5/mlx5_flow_dv.c          | 30 +++++++++++++-----------
>  drivers/net/mlx5/mlx5_flow_verbs.c       | 10 ++++----
>  drivers/net/nfp/nfp_flow.c               |  9 +++----

For NFP,

Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>

>  lib/ethdev/rte_flow.h                    | 24 +++++++++++++------
>  lib/net/rte_gre.h                        |  5 ++++
>  13 files changed, 84 insertions(+), 73 deletions(-)
> 
> diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
> index a58245239ba1..0f19e5e53648 100644
> --- a/app/test-flow-perf/items_gen.c
> +++ b/app/test-flow-perf/items_gen.c
> @@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
>  	__rte_unused struct additional_para para)
>  {
>  	static struct rte_flow_item_gre gre_spec = {
> -		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
> +		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
>  	};
>  	static struct rte_flow_item_gre gre_mask = {
> -		.protocol = RTE_BE16(0xffff),
> +		.hdr.proto = RTE_BE16(0xffff),
>  	};
>  
>  	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index b904f8c3d45c..0e115956514c 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
> -					     protocol)),
> +					     hdr.proto)),
>  	},
>  	[ITEM_GRE_C_RSVD0_VER] = {
>  		.name = "c_rsvd0_ver",
> @@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
> -					     c_rsvd0_ver)),
> +					     hdr.c_rsvd0_ver)),
>  	},
>  	[ITEM_GRE_C_BIT] = {
>  		.name = "c_bit",
> @@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
>  			     item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
> -						  c_rsvd0_ver,
> +						  hdr.c_rsvd0_ver,
>  						  "\x80\x00\x00\x00")),
>  	},
>  	[ITEM_GRE_S_BIT] = {
> @@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
>  		.help = "sequence number bit (S)",
>  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
> -						  c_rsvd0_ver,
> +						  hdr.c_rsvd0_ver,
>  						  "\x10\x00\x00\x00")),
>  	},
>  	[ITEM_GRE_K_BIT] = {
> @@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
>  		.help = "key bit (K)",
>  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
>  		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
> -						  c_rsvd0_ver,
> +						  hdr.c_rsvd0_ver,
>  						  "\x20\x00\x00\x00")),
>  	},
>  	[ITEM_FUZZY] = {
> @@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
>  		},
>  	};
>  	struct rte_flow_item_gre gre = {
> -		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
> +		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
>  	};
>  	struct rte_flow_item_mpls mpls = {
>  		.ttl = 0,
> @@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
>  		},
>  	};
>  	struct rte_flow_item_gre gre = {
> -		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
> +		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
>  	};
>  	struct rte_flow_item_mpls mpls;
>  	uint8_t *header;
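
A usage sketch, not from the patch: the testpmd command syntax is unchanged
by this rename, since only the struct field behind the "protocol" token
moved (to hdr.proto). Assuming a port 0 whose driver supports GRE matching,
IPv4-over-GRE could still be matched with e.g.:

	testpmd> flow create 0 ingress pattern eth / ipv4 / gre protocol is 0x0800 / end actions queue index 1 / end
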
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 116722351486..603e1b866be3 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -980,8 +980,7 @@ Item: ``GRE``
>  
>  Matches a GRE header.
>  
> -- ``c_rsvd0_ver``: checksum, reserved 0 and version.
> -- ``protocol``: protocol type.
> +- ``hdr``:  header definition (``rte_gre.h``).
>  - Default ``mask`` matches protocol only.
>  
>  Item: ``GRE_KEY``
> @@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
>  Matches a GRE optional fields (checksum/key/sequence).
>  This should be preceded by item ``GRE``.
>  
> -- ``checksum``: checksum.
> -- ``key``: key.
> -- ``sequence``: sequence.
>  - The items in GRE_OPTION do not change bit flags(c_bit/k_bit/s_bit) in GRE
>    item. The bit flags need be set with GRE item by application. When the items
>    present, the corresponding bits in GRE spec and mask should be set "1" by
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 638051789d19..80bf7209065a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -68,7 +68,6 @@ Deprecation Notices
>    - ``rte_flow_item_e_tag``
>    - ``rte_flow_item_geneve``
>    - ``rte_flow_item_geneve_opt``
> -  - ``rte_flow_item_gre``
>    - ``rte_flow_item_gtp``
>    - ``rte_flow_item_icmp6``
>    - ``rte_flow_item_icmp6_nd_na``
> diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> index 80869b79c3fe..c1e231ce8c49 100644
> --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
> @@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
>  		return BNXT_TF_RC_ERROR;
>  	}
>  
> -	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
> +	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
> -			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
> +			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
> +			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
>  			      ULP_PRSR_ACT_DEFAULT);
>  
> -	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
> +	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
>  	ulp_rte_prsr_fld_mask(params, &idx, size,
> -			      ulp_deference_struct(gre_spec, protocol),
> -			      ulp_deference_struct(gre_mask, protocol),
> +			      ulp_deference_struct(gre_spec, hdr.proto),
> +			      ulp_deference_struct(gre_mask, hdr.proto),
>  			      ULP_PRSR_ACT_DEFAULT);
>  
>  	/* Update the hdr_bitmap with GRE */
> diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
> index eec7e6065097..8a6d44da4875 100644
> --- a/drivers/net/dpaa2/dpaa2_flow.c
> +++ b/drivers/net/dpaa2/dpaa2_flow.c
> @@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
>  };
>  
>  static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
> -	.protocol = RTE_BE16(0xffff),
> +	.hdr.proto = RTE_BE16(0xffff),
>  };
>  
>  #endif
> @@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
>  		return -1;
>  	}
>  
> -	if (!mask->protocol)
> +	if (!mask->hdr.proto)
>  		return 0;
>  
>  	index = dpaa2_flow_extract_search(
> @@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
>  				&flow->qos_rule,
>  				NET_PROT_GRE,
>  				NH_FLD_GRE_TYPE,
> -				&spec->protocol,
> -				&mask->protocol,
> +				&spec->hdr.proto,
> +				&mask->hdr.proto,
>  				sizeof(rte_be16_t));
>  	if (ret) {
>  		DPAA2_PMD_ERR(
> @@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
>  			&flow->fs_rule,
>  			NET_PROT_GRE,
>  			NH_FLD_GRE_TYPE,
> -			&spec->protocol,
> -			&mask->protocol,
> +			&spec->hdr.proto,
> +			&mask->hdr.proto,
>  			sizeof(rte_be16_t));
>  	if (ret) {
>  		DPAA2_PMD_ERR(
> diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
> index 604384a24253..3a438f2c9d12 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_definer.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
> @@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
>  	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
>  	X(SET,		tag,			v->data,		rte_flow_item_tag) \
>  	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
> -	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
> -	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
> +	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
> +	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
>  	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
>  	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
>  	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
> @@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
>  	if (!m)
>  		return 0;
>  
> -	if (m->c_rsvd0_ver) {
> +	if (m->hdr.c_rsvd0_ver) {
>  		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
> @@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
>  		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
>  	}
>  
> -	if (m->protocol) {
> +	if (m->hdr.proto) {
>  		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
>  		fc->item_idx = item_idx;
>  		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index ff08a629e2c6..7b19c5f03f5d 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
>  		ret = mlx5_ethertype_to_item_type(spec, mask, true);
>  		break;
>  	case RTE_FLOW_ITEM_TYPE_GRE:
> -		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
> +		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
>  		ret = mlx5_ethertype_to_item_type(spec, mask, true);
>  		break;
>  	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> @@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
>  	if (!gre_mask)
>  		gre_mask = &rte_flow_item_gre_mask;
>  	gre_spec = gre_item->spec;
> -	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
> -			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
> +	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
>  		return rte_flow_error_set(error, EINVAL,
>  					  RTE_FLOW_ERROR_TYPE_ITEM, item,
>  					  "Key bit must be on");
> @@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
>  	if (!gre_mask)
>  		gre_mask = &rte_flow_item_gre_mask;
>  	if (mask->checksum_rsvd.checksum)
> -		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
> -				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
> +		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
>  			return rte_flow_error_set(error, EINVAL,
>  						  RTE_FLOW_ERROR_TYPE_ITEM,
>  						  item,
>  						  "Checksum bit must be on");
>  	if (mask->key.key)
> -		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
> -				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
> +		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
>  			return rte_flow_error_set(error, EINVAL,
>  						  RTE_FLOW_ERROR_TYPE_ITEM,
>  						  item, "Key bit must be on");
>  	if (mask->sequence.sequence)
> -		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
> -				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
> +		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
>  			return rte_flow_error_set(error, EINVAL,
>  						  RTE_FLOW_ERROR_TYPE_ITEM,
>  						  item,
> @@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
>  	const struct rte_flow_item_gre *mask = item->mask;
>  	int ret;
>  	const struct rte_flow_item_gre nic_mask = {
> -		.c_rsvd0_ver = RTE_BE16(0xB000),
> -		.protocol = RTE_BE16(UINT16_MAX),
> +		.hdr.c = 1,
> +		.hdr.k = 1,
> +		.hdr.s = 1,
> +		.hdr.proto = RTE_BE16(UINT16_MAX),
>  	};
>  
>  	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
> @@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
>  		return ret;
>  #ifndef HAVE_MLX5DV_DR
>  #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
> -	if (spec && (spec->protocol & mask->protocol))
> +	if (spec && (spec->hdr.proto & mask->hdr.proto))
>  		return rte_flow_error_set(error, ENOTSUP,
>  					  RTE_FLOW_ERROR_TYPE_ITEM, item,
>  					  "without MPLS support the"
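
A side note on the constants removed above: in the first 16-bit word of the
GRE header, C is bit 15 (0x8000), K is bit 13 (0x2000) and S is bit 12
(0x1000), so the old nic_mask value RTE_BE16(0xB000) and the new bitfield
spelling are the same mask. Illustrative equivalence (not from the patch):

	const struct rte_flow_item_gre mask_old = {
		.c_rsvd0_ver = RTE_BE16(0xB000),	/* C | K | S set */
	};
	const struct rte_flow_item_gre mask_new = {
		.hdr.c = 1,
		.hdr.k = 1,
		.hdr.s = 1,
	};
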
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 261c60a5c33a..2b9c2ba6a4b5 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -8984,7 +8984,7 @@ static void
>  flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
>  			   uint64_t pattern_flags, uint32_t key_type)
>  {
> -	static const struct rte_flow_item_gre empty_gre = {0,};
> +	static const struct rte_flow_item_gre empty_gre = {{{0}}};
>  	const struct rte_flow_item_gre *gre_m = item->mask;
>  	const struct rte_flow_item_gre *gre_v = item->spec;
>  	void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
> @@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
>  		gre_v = gre_m;
>  	else if (key_type == MLX5_SET_MATCHER_HS_V)
>  		gre_m = gre_v;
> -	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
> -	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
> +	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
> +	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
>  	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
>  		 gre_crks_rsvd0_ver_v.c_present &
>  		 gre_crks_rsvd0_ver_m.c_present);
> @@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
>  	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
>  		 gre_crks_rsvd0_ver_v.s_present &
>  		 gre_crks_rsvd0_ver_m.s_present);
> -	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
> -	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
> +	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
> +	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
>  	if (!protocol_m) {
>  		/* Force next protocol to prevent matchers duplication */
>  		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
> @@ -9072,7 +9072,7 @@ flow_dv_translate_item_gre_option(void *key,
>  	const struct rte_flow_item_gre_opt *option_v = item->spec;
>  	const struct rte_flow_item_gre *gre_m = gre_item->mask;
>  	const struct rte_flow_item_gre *gre_v = gre_item->spec;
> -	static const struct rte_flow_item_gre empty_gre = {0};
> +	static const struct rte_flow_item_gre empty_gre = {{{0}}};
>  	struct rte_flow_item gre_key_item;
>  	uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v;
>  	uint16_t protocol_m, protocol_v;
> @@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
>  		if (!gre_m)
>  			gre_m = &rte_flow_item_gre_mask;
>  	}
> -	protocol_v = gre_v->protocol;
> -	protocol_m = gre_m->protocol;
> +	protocol_v = gre_v->hdr.proto;
> +	protocol_m = gre_m->hdr.proto;
>  	if (!protocol_m) {
>  		/* Force next protocol to prevent matchers duplication */
>  		uint16_t ether_type =
> @@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
>  			protocol_m = UINT16_MAX;
>  		}
>  	}
> -	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
> -	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
> +	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
> +	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
>  	if (option_m->sequence.sequence) {
>  		c_rsvd0_ver_v |= RTE_BE16(0x1000);
>  		c_rsvd0_ver_m |= RTE_BE16(0x1000);
> @@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
>  
>  	/* For NVGRE, GRE header fields must be set with defined values. */
>  	const struct rte_flow_item_gre gre_spec = {
> -		.c_rsvd0_ver = RTE_BE16(0x2000),
> -		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
> +		.hdr.k = 1,
> +		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
>  	};
>  	const struct rte_flow_item_gre gre_mask = {
> -		.c_rsvd0_ver = RTE_BE16(0xB000),
> -		.protocol = RTE_BE16(UINT16_MAX),
> +		.hdr.c = 1,
> +		.hdr.k = 1,
> +		.hdr.s = 1,
> +		.hdr.proto = RTE_BE16(UINT16_MAX),
>  	};
>  	const struct rte_flow_item gre_item = {
>  		.spec = &gre_spec,
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index 4ef4f3044515..291369d437d4 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -930,7 +930,7 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
>  		.size = size,
>  	};
>  #else
> -	static const struct rte_flow_item_gre empty_gre = {0,};
> +	static const struct rte_flow_item_gre empty_gre = {{{0}}};
>  	const struct rte_flow_item_gre *spec = item->spec;
>  	const struct rte_flow_item_gre *mask = item->mask;
>  	unsigned int size = sizeof(struct ibv_flow_spec_gre);
> @@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
>  		if (!mask)
>  			mask = &rte_flow_item_gre_mask;
>  	}
> -	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
> -	tunnel.val.protocol = spec->protocol;
> -	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
> -	tunnel.mask.protocol = mask->protocol;
> +	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
> +	tunnel.val.protocol = spec->hdr.proto;
> +	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
> +	tunnel.mask.protocol = mask->hdr.proto;
>  	/* Remove unwanted bits from values. */
>  	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
>  	tunnel.val.key &= tunnel.mask.key;
> diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
> index bd3a8d2a3b2f..0994fdeeb49f 100644
> --- a/drivers/net/nfp/nfp_flow.c
> +++ b/drivers/net/nfp/nfp_flow.c
> @@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
>  	[RTE_FLOW_ITEM_TYPE_GRE] = {
>  		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
>  		.mask_support = &(const struct rte_flow_item_gre){
> -			.c_rsvd0_ver = RTE_BE16(0xa000),
> -			.protocol = RTE_BE16(0xffff),
> +			.hdr.c = 1,
> +			.hdr.k = 1,
> +			.hdr.proto = RTE_BE16(0xffff),
>  		},
>  		.mask_default = &rte_flow_item_gre_mask,
>  		.mask_sz = sizeof(struct rte_flow_item_gre),
> @@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
>  	memset(set_tun, 0, act_set_size);
>  	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
>  			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
> -	set_tun->tun_proto = gre->protocol;
> +	set_tun->tun_proto = gre->hdr.proto;
>  
>  	/* Send the tunnel neighbor cmsg to fw */
>  	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
> @@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
>  	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
>  	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
>  			ipv6->hdr.hop_limits, tos);
> -	set_tun->tun_proto = gre->protocol;
> +	set_tun->tun_proto = gre->hdr.proto;
>  
>  	/* Send the tunnel neighbor cmsg to fw */
>  	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index e2364823d622..3ae89e367c16 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
>   *
>   * Matches a GRE header.
>   */
> +RTE_STD_C11
>  struct rte_flow_item_gre {
> -	/**
> -	 * Checksum (1b), reserved 0 (12b), version (3b).
> -	 * Refer to RFC 2784.
> -	 */
> -	rte_be16_t c_rsvd0_ver;
> -	rte_be16_t protocol; /**< Protocol type. */
> +	union {
> +		struct {
> +			/*
> +			 * These are old fields kept for compatibility.
> +			 * Please prefer hdr field below.
> +			 */
> +			/**
> +			 * Checksum (1b), reserved 0 (12b), version (3b).
> +			 * Refer to RFC 2784.
> +			 */
> +			rte_be16_t c_rsvd0_ver;
> +			rte_be16_t protocol; /**< Protocol type. */
> +		};
> +		struct rte_gre_hdr hdr; /**< GRE header definition. */
> +	};
>  };
>  
>  /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
>  #ifndef __cplusplus
>  static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
> -	.protocol = RTE_BE16(0xffff),
> +	.hdr.proto = RTE_BE16(UINT16_MAX),
>  };
>  #endif
>  
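
One way to convince oneself that the union keeps old and new names
interchangeable is a compile-time check such as (illustrative, not part of
the patch):

	#include <stddef.h>
	#include <rte_flow.h>

	_Static_assert(offsetof(struct rte_flow_item_gre, protocol) ==
		       offsetof(struct rte_flow_item_gre, hdr.proto),
		       "old and new GRE protocol fields must alias");
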
> diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
> index 6c6aef6fcaa0..210b81c99018 100644
> --- a/lib/net/rte_gre.h
> +++ b/lib/net/rte_gre.h
> @@ -28,6 +28,8 @@ extern "C" {
>   */
>  __extension__
>  struct rte_gre_hdr {
> +	union {
> +		struct {
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>  	uint16_t res2:4; /**< Reserved */
>  	uint16_t s:1;    /**< Sequence Number Present bit */
> @@ -45,6 +47,9 @@ struct rte_gre_hdr {
>  	uint16_t res3:5; /**< Reserved */
>  	uint16_t ver:3;  /**< Version Number */
>  #endif
> +		};
> +		rte_be16_t c_rsvd0_ver;
> +	};
>  	uint16_t proto;  /**< Protocol Type */
>  } __rte_packed;
>  
> -- 
> 2.25.1
> 

-- 
Kind Regards,
Niklas Söderlund

^ permalink raw reply	[flat|nested] 90+ messages in thread

* RE: [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-27 14:33     ` Niklas Söderlund
@ 2023-02-01 17:34     ` Ori Kam
  2023-02-02  9:51     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Ori Kam @ 2023-02-01 17:34 UTC (permalink / raw)
  To: Ferruh Yigit, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Wisam Monther, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Slava Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

Hi Ferruh and Thomas,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, 26 January 2023 18:19
> 
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> The Ethernet headers (including VLAN) structures are used
> instead of the redundant fields in the flow items.
> 
> The remaining protocols to clean up are listed for future work
> in the deprecation list.
> Some protocols are not even defined in the directory net yet.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  app/test-flow-perf/items_gen.c           |   4 +-
>  app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
>  doc/guides/prog_guide/rte_flow.rst       |   7 +-
>  doc/guides/rel_notes/deprecation.rst     |   2 +
>  drivers/net/bnxt/bnxt_flow.c             |  42 +++----
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
>  drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
>  drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
>  drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
>  drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
>  drivers/net/e1000/igb_flow.c             |  14 +--
>  drivers/net/enic/enic_flow.c             |  24 ++--
>  drivers/net/enic/enic_fm_flow.c          |  16 +--
>  drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
>  drivers/net/hns3/hns3_flow.c             |  28 ++---
>  drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
>  drivers/net/i40e/i40e_hash.c             |   4 +-
>  drivers/net/iavf/iavf_fdir.c             |  10 +-
>  drivers/net/iavf/iavf_fsub.c             |  10 +-
>  drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
>  drivers/net/ice/ice_acl_filter.c         |  20 ++--
>  drivers/net/ice/ice_fdir_filter.c        |  14 +--
>  drivers/net/ice/ice_switch_filter.c      |  34 +++---
>  drivers/net/igc/igc_flow.c               |   8 +-
>  drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
>  drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
>  drivers/net/mlx4/mlx4_flow.c             |  38 +++---
>  drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
>  drivers/net/mlx5/mlx5_flow.c             |  24 ++--
>  drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
>  drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
>  drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
>  drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
>  drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
>  drivers/net/nfp/nfp_flow.c               |  12 +-
>  drivers/net/sfc/sfc_flow.c               |  46 ++++----
>  drivers/net/sfc/sfc_mae.c                |  38 +++---
>  drivers/net/tap/tap_flow.c               |  58 +++++-----
>  drivers/net/txgbe/txgbe_flow.c           |  28 ++---
>  39 files changed, 618 insertions(+), 619 deletions(-)
> 

For rte_flow, testpmd, doc.
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 90+ messages in thread

* RE: [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-26 16:18   ` [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-02-01 17:41     ` Ori Kam
  2023-02-02  9:52     ` Andrew Rybchenko
  1 sibling, 0 replies; 90+ messages in thread
From: Ori Kam @ 2023-02-01 17:41 UTC (permalink / raw)
  To: Ferruh Yigit, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Wisam Monther, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu, Matan Azrad,
	Slava Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

Hi Ferruh and Thomas,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, 26 January 2023 18:19
> 
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> In the case of VXLAN-GPE, the protocol struct is added
> in an unnamed union, keeping old field names.
> 
> The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  app/test-flow-perf/actions_gen.c         |  2 +-
>  app/test-flow-perf/items_gen.c           | 12 +++----
>  app/test-pmd/cmdline_flow.c              | 10 +++---
>  doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
>  doc/guides/rel_notes/deprecation.rst     |  1 -
>  drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
>  drivers/net/hns3/hns3_flow.c             | 12 +++----
>  drivers/net/i40e/i40e_flow.c             |  4 +--
>  drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
>  drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
>  drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
>  drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
>  drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
>  drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
>  drivers/net/sfc/sfc_flow.c               |  6 ++--
>  drivers/net/sfc/sfc_mae.c                |  8 ++---
>  lib/ethdev/rte_flow.h                    | 24 ++++++++++----
>  18 files changed, 126 insertions(+), 122 deletions(-)
> 

For rte_flow, test-pmd, doc.
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 90+ messages in thread

* RE: [PATCH v5 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
  2023-01-27 14:34     ` Niklas Söderlund
@ 2023-02-01 17:44     ` Ori Kam
  2023-02-02  9:53     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Ori Kam @ 2023-02-01 17:44 UTC (permalink / raw)
  To: Ferruh Yigit, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Wisam Monther, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Hemant Agrawal, Sachin Saxena, Matan Azrad,
	Slava Ovsiienko, Chaoyong He, Niklas Söderlund,
	Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

Hi Ferruh and Thomas,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, 26 January 2023 18:19
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The GRE header struct members are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>  app/test-flow-perf/items_gen.c           |  4 ++--
>  app/test-pmd/cmdline_flow.c              | 14 +++++------
>  doc/guides/prog_guide/rte_flow.rst       |  6 +----
>  doc/guides/rel_notes/deprecation.rst     |  1 -
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++-----
>  drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++-----
>  drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 +++----
>  drivers/net/mlx5/mlx5_flow.c             | 22 ++++++++---------
>  drivers/net/mlx5/mlx5_flow_dv.c          | 30 +++++++++++++-----------
>  drivers/net/mlx5/mlx5_flow_verbs.c       | 10 ++++----
>  drivers/net/nfp/nfp_flow.c               |  9 +++----
>  lib/ethdev/rte_flow.h                    | 24 +++++++++++++------
>  lib/net/rte_gre.h                        |  5 ++++
>  13 files changed, 84 insertions(+), 73 deletions(-)
> 

For rte_flow, test-pmd, doc, rte_gre
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori

^ permalink raw reply	[flat|nested] 90+ messages in thread

* RE: [PATCH v5 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-02-01 17:46     ` Ori Kam
  2023-02-02  9:55     ` Andrew Rybchenko
  1 sibling, 0 replies; 90+ messages in thread
From: Ori Kam @ 2023-02-01 17:46 UTC (permalink / raw)
  To: Ferruh Yigit, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

Hi Ferruh and Thomas,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, 26 January 2023 18:19
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The ARP header struct members are used in testpmd
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---

For rte_flow, test-pmd, doc
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori

^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-01-27 14:33     ` Niklas Söderlund
  2023-02-01 17:34     ` Ori Kam
@ 2023-02-02  9:51     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:51 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Chas Williams,
	Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

On 1/26/23 19:18, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> The Ethernet headers (including VLAN) structures are used
> instead of the redundant fields in the flow items.
> 
> The remaining protocols to clean up are listed for future work
> in the deprecation list.
> Some protocols are not even defined in the directory net yet.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-01-26 16:18   ` [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
  2023-02-01 17:41     ` Ori Kam
@ 2023-02-02  9:52     ` Andrew Rybchenko
  1 sibling, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:52 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Dongdong Liu,
	Yisen Zhuang, Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu,
	Wenjun Wu, Matan Azrad, Viacheslav Ovsiienko
  Cc: David Marchand, dev

On 1/26/23 19:18, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> In the case of VXLAN-GPE, the protocol struct is added
> in an unnamed union, keeping old field names.
> 
> The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 4/8] ethdev: use GRE protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
  2023-01-27 14:34     ` Niklas Söderlund
  2023-02-01 17:44     ` Ori Kam
@ 2023-02-02  9:53     ` Andrew Rybchenko
  2 siblings, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:53 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Ajit Khaparde, Somnath Kotur, Hemant Agrawal,
	Sachin Saxena, Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Olivier Matz
  Cc: David Marchand, dev

On 1/26/23 19:19, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The GRE header struct members are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 5/8] ethdev: use GTP protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-02-02  9:54     ` Andrew Rybchenko
  0 siblings, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:54 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh,
	Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
	Matan Azrad, Viacheslav Ovsiienko
  Cc: David Marchand, dev

On 1/26/23 19:19, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The GTP header struct members are used in apps and drivers
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 6/8] ethdev: use ARP protocol struct for flow matching
  2023-01-26 16:19   ` [PATCH v5 6/8] ethdev: use ARP " Ferruh Yigit
  2023-02-01 17:46     ` Ori Kam
@ 2023-02-02  9:55     ` Andrew Rybchenko
  1 sibling, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:55 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang
  Cc: David Marchand, dev

On 1/26/23 19:19, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> As announced in the deprecation notice, flow item structures
> should re-use the protocol header definitions from the directory lib/net/.
> 
> The protocol struct is added in an unnamed union, keeping old field names.
> 
> The ARP header struct members are used in testpmd
> instead of the redundant fields in the flow items.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 7/8] doc: fix description of L2TPV2 flow item
  2023-01-26 16:19   ` [PATCH v5 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-02-02  9:56     ` Andrew Rybchenko
  0 siblings, 0 replies; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:56 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Ori Kam, Wenjun Wu, Jie Wang,
	Ferruh Yigit
  Cc: David Marchand, dev, stable

On 1/26/23 19:19, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> The flow item structure includes the protocol definition
> from the directory lib/net/, so it is reflected in the guide.
> 
> Section title underlining is also fixed around.
> 
> Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 8/8] net: mark all big endian types
  2023-01-26 16:19   ` [PATCH v5 8/8] net: mark all big endian types Ferruh Yigit
@ 2023-02-02 10:01     ` Andrew Rybchenko
  2023-02-02 11:11       ` Ferruh Yigit
  0 siblings, 1 reply; 90+ messages in thread
From: Andrew Rybchenko @ 2023-02-02 10:01 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, Ori Kam, Olivier Matz; +Cc: David Marchand, dev

On 1/26/23 19:19, Ferruh Yigit wrote:
> From: Thomas Monjalon <thomas@monjalon.net>
> 
> Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
> types for their 16 and 32-bit fields.
> It was correct but not conveying the big endian nature of these fields.
> 
> As for other protocols defined in this directory,
> all types are explicitly marked as big endian fields.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

One nit below,

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

> diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
> index 076c8ab314ee..151e6c641fc5 100644
> --- a/lib/net/rte_arp.h
> +++ b/lib/net/rte_arp.h
> @@ -23,28 +23,28 @@ extern "C" {
>    */
>   struct rte_arp_ipv4 {
>   	struct rte_ether_addr arp_sha;  /**< sender hardware address */
> -	uint32_t          arp_sip;  /**< sender IP address */
> +	rte_be32_t            arp_sip;  /**< sender IP address */
>   	struct rte_ether_addr arp_tha;  /**< target hardware address */
> -	uint32_t          arp_tip;  /**< target IP address */
> +	rte_be32_t            arp_tip;  /**< target IP address */
>   } __rte_packed __rte_aligned(2);
>   
>   /**
>    * ARP header.
>    */
>   struct rte_arp_hdr {
> -	uint16_t arp_hardware;    /* format of hardware address */
> -#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
> +	rte_be16_t arp_hardware; /** format of hardware address */
> +#define RTE_ARP_HRD_ETHER     1  /** ARP Ethernet address format */

The comment is changed above, but it is still wrong. It should be
/**<. So I'd either not touch it at all, or fix it in the right
way. Same for all fields below.
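
As a minimal sketch of the doxygen rule in question (an illustration,
not part of the patch), using the field names from the hunk above:

	struct example {
		/** Leading form: documents the member that follows. */
		rte_be16_t arp_hardware;
		rte_be16_t arp_protocol; /**< Trailing form: documents arp_protocol. */
	};

A bare /** at the end of a declaration line is still parsed as a
leading comment for the next declaration, so only /**< attaches the
text to the field on the same line.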

>   
> -	uint16_t arp_protocol;    /* format of protocol address */
> -	uint8_t  arp_hlen;    /* length of hardware address */
> -	uint8_t  arp_plen;    /* length of protocol address */
> -	uint16_t arp_opcode;     /* ARP opcode (command) */
> -#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
> -#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
> -#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
> -#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
> -#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
> -#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
> +	rte_be16_t arp_protocol; /** format of protocol address */
> +	uint8_t    arp_hlen;     /** length of hardware address */
> +	uint8_t    arp_plen;     /** length of protocol address */
> +	rte_be16_t arp_opcode;   /** ARP opcode (command) */
> +#define	RTE_ARP_OP_REQUEST    1  /** request to resolve address */
> +#define	RTE_ARP_OP_REPLY      2  /** response to previous request */
> +#define	RTE_ARP_OP_REVREQUEST 3  /** request proto addr given hardware */
> +#define	RTE_ARP_OP_REVREPLY   4  /** response giving protocol address */
> +#define	RTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
> +#define	RTE_ARP_OP_INVREPLY   9  /** response identifying peer */
>   
>   	struct rte_arp_ipv4 arp_data;
>   } __rte_packed __rte_aligned(2);

[snip]

^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v5 8/8] net: mark all big endian types
  2023-02-02 10:01     ` Andrew Rybchenko
@ 2023-02-02 11:11       ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 11:11 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon, Ori Kam, Olivier Matz
  Cc: David Marchand, dev

On 2/2/2023 10:01 AM, Andrew Rybchenko wrote:
> On 1/26/23 19:19, Ferruh Yigit wrote:
>> From: Thomas Monjalon <thomas@monjalon.net>
>>
>> Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
>> types for their 16 and 32-bit fields.
>> It was correct but not conveying the big endian nature of these fields.
>>
>> As for other protocols defined in this directory,
>> all types are explicitly marked as big endian fields.
>>
>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> 
> One nit below,
> 
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
>> diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
>> index 076c8ab314ee..151e6c641fc5 100644
>> --- a/lib/net/rte_arp.h
>> +++ b/lib/net/rte_arp.h
>> @@ -23,28 +23,28 @@ extern "C" {
>>    */
>>   struct rte_arp_ipv4 {
>>       struct rte_ether_addr arp_sha;  /**< sender hardware address */
>> -    uint32_t          arp_sip;  /**< sender IP address */
>> +    rte_be32_t            arp_sip;  /**< sender IP address */
>>       struct rte_ether_addr arp_tha;  /**< target hardware address */
>> -    uint32_t          arp_tip;  /**< target IP address */
>> +    rte_be32_t            arp_tip;  /**< target IP address */
>>   } __rte_packed __rte_aligned(2);
>>     /**
>>    * ARP header.
>>    */
>>   struct rte_arp_hdr {
>> -    uint16_t arp_hardware;    /* format of hardware address */
>> -#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
>> +    rte_be16_t arp_hardware; /** format of hardware address */
>> +#define RTE_ARP_HRD_ETHER     1  /** ARP Ethernet address format */
> 
> The comment is changed above, but it is still wrong. It should be
> /**<. So I'd either not touch it at all, or fix it in the right
> way. Same for all fields below.
> 

ack

>>   -    uint16_t arp_protocol;    /* format of protocol address */
>> -    uint8_t  arp_hlen;    /* length of hardware address */
>> -    uint8_t  arp_plen;    /* length of protocol address */
>> -    uint16_t arp_opcode;     /* ARP opcode (command) */
>> -#define    RTE_ARP_OP_REQUEST    1 /* request to resolve address */
>> -#define    RTE_ARP_OP_REPLY      2 /* response to previous request */
>> -#define    RTE_ARP_OP_REVREQUEST 3 /* request proto addr given
>> hardware */
>> -#define    RTE_ARP_OP_REVREPLY   4 /* response giving protocol
>> address */
>> -#define    RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
>> -#define    RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
>> +    rte_be16_t arp_protocol; /** format of protocol address */
>> +    uint8_t    arp_hlen;     /** length of hardware address */
>> +    uint8_t    arp_plen;     /** length of protocol address */
>> +    rte_be16_t arp_opcode;   /** ARP opcode (command) */
>> +#define    RTE_ARP_OP_REQUEST    1  /** request to resolve address */
>> +#define    RTE_ARP_OP_REPLY      2  /** response to previous request */
>> +#define    RTE_ARP_OP_REVREQUEST 3  /** request proto addr given
>> hardware */
>> +#define    RTE_ARP_OP_REVREPLY   4  /** response giving protocol
>> address */
>> +#define    RTE_ARP_OP_INVREQUEST 8  /** request to identify peer */
>> +#define    RTE_ARP_OP_INVREPLY   9  /** response identifying peer */
>>         struct rte_arp_ipv4 arp_data;
>>   } __rte_packed __rte_aligned(2);
> 
> [snip]


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v6 0/8] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (11 preceding siblings ...)
  2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-02-02 12:44 ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (7 more replies)
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
  13 siblings, 8 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.

v6:
 * Fix "struct rte_arp_hdr" field doxygen comment

v5:
 * Fix more RHEL7 build error

v4:
 * Fix build error for RHEL7 (gcc 4.8.5) caused by nested struct
   initialization.

v3:
 * Updated Higig2 protocol flow item assignment taking into account endianness
   annotations.

v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly (sketched just after this list)
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp
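
For context, the combined field mentioned in the list above ends up
looking roughly like this in lib/net/rte_gre.h (a sketch based on this
series; the big-endian bitfield variant and doxygen markup are elided):

	struct rte_gre_hdr {
		union {
			struct {		/* little-endian bit order shown */
				uint16_t res2:4;
				uint16_t s:1;	/* Sequence Number Present */
				uint16_t k:1;	/* Key Present */
				uint16_t res1:1;
				uint16_t c:1;	/* Checksum Present */
				uint16_t ver:3;	/* Version Number */
				uint16_t res3:5;
			};
			rte_be16_t c_rsvd0_ver;	/* same 16 bits as one word */
		};
		rte_be16_t proto;	/* protocol type of encapsulated packet */
	} __rte_packed;

The combined member lets drivers match or set the flag and version bits
with a single big-endian 16-bit access instead of assembling bitfields.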

Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (8):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GRE protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  24 +--
 app/test-pmd/cmdline_flow.c              | 180 +++++++++++-----------
 doc/guides/prog_guide/rte_flow.rst       |  57 ++-----
 doc/guides/rel_notes/deprecation.rst     |   6 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 112 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  60 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 +--
 drivers/net/enic/enic_fm_flow.c          |  16 +-
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 ++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 +++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 +--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 ++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  48 +++---
 drivers/net/mlx5/mlx5_flow.c             |  62 ++++----
 drivers/net/mlx5/mlx5_flow_dv.c          | 184 ++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c       |  48 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  21 +--
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++---
 drivers/net/tap/tap_flow.c               |  58 +++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    | 121 ++++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   7 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 988 insertions(+), 953 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v6 1/8] ethdev: use Ethernet protocol struct for flow matching
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 2/8] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet headers (including VLAN) structures are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
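The rte_flow_item_eth layout that apps and drivers are migrated to here
keeps the legacy names in an unnamed union beside the lib/net/ header,
roughly as in this sketch (doxygen markup trimmed):

	struct rte_flow_item_eth {
		union {
			struct {
				/* Legacy fields, kept for compatibility. */
				struct rte_ether_addr dst;	/* destination MAC */
				struct rte_ether_addr src;	/* source MAC */
				rte_be16_t type;		/* EtherType or TPID */
			};
			struct rte_ether_hdr hdr;	/* the lib/net/ definition */
		};
		uint32_t has_vlan:1;	/* at least one VLAN in the header */
		uint32_t reserved:31;	/* must be zero */
	};

Code built against eth_spec->dst keeps compiling, while the hunks below
move everything to eth_spec->hdr.dst_addr and friends.
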
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type field``.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``:  header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index eea99b454005..4782d2e680d3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,6 +63,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In future (target is DPDK 23.11), the protocol header fields will be cleaned
+  and only protocol header struct will remain.
 
   These items are not compliant (not including struct from lib/net/):
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f70c2c290577..f0c4f7d26b86 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
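
As a reminder of the byte-order handling above: hdr.vlan_tci is big
endian, so drivers convert with rte_be_to_cpu_16() before slicing out
the TCI subfields. A minimal sketch, assuming <rte_byteorder.h> is
included and spec is a const struct rte_flow_item_vlan * as in the
parser above:

	uint16_t tci = rte_be_to_cpu_16(spec->hdr.vlan_tci);
	uint16_t vid = tci & 0x0fff;      /* VLAN identifier */
	uint8_t  pcp = tci >> 13;         /* priority code point */
	uint8_t  dei = (tci >> 12) & 0x1; /* drop eligible indicator */
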
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
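
The checks above accept exactly the kind of pattern sketched below:
source MAC fully wildcarded, destination MAC either wildcarded or fully
masked, and an exact EtherType. A hypothetical PTP (0x88F7) match using
the new field names, assuming <rte_flow.h> is included:

	static const struct rte_flow_item_eth ptp_spec = {
		.hdr.ether_type = RTE_BE16(0x88F7),
	};
	static const struct rte_flow_item_eth ptp_mask = {
		/* MACs left all-zero: only the EtherType is matched. */
		.hdr.ether_type = RTE_BE16(0xffff),
	};
	static const struct rte_flow_item ptp_pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_ETH,
			.spec = &ptp_spec,
			.mask = &ptp_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
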
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If eth_proto is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tags are not supported. */
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
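
Each X() row above expands into one generated tag setter, so the rename
only has to touch this list. A standalone sketch of the pattern with
hypothetical names (not the mlx5dr generator itself), assuming
<rte_flow.h> is included:

	/* One row per matched field: X(name, struct_field). */
	#define VLAN_FIELDS \
		X(tci,       hdr.vlan_tci) \
		X(eth_proto, hdr.eth_proto)

	/* Expand each row into a setter copying the big-endian field. */
	#define X(name, field) \
	static void vlan_##name##_set(rte_be16_t *tag, \
				      const struct rte_flow_item_vlan *v) \
	{ \
		*tag = v->field; \
	}
	VLAN_FIELDS
	#undef X
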
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single\multi\un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ff2e21c817b4..bd3a8d2a3b2f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
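
For reference, a minimal sketch of the application-side pattern code
after this conversion, using only the hdr-based field names from the
hunks above. The MAC address, VLAN ID and EtherType values are
illustrative, not taken from the series:

	#include <rte_flow.h>
	#include <rte_ether.h>
	#include <rte_byteorder.h>

	/* Match DMAC 00:11:22:33:44:55, VLAN ID 100, inner IPv4. */
	static const struct rte_flow_item_eth eth_spec = {
		.hdr.dst_addr.addr_bytes = "\x00\x11\x22\x33\x44\x55",
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_VLAN),
	};
	static const struct rte_flow_item_eth eth_mask = {
		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
		.hdr.ether_type = RTE_BE16(0xffff),
	};
	static const struct rte_flow_item_vlan vlan_spec = {
		.hdr.vlan_tci = RTE_BE16(100),
		.hdr.eth_proto = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	static const struct rte_flow_item_vlan vlan_mask = {
		.hdr.vlan_tci = RTE_BE16(0x0fff), /* VID bits only */
		.hdr.eth_proto = RTE_BE16(0xffff),
	};
	/* Pass to rte_flow_validate()/rte_flow_create() together with
	 * the desired actions; none of the old dst/src/type/tci/
	 * inner_type names appear anymore.
	 */
	static const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,
		  .spec = &vlan_spec, .mask = &vlan_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};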

* [PATCH v6 2/8] net: add smaller fields for VXLAN
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers packed reserved fields
together with other fields in big uint32_t struct members.

More precise definitions are added as a union with the old ones.

The new struct members are smaller in size and have shorter names.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
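
A short sketch of how the new byte-wise members above are meant to be
consumed. The helper names are made up for illustration and are not
part of lib/net:

	#include <stdint.h>
	#include <rte_vxlan.h>
	#include <rte_byteorder.h>

	/* New fields: the VNI is three network-order bytes. */
	static inline uint32_t
	vxlan_hdr_get_vni(const struct rte_vxlan_hdr *hdr)
	{
		return ((uint32_t)hdr->vni[0] << 16) |
		       ((uint32_t)hdr->vni[1] << 8) |
		       hdr->vni[2];
	}

	/* Same value through the old 32-bit member, for comparison:
	 * vx_vni is VNI (24) + Reserved (8), so the reserved byte
	 * must be shifted away after the byte swap.
	 */
	static inline uint32_t
	vxlan_hdr_get_vni_old(const struct rte_vxlan_hdr *hdr)
	{
		return rte_be_to_cpu_32(hdr->vx_vni) >> 8;
	}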

* [PATCH v6 3/8] ethdev: use VXLAN protocol struct for flow matching
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 2/8] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 4/8] ethdev: use GRE " Ferruh Yigit
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should reuse the protocol header definitions from the lib/net/ directory.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)
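
The items_gen.c hunk below is representative of the whole conversion;
condensed into a sketch (the VNI value is illustrative):

	/* Fill a VXLAN item spec/mask through the new hdr member;
	 * hdr.vni[] holds the three VNI bytes in network order.
	 */
	struct rte_flow_item_vxlan vxlan_spec = { .hdr.flags = 0x08 };
	struct rte_flow_item_vxlan vxlan_mask = { 0 };
	uint32_t vni = 42; /* illustrative */
	int i;

	for (i = 0; i < 3; i++) {
		vxlan_spec.hdr.vni[2 - i] = vni >> (i * 8);
		vxlan_mask.hdr.vni[2 - i] = 0xff;
	}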

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4782d2e680d3..df8b5bcb1b64 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -90,7 +90,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v6 4/8] ethdev: use GRE protocol struct for flow matching
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-02-02 12:44   ` [PATCH v6 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 17:16     ` Thomas Monjalon
  2023-02-02 12:44   ` [PATCH v6 5/8] ethdev: use GTP " Ferruh Yigit
                     ` (3 subsequent siblings)
  7 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Hemant Agrawal, Sachin Saxena,
	Matan Azrad, Viacheslav Ovsiienko, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping the old field names.

The GRE header struct members are now used in apps and drivers
instead of the redundant fields in the flow items.
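
For example, checks that previously masked c_rsvd0_ver by hand can now
use the named bits. This is a minimal sketch with illustrative
NVGRE-style values, not part of the patch:

#include <rte_flow.h>
#include <rte_byteorder.h>
#include <rte_ether.h>

/* GRE with the key bit set, carrying transparent Ethernet. */
static const struct rte_flow_item_gre gre_spec = {
	.hdr.k = 1,
	.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
};
static const struct rte_flow_item_gre gre_mask = {
	.hdr.k = 1, /* replaces c_rsvd0_ver & RTE_BE16(0x2000) */
	.hdr.proto = RTE_BE16(0xffff),
};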

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/items_gen.c           |  4 ++--
 app/test-pmd/cmdline_flow.c              | 14 +++++------
 doc/guides/prog_guide/rte_flow.rst       |  6 +----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 +++++-----
 drivers/net/dpaa2/dpaa2_flow.c           | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  8 +++----
 drivers/net/mlx5/mlx5_flow.c             | 22 ++++++++---------
 drivers/net/mlx5/mlx5_flow_dv.c          | 30 +++++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       | 10 ++++----
 drivers/net/nfp/nfp_flow.c               |  9 +++----
 lib/ethdev/rte_flow.h                    | 24 +++++++++++++------
 lib/net/rte_gre.h                        |  5 ++++
 13 files changed, 84 insertions(+), 73 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..0f19e5e53648 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -173,10 +173,10 @@ add_gre(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gre gre_spec = {
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB),
 	};
 	static struct rte_flow_item_gre gre_mask = {
-		.protocol = RTE_BE16(0xffff),
+		.hdr.proto = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..0e115956514c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4071,7 +4071,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     protocol)),
+					     hdr.proto)),
 	},
 	[ITEM_GRE_C_RSVD0_VER] = {
 		.name = "c_rsvd0_ver",
@@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
-					     c_rsvd0_ver)),
+					     hdr.c_rsvd0_ver)),
 	},
 	[ITEM_GRE_C_BIT] = {
 		.name = "c_bit",
@@ -4090,7 +4090,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x80\x00\x00\x00")),
 	},
 	[ITEM_GRE_S_BIT] = {
@@ -4098,7 +4098,7 @@ static const struct token token_list[] = {
 		.help = "sequence number bit (S)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x10\x00\x00\x00")),
 	},
 	[ITEM_GRE_K_BIT] = {
@@ -4106,7 +4106,7 @@ static const struct token token_list[] = {
 		.help = "key bit (K)",
 		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_BOOLEAN), item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_gre,
-						  c_rsvd0_ver,
+						  hdr.c_rsvd0_ver,
 						  "\x20\x00\x00\x00")),
 	},
 	[ITEM_FUZZY] = {
@@ -7837,7 +7837,7 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls = {
 		.ttl = 0,
@@ -7935,7 +7935,7 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 		},
 	};
 	struct rte_flow_item_gre gre = {
-		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+		.hdr.proto = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
 	};
 	struct rte_flow_item_mpls mpls;
 	uint8_t *header;
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..603e1b866be3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -980,8 +980,7 @@ Item: ``GRE``
 
 Matches a GRE header.
 
-- ``c_rsvd0_ver``: checksum, reserved 0 and version.
-- ``protocol``: protocol type.
+- ``hdr``:  header definition (``rte_gre.h``).
 - Default ``mask`` matches protocol only.
 
 Item: ``GRE_KEY``
@@ -1000,9 +999,6 @@ Item: ``GRE_OPTION``
 Matches GRE optional fields (checksum/key/sequence).
 This should be preceded by item ``GRE``.
 
-- ``checksum``: checksum.
-- ``key``: key.
-- ``sequence``: sequence.
 - The items in GRE_OPTION do not change the bit flags (c_bit/k_bit/s_bit) in the GRE
   item. The bit flags need to be set with the GRE item by the application. When the
   items are present, the corresponding bits in GRE spec and mask should be set "1" by
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index df8b5bcb1b64..3bb15cf9f7a5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -73,7 +73,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gre``
   - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 80869b79c3fe..c1e231ce8c49 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1461,16 +1461,16 @@ ulp_rte_gre_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.c_rsvd0_ver);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, c_rsvd0_ver),
-			      ulp_deference_struct(gre_mask, c_rsvd0_ver),
+			      ulp_deference_struct(gre_spec, hdr.c_rsvd0_ver),
+			      ulp_deference_struct(gre_mask, hdr.c_rsvd0_ver),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_gre *)NULL)->protocol);
+	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(gre_spec, protocol),
-			      ulp_deference_struct(gre_mask, protocol),
+			      ulp_deference_struct(gre_spec, hdr.proto),
+			      ulp_deference_struct(gre_mask, hdr.proto),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with GRE */
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e6065097..8a6d44da4875 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -154,7 +154,7 @@ static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = {
 };
 
 static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(0xffff),
 };
 
 #endif
@@ -2792,7 +2792,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->protocol)
+	if (!mask->hdr.proto)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -2841,8 +2841,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_GRE,
 				NH_FLD_GRE_TYPE,
-				&spec->protocol,
-				&mask->protocol,
+				&spec->hdr.proto,
+				&mask->hdr.proto,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -2855,8 +2855,8 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_GRE,
 			NH_FLD_GRE_TYPE,
-			&spec->protocol,
-			&mask->protocol,
+			&spec->hdr.proto,
+			&mask->hdr.proto,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..3a438f2c9d12 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -156,8 +156,8 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		source_qp,		v->queue,		mlx5_rte_flow_item_sq) \
 	X(SET,		tag,			v->data,		rte_flow_item_tag) \
 	X(SET,		metadata,		v->data,		rte_flow_item_meta) \
-	X(SET_BE16,	gre_c_ver,		v->c_rsvd0_ver,		rte_flow_item_gre) \
-	X(SET_BE16,	gre_protocol_type,	v->protocol,		rte_flow_item_gre) \
+	X(SET_BE16,	gre_c_ver,		v->hdr.c_rsvd0_ver,	rte_flow_item_gre) \
+	X(SET_BE16,	gre_protocol_type,	v->hdr.proto,		rte_flow_item_gre) \
 	X(SET,		ipv4_protocol_gre,	IPPROTO_GRE,		rte_flow_item_gre) \
 	X(SET_BE32,	gre_opt_key,		v->key.key,		rte_flow_item_gre_opt) \
 	X(SET_BE32,	gre_opt_seq,		v->sequence.sequence,	rte_flow_item_gre_opt) \
@@ -1210,7 +1210,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->c_rsvd0_ver) {
+	if (m->hdr.c_rsvd0_ver) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_c_ver_set;
@@ -1219,7 +1219,7 @@ mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd,
 		fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver);
 	}
 
-	if (m->protocol) {
+	if (m->hdr.proto) {
 		fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_gre_protocol_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ff08a629e2c6..7b19c5f03f5d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -329,7 +329,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_GRE:
-		MLX5_XSET_ITEM_MASK_SPEC(gre, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(gre, hdr.proto);
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
@@ -3089,8 +3089,7 @@ mlx5_flow_validate_item_gre_key(const struct rte_flow_item *item,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	gre_spec = gre_item->spec;
-	if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-			 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+	if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Key bit must be on");
@@ -3165,21 +3164,18 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev,
 	if (!gre_mask)
 		gre_mask = &rte_flow_item_gre_mask;
 	if (mask->checksum_rsvd.checksum)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x8000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x8000)))
+		if (gre_spec && (gre_mask->hdr.c) && !(gre_spec->hdr.c))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "Checksum bit must be on");
 	if (mask->key.key)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x2000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x2000)))
+		if (gre_spec && (gre_mask->hdr.k) && !(gre_spec->hdr.k))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item, "Key bit must be on");
 	if (mask->sequence.sequence)
-		if (gre_spec && (gre_mask->c_rsvd0_ver & RTE_BE16(0x1000)) &&
-				 !(gre_spec->c_rsvd0_ver & RTE_BE16(0x1000)))
+		if (gre_spec && (gre_mask->hdr.s) && !(gre_spec->hdr.s))
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
@@ -3230,8 +3226,10 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 	const struct rte_flow_item_gre *mask = item->mask;
 	int ret;
 	const struct rte_flow_item_gre nic_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 
 	if (target_protocol != 0xff && target_protocol != IPPROTO_GRE)
@@ -3259,7 +3257,7 @@ mlx5_flow_validate_item_gre(const struct rte_flow_item *item,
 		return ret;
 #ifndef HAVE_MLX5DV_DR
 #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
-	if (spec && (spec->protocol & mask->protocol))
+	if (spec && (spec->hdr.proto & mask->hdr.proto))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "without MPLS support the"
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..2b9c2ba6a4b5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8984,7 +8984,7 @@ static void
 flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 			   uint64_t pattern_flags, uint32_t key_type)
 {
-	static const struct rte_flow_item_gre empty_gre = {0,};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	const struct rte_flow_item_gre *gre_m = item->mask;
 	const struct rte_flow_item_gre *gre_v = item->spec;
 	void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
@@ -9021,8 +9021,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 		gre_v = gre_m;
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		gre_m = gre_v;
-	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
-	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
+	gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->hdr.c_rsvd0_ver);
+	gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->hdr.c_rsvd0_ver);
 	MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
 		 gre_crks_rsvd0_ver_v.c_present &
 		 gre_crks_rsvd0_ver_m.c_present);
@@ -9032,8 +9032,8 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
 	MLX5_SET(fte_match_set_misc, misc_v, gre_s_present,
 		 gre_crks_rsvd0_ver_v.s_present &
 		 gre_crks_rsvd0_ver_m.s_present);
-	protocol_m = rte_be_to_cpu_16(gre_m->protocol);
-	protocol_v = rte_be_to_cpu_16(gre_v->protocol);
+	protocol_m = rte_be_to_cpu_16(gre_m->hdr.proto);
+	protocol_v = rte_be_to_cpu_16(gre_v->hdr.proto);
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
@@ -9072,7 +9072,7 @@ flow_dv_translate_item_gre_option(void *key,
 	const struct rte_flow_item_gre_opt *option_v = item->spec;
 	const struct rte_flow_item_gre *gre_m = gre_item->mask;
 	const struct rte_flow_item_gre *gre_v = gre_item->spec;
-	static const struct rte_flow_item_gre empty_gre = {0};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	struct rte_flow_item gre_key_item;
 	uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v;
 	uint16_t protocol_m, protocol_v;
@@ -9097,8 +9097,8 @@ flow_dv_translate_item_gre_option(void *key,
 		if (!gre_m)
 			gre_m = &rte_flow_item_gre_mask;
 	}
-	protocol_v = gre_v->protocol;
-	protocol_m = gre_m->protocol;
+	protocol_v = gre_v->hdr.proto;
+	protocol_m = gre_m->hdr.proto;
 	if (!protocol_m) {
 		/* Force next protocol to prevent matchers duplication */
 		uint16_t ether_type =
@@ -9108,8 +9108,8 @@ flow_dv_translate_item_gre_option(void *key,
 			protocol_m = UINT16_MAX;
 		}
 	}
-	c_rsvd0_ver_v = gre_v->c_rsvd0_ver;
-	c_rsvd0_ver_m = gre_m->c_rsvd0_ver;
+	c_rsvd0_ver_v = gre_v->hdr.c_rsvd0_ver;
+	c_rsvd0_ver_m = gre_m->hdr.c_rsvd0_ver;
 	if (option_m->sequence.sequence) {
 		c_rsvd0_ver_v |= RTE_BE16(0x1000);
 		c_rsvd0_ver_m |= RTE_BE16(0x1000);
@@ -9171,12 +9171,14 @@ flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item,
 
 	/* For NVGRE, GRE header fields must be set with defined values. */
 	const struct rte_flow_item_gre gre_spec = {
-		.c_rsvd0_ver = RTE_BE16(0x2000),
-		.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB)
+		.hdr.k = 1,
+		.hdr.proto = RTE_BE16(RTE_ETHER_TYPE_TEB)
 	};
 	const struct rte_flow_item_gre gre_mask = {
-		.c_rsvd0_ver = RTE_BE16(0xB000),
-		.protocol = RTE_BE16(UINT16_MAX),
+		.hdr.c = 1,
+		.hdr.k = 1,
+		.hdr.s = 1,
+		.hdr.proto = RTE_BE16(UINT16_MAX),
 	};
 	const struct rte_flow_item gre_item = {
 		.spec = &gre_spec,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 4ef4f3044515..291369d437d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -930,7 +930,7 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		.size = size,
 	};
 #else
-	static const struct rte_flow_item_gre empty_gre = {0,};
+	static const struct rte_flow_item_gre empty_gre = {{{0}}};
 	const struct rte_flow_item_gre *spec = item->spec;
 	const struct rte_flow_item_gre *mask = item->mask;
 	unsigned int size = sizeof(struct ibv_flow_spec_gre);
@@ -946,10 +946,10 @@ flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
 		if (!mask)
 			mask = &rte_flow_item_gre_mask;
 	}
-	tunnel.val.c_ks_res0_ver = spec->c_rsvd0_ver;
-	tunnel.val.protocol = spec->protocol;
-	tunnel.mask.c_ks_res0_ver = mask->c_rsvd0_ver;
-	tunnel.mask.protocol = mask->protocol;
+	tunnel.val.c_ks_res0_ver = spec->hdr.c_rsvd0_ver;
+	tunnel.val.protocol = spec->hdr.proto;
+	tunnel.mask.c_ks_res0_ver = mask->hdr.c_rsvd0_ver;
+	tunnel.mask.protocol = mask->hdr.proto;
 	/* Remove unwanted bits from values. */
 	tunnel.val.c_ks_res0_ver &= tunnel.mask.c_ks_res0_ver;
 	tunnel.val.key &= tunnel.mask.key;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bd3a8d2a3b2f..0994fdeeb49f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1812,8 +1812,9 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
 		.mask_support = &(const struct rte_flow_item_gre){
-			.c_rsvd0_ver = RTE_BE16(0xa000),
-			.protocol = RTE_BE16(0xffff),
+			.hdr.c = 1,
+			.hdr.k = 1,
+			.hdr.proto = RTE_BE16(0xffff),
 		},
 		.mask_default = &rte_flow_item_gre_mask,
 		.mask_sz = sizeof(struct rte_flow_item_gre),
@@ -3144,7 +3145,7 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 	memset(set_tun, 0, act_set_size);
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
@@ -3181,7 +3182,7 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
 	nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
 			ipv6->hdr.hop_limits, tos);
-	set_tun->tun_proto = gre->protocol;
+	set_tun->tun_proto = gre->hdr.proto;
 
 	/* Send the tunnel neighbor cmsg to fw */
 	return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..3ae89e367c16 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1070,19 +1070,29 @@ static const struct rte_flow_item_mpls rte_flow_item_mpls_mask = {
  *
  * Matches a GRE header.
  */
+RTE_STD_C11
 struct rte_flow_item_gre {
-	/**
-	 * Checksum (1b), reserved 0 (12b), version (3b).
-	 * Refer to RFC 2784.
-	 */
-	rte_be16_t c_rsvd0_ver;
-	rte_be16_t protocol; /**< Protocol type. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Checksum (1b), reserved 0 (12b), version (3b).
+			 * Refer to RFC 2784.
+			 */
+			rte_be16_t c_rsvd0_ver;
+			rte_be16_t protocol; /**< Protocol type. */
+		};
+		struct rte_gre_hdr hdr; /**< GRE header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
-	.protocol = RTE_BE16(0xffff),
+	.hdr.proto = RTE_BE16(UINT16_MAX),
 };
 #endif
 
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..210b81c99018 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -28,6 +28,8 @@ extern "C" {
  */
 __extension__
 struct rte_gre_hdr {
+	union {
+		struct {
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t res2:4; /**< Reserved */
 	uint16_t s:1;    /**< Sequence Number Present bit */
@@ -45,6 +47,9 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
+		};
+		rte_be16_t c_rsvd0_ver;
+	};
 	uint16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread

* [PATCH v6 5/8] ethdev: use GTP protocol struct for flow matching
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-02-02 12:44   ` [PATCH v6 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 6/8] ethdev: use ARP " Ferruh Yigit
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping the old field names.

The GTP header struct members are now used in apps and drivers
instead of the redundant fields in the flow items.
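
For example, a TEID match reads as follows with the new header struct.
This is a minimal sketch with an illustrative TEID value, not part of
the patch; the mask mirrors the teid-only default mask:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Match tunnel endpoint identifier 0x1234, ignoring other fields. */
static const struct rte_flow_item_gtp gtp_spec = {
	.hdr.teid = RTE_BE32(0x1234),
};
static const struct rte_flow_item_gtp gtp_mask = {
	.hdr.teid = RTE_BE32(0xffffffff),
};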

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 20 ++++++++---------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 78 insertions(+), 75 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index 0f19e5e53648..55eb6f5cf009 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0e115956514c..dd6da9d98d9b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 603e1b866be3..ec2e335fac3d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1064,12 +1064,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1235,8 +1230,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3bb15cf9f7a5..39de294d7e77 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -73,7 +73,6 @@ Deprecation Notices
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 3a438f2c9d12..127cebcf3e11 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2b9c2ba6a4b5..bdd56cf0f9ae 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -9320,7 +9320,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 				 const uint64_t pattern_flags,
 				 uint32_t key_type)
 {
-	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, };
+	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {{{0}}};
 	const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask;
 	const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec;
 	/* The item was validated to be on the outer side */
@@ -10358,11 +10358,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3ae89e367c16..85ca73d1dc04 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1149,23 +1149,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
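
To see what the anonymous union above buys applications, here is a minimal
sketch (the TEID value and the helper name are illustrative, not from the
patch): the old names and the new hdr member alias the same storage, so both
assignments below write the same bytes and existing code keeps compiling.

	#include <rte_flow.h>
	#include <rte_byteorder.h>

	/* Sketch only: both lines touch the same bytes, because the old
	 * field names and the new hdr member live in one anonymous union. */
	static void
	fill_gtp_teid(struct rte_flow_item_gtp *spec)
	{
		spec->teid = rte_cpu_to_be_32(0x1234);     /* old name, kept */
		spec->hdr.teid = rte_cpu_to_be_32(0x1234); /* new name, preferred */
	}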

* [PATCH v6 6/8] ethdev: use ARP protocol struct for flow matching
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-02-02 12:44   ` [PATCH v6 5/8] ethdev: use GTP " Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:44   ` [PATCH v6 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
  2023-02-02 12:45   ` [PATCH v6 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index dd6da9d98d9b..1d337a96199d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec2e335fac3d..8bf85df2f611 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1100,15 +1100,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``:  header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 39de294d7e77..471590a77013 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,7 +69,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ca73d1dc04..a215daa83640 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1255,26 +1256,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
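
As a hedged sketch of filling the ARP item through the new unified header
(the helper name and addresses are illustrative; RTE_IPV4 comes from
rte_ip.h):

	#include <rte_flow.h>
	#include <rte_arp.h>
	#include <rte_ip.h>
	#include <rte_byteorder.h>

	/* Illustrative values only: describe an ARP request targeting
	 * 192.168.0.2 via the rte_arp_hdr embedded in the flow item. */
	static void
	fill_arp_item(struct rte_flow_item_arp_eth_ipv4 *spec)
	{
		spec->hdr.arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
		spec->hdr.arp_data.arp_tip = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 2));
	}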

* [PATCH v6 7/8] doc: fix description of L2TPV2 flow item
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-02-02 12:44   ` [PATCH v6 6/8] ethdev: use ARP " Ferruh Yigit
@ 2023-02-02 12:44   ` Ferruh Yigit
  2023-02-02 12:45   ` [PATCH v6 8/8] net: mark all big endian types Ferruh Yigit
  7 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:44 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Jie Wang, Wenjun Wu, Andrew Rybchenko,
	Ferruh Yigit
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol definition
from the directory lib/net/, so the guide is updated to reflect it.

The underlining of nearby section titles is also fixed.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8bf85df2f611..c01b53aad8ed 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1485,22 +1485,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``:  header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
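
Since the flow item already embeds the combined message header from
rte_l2tpv2.h, matching only the flags/version word (the default-mask
behavior described above) looks roughly like the sketch below. This assumes
the common.flags_version path of that header; the spec value is
illustrative.

	#include <rte_flow.h>
	#include <rte_byteorder.h>

	/* Sketch under the assumption that the combined header exposes
	 * the first 16 bits as common.flags_version. */
	static void
	fill_l2tpv2_item(struct rte_flow_item_l2tpv2 *spec,
			 struct rte_flow_item_l2tpv2 *mask)
	{
		spec->hdr.common.flags_version = rte_cpu_to_be_16(0xc802);
		mask->hdr.common.flags_version = RTE_BE16(0xffff);
	}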

* [PATCH v6 8/8] net: mark all big endian types
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-02-02 12:44   ` [PATCH v6 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-02-02 12:45   ` Ferruh Yigit
  2023-02-02 17:20     ` Thomas Monjalon
  7 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-02 12:45 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16-bit and 32-bit fields.
That was correct but did not convey the big-endian nature of these fields.

As for other protocols defined in this directory,
all types are explicitly marked as big endian fields.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a215daa83640..99f8340f8274 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(0xffff),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..c3cd0afb5ca8 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /**< format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /**< ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /**< format of protocol address */
+	uint8_t    arp_hlen;     /**< length of hardware address */
+	uint8_t    arp_plen;     /**< length of protocol address */
+	rte_be16_t arp_opcode;   /**< ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /**< request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /**< response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /**< request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /**< response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /**< request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /**< response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 210b81c99018..6b1169c8b0c1 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -50,7 +50,7 @@ struct rte_gre_hdr {
 		};
 		rte_be16_t c_rsvd0_ver;
 	};
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 90+ messages in thread
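
In practice the annotation changes nothing in layout, only in readability:
rte_be16_t has the same size and alignment as uint16_t, but the declared
type now documents that the stored value is big endian, so conversions at
assignment become self-explanatory. A small sketch (values illustrative):

	#include <rte_arp.h>
	#include <rte_byteorder.h>

	/* The fields keep their on-wire layout; only the type spells out
	 * that a CPU-order value must be converted before being stored. */
	static void
	fill_arp_header(struct rte_arp_hdr *arp)
	{
		arp->arp_hardware = rte_cpu_to_be_16(RTE_ARP_HRD_ETHER);
		arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
	}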

* Re: [PATCH v6 4/8] ethdev: use GRE protocol struct for flow matching
  2023-02-02 12:44   ` [PATCH v6 4/8] ethdev: use GRE " Ferruh Yigit
@ 2023-02-02 17:16     ` Thomas Monjalon
  2023-02-03 15:02       ` Ferruh Yigit
  0 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2023-02-02 17:16 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Hemant Agrawal, Sachin Saxena, Matan Azrad,
	Viacheslav Ovsiienko, Chaoyong He, Niklas Söderlund,
	Andrew Rybchenko, Olivier Matz, David Marchand, dev

02/02/2023 13:44, Ferruh Yigit:
> --- a/lib/net/rte_gre.h
> +++ b/lib/net/rte_gre.h
> @@ -28,6 +28,8 @@ extern "C" {
>   */
>  __extension__
>  struct rte_gre_hdr {
> +       union {
> +               struct {
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>         uint16_t res2:4; /**< Reserved */
>         uint16_t s:1;    /**< Sequence Number Present bit */
> @@ -45,6 +47,9 @@ struct rte_gre_hdr {
>         uint16_t res3:5; /**< Reserved */
>         uint16_t ver:3;  /**< Version Number */
>  #endif
> +               };
> +               rte_be16_t c_rsvd0_ver;
> +       };
>         uint16_t proto;  /**< Protocol Type */

Why add a unioned field c_rsvd0_ver?



^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v6 8/8] net: mark all big endian types
  2023-02-02 12:45   ` [PATCH v6 8/8] net: mark all big endian types Ferruh Yigit
@ 2023-02-02 17:20     ` Thomas Monjalon
  2023-02-03 15:03       ` Ferruh Yigit
  0 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2023-02-02 17:20 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Ori Kam, Andrew Rybchenko, Olivier Matz, David Marchand, dev

02/02/2023 13:45, Ferruh Yigit:
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
>  static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
>  	.hdr = {
>  		.ppt1 = {
> -			.classification = 0xffff,
> -			.vid = 0xfff,
> +			.classification = RTE_BE16(0xffff),
> +			.vid = RTE_BE16(0xfff),

0xffff could be UINT16_MAX




^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v6 4/8] ethdev: use GRE protocol struct for flow matching
  2023-02-02 17:16     ` Thomas Monjalon
@ 2023-02-03 15:02       ` Ferruh Yigit
  2023-02-03 15:12         ` Thomas Monjalon
  0 siblings, 1 reply; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 15:02 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Hemant Agrawal, Sachin Saxena, Matan Azrad,
	Viacheslav Ovsiienko, Chaoyong He, Niklas Söderlund,
	Andrew Rybchenko, Olivier Matz, David Marchand, dev

On 2/2/2023 5:16 PM, Thomas Monjalon wrote:
> 02/02/2023 13:44, Ferruh Yigit:
>> --- a/lib/net/rte_gre.h
>> +++ b/lib/net/rte_gre.h
>> @@ -28,6 +28,8 @@ extern "C" {
>>   */
>>  __extension__
>>  struct rte_gre_hdr {
>> +       union {
>> +               struct {
>>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>>         uint16_t res2:4; /**< Reserved */
>>         uint16_t s:1;    /**< Sequence Number Present bit */
>> @@ -45,6 +47,9 @@ struct rte_gre_hdr {
>>         uint16_t res3:5; /**< Reserved */
>>         uint16_t ver:3;  /**< Version Number */
>>  #endif
>> +               };
>> +               rte_be16_t c_rsvd0_ver;
>> +       };
>>         uint16_t proto;  /**< Protocol Type */
> 
> Why add a unioned field c_rsvd0_ver?
> 
> 

Because existing usage in the drivers requires access to these fields as
a single 16-bit variable.

like mlx was using it as:
`X(SET_BE16,	gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \`

When all usage is switched from flow-item-specific fields to the generic
headers, there needs to be a way to represent this combined field in the
generic header.

By adding 'c_rsvd0_ver' to the generic header it becomes:
`X(SET_BE16,	gre_c_ver, v->hdr.c_rsvd0_ver, rte_flow_item_gre) \`


Another example: a previous version of the code was updated as follows:
`
 -	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
 +	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
`

Because a generic field to represent 'c_rsvd0_ver' was missing, 'hdr.proto'
was used, which was wrong.
Although the field sizes are the same and it works functionally, they are
different fields, so this is worse than a plain sizeof(uint16_t).


Another usage in testpmd:
`
  	[ITEM_GRE_C_RSVD0_VER] = {
  		.name = "c_rsvd0_ver",
 @@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
  		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
   			     item_param),
  		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
 -					     c_rsvd0_ver)),
 +					     hdr.c_rsvd0_ver)),
`


But looking at it again, perhaps it can be named differently, because it is
not a reserved field in the generic header, though I am not sure what
a good variable name would be.


^ permalink raw reply	[flat|nested] 90+ messages in thread
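
The aliasing described above can be reproduced in a standalone mock (not
the DPDK header; only the little-endian bit layout is shown, and the struct
name is made up for illustration): setting one flag bit becomes visible
through the combined 16-bit name, which is exactly what lets drivers mask
the whole first word in one access.

	#include <stdint.h>
	#include <stdio.h>

	/* Mock of the proposed GRE first word: the bitfields and the
	 * combined c_rsvd0_ver name overlay the same 16 bits. */
	struct gre_first_word {
		union {
			struct {
				uint16_t res2:4, s:1, k:1, res1:1, c:1;
				uint16_t res3:5, ver:3;
			};
			uint16_t c_rsvd0_ver;
		};
	};

	int main(void)
	{
		struct gre_first_word w = { .c_rsvd0_ver = 0 };
		w.k = 1; /* the key-present bit shows up in the combined word */
		printf("c_rsvd0_ver = %#x\n", w.c_rsvd0_ver);
		return 0;
	}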

* Re: [PATCH v6 8/8] net: mark all big endian types
  2023-02-02 17:20     ` Thomas Monjalon
@ 2023-02-03 15:03       ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 15:03 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Ori Kam, Andrew Rybchenko, Olivier Matz, David Marchand, dev

On 2/2/2023 5:20 PM, Thomas Monjalon wrote:
> 02/02/2023 13:45, Ferruh Yigit:
>> --- a/lib/ethdev/rte_flow.h
>> +++ b/lib/ethdev/rte_flow.h
>> @@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
>>  static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
>>  	.hdr = {
>>  		.ppt1 = {
>> -			.classification = 0xffff,
>> -			.vid = 0xfff,
>> +			.classification = RTE_BE16(0xffff),
>> +			.vid = RTE_BE16(0xfff),
> 
> 0xffff could be UINT16_MAX
> 
>

ack


^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v6 4/8] ethdev: use GRE protocol struct for flow matching
  2023-02-03 15:02       ` Ferruh Yigit
@ 2023-02-03 15:12         ` Thomas Monjalon
  2023-02-03 15:16           ` Ferruh Yigit
  0 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2023-02-03 15:12 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Hemant Agrawal, Sachin Saxena, Matan Azrad,
	Viacheslav Ovsiienko, Chaoyong He, Niklas Söderlund,
	Andrew Rybchenko, Olivier Matz, David Marchand, dev

03/02/2023 16:02, Ferruh Yigit:
> On 2/2/2023 5:16 PM, Thomas Monjalon wrote:
> > 02/02/2023 13:44, Ferruh Yigit:
> >> --- a/lib/net/rte_gre.h
> >> +++ b/lib/net/rte_gre.h
> >> @@ -28,6 +28,8 @@ extern "C" {
> >>   */
> >>  __extension__
> >>  struct rte_gre_hdr {
> >> +       union {
> >> +               struct {
> >>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> >>         uint16_t res2:4; /**< Reserved */
> >>         uint16_t s:1;    /**< Sequence Number Present bit */
> >> @@ -45,6 +47,9 @@ struct rte_gre_hdr {
> >>         uint16_t res3:5; /**< Reserved */
> >>         uint16_t ver:3;  /**< Version Number */
> >>  #endif
> >> +               };
> >> +               rte_be16_t c_rsvd0_ver;
> >> +       };
> >>         uint16_t proto;  /**< Protocol Type */
> > 
> > Why add a unioned field c_rsvd0_ver?
> > 
> > 
> 
> Because existing usage in the drivers requires access to these fields as
> a single 16-bit variable.
> 
> like mlx was using it as:
> `X(SET_BE16,	gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \`
> 
> When all usage is switched from flow-item-specific fields to the generic
> headers, there needs to be a way to represent this combined field in the
> generic header.
> 
> By adding 'c_rsvd0_ver' to the generic header it becomes:
> `X(SET_BE16,	gre_c_ver, v->hdr.c_rsvd0_ver, rte_flow_item_gre) \`
> 
> 
> Another example: a previous version of the code was updated as follows:
> `
>  -	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
>  +	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
> `
> 
> Because a generic field to represent 'c_rsvd0_ver' was missing, 'hdr.proto'
> was used, which was wrong.
> Although the field sizes are the same and it works functionally, they are
> different fields, so this is worse than a plain sizeof(uint16_t).
> 
> 
> Another usage in testpmd:
> `
>   	[ITEM_GRE_C_RSVD0_VER] = {
>   		.name = "c_rsvd0_ver",
>  @@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
>   		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
>    			     item_param),
>   		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
>  -					     c_rsvd0_ver)),
>  +					     hdr.c_rsvd0_ver)),
> `
> 
> 
> But looking at it again, perhaps it can be named differently, because it is
> not a reserved field in the generic header, though I am not sure what
> a good variable name would be.

The best would be to try not using this field at all.
I suggest removing this patch from the series for now.
I could try to work on it for the next release.





^ permalink raw reply	[flat|nested] 90+ messages in thread

* Re: [PATCH v6 4/8] ethdev: use GRE protocol struct for flow matching
  2023-02-03 15:12         ` Thomas Monjalon
@ 2023-02-03 15:16           ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 15:16 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang, Ajit Khaparde,
	Somnath Kotur, Hemant Agrawal, Sachin Saxena, Matan Azrad,
	Viacheslav Ovsiienko, Chaoyong He, Niklas Söderlund,
	Andrew Rybchenko, Olivier Matz, David Marchand, dev

On 2/3/2023 3:12 PM, Thomas Monjalon wrote:
> 03/02/2023 16:02, Ferruh Yigit:
>> On 2/2/2023 5:16 PM, Thomas Monjalon wrote:
>>> 02/02/2023 13:44, Ferruh Yigit:
>>>> --- a/lib/net/rte_gre.h
>>>> +++ b/lib/net/rte_gre.h
>>>> @@ -28,6 +28,8 @@ extern "C" {
>>>>   */
>>>>  __extension__
>>>>  struct rte_gre_hdr {
>>>> +       union {
>>>> +               struct {
>>>>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>>>>         uint16_t res2:4; /**< Reserved */
>>>>         uint16_t s:1;    /**< Sequence Number Present bit */
>>>> @@ -45,6 +47,9 @@ struct rte_gre_hdr {
>>>>         uint16_t res3:5; /**< Reserved */
>>>>         uint16_t ver:3;  /**< Version Number */
>>>>  #endif
>>>> +               };
>>>> +               rte_be16_t c_rsvd0_ver;
>>>> +       };
>>>>         uint16_t proto;  /**< Protocol Type */
>>>
>>> Why add a unioned field c_rsvd0_ver?
>>>
>>>
>>
>> Because existing usage in the drivers requires access to these fields as
>> a single 16-bit variable.
>>
>> like mlx was using it as:
>> `X(SET_BE16,	gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \`
>>
>> When all usage is switched from flow-item-specific fields to the generic
>> headers, there needs to be a way to represent this combined field in the
>> generic header.
>>
>> By adding 'c_rsvd0_ver' to the generic header it becomes:
>> `X(SET_BE16,	gre_c_ver, v->hdr.c_rsvd0_ver, rte_flow_item_gre) \`
>>
>>
>> Another example: a previous version of the code was updated as follows:
>> `
>>  -	size = sizeof(((struct rte_flow_item_gre *)NULL)->c_rsvd0_ver);
>>  +	size = sizeof(((struct rte_flow_item_gre *)NULL)->hdr.proto);
>> `
>>
>> Because a generic field to represent 'c_rsvd0_ver' was missing, 'hdr.proto'
>> was used, which was wrong.
>> Although the field sizes are the same and it works functionally, they are
>> different fields, so this is worse than a plain sizeof(uint16_t).
>>
>>
>> Another usage in testpmd:
>> `
>>   	[ITEM_GRE_C_RSVD0_VER] = {
>>   		.name = "c_rsvd0_ver",
>>  @@ -4082,7 +4082,7 @@ static const struct token token_list[] = {
>>   		.next = NEXT(item_gre, NEXT_ENTRY(COMMON_UNSIGNED),
>>    			     item_param),
>>   		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
>>  -					     c_rsvd0_ver)),
>>  +					     hdr.c_rsvd0_ver)),
>> `
>>
>>
>> But looking at it again, perhaps it can be named differently, because it is
>> not a reserved field in the generic header, though I am not sure what
>> a good variable name would be.
> 
> The best would be to try not using this field at all.
> I suggest removing this patch from the series for now.
> I could try to work on it for the next release.
> 

OK


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v7 0/7] start cleanup of rte_flow_item_*
  2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
                   ` (12 preceding siblings ...)
  2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-02-03 16:48 ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
                     ` (8 more replies)
  13 siblings, 9 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: David Marchand, dev

There was a plan to have structures from lib/net/ at the beginning
of corresponding flow item structures.
Unfortunately this plan has not been followed up so far.
This series is a step to make the most used items,
compliant with the inheritance design explained above.
The old API is kept in anonymous union for compatibility,
but the code in drivers and apps is updated to use the new API.

v7:
 * replace 0xffff -> UINT16_MAX in
   'rte_flow_item_higig2_hdr_mask.hdr.ppt1.classification'
 * drop "GRE protocol struct for flow matching" patch, to fine grain
   using generic fileds better, this will be done later

v6:
 * Fix "struct rte_arp_hdr" field doxygen comment

v5:
 * Fix more RHEL7 build error

v4:
 * Fix build error for RHEL7 (gcc 4.8.5) caused by nested struct
   initialization.

v3:
 * Updated Higig2 protocol flow item assignment taking into account endianness
   annotations.

v2: (by Ferruh)
 * Rebased on latest next-net for v23.03
 * 'struct rte_gre_hdr' endianness annotation added to protocol field
 * more driver code updated for rte_flow_item_eth & rte_flow_item_vlan
 * 'struct rte_gre_hdr' updated to have a combined "rte_be16_t c_rsvd0_ver"
   field and updated drivers accordingly
 * more driver code updated for rte_flow_item_gre
 * more driver code updated for rte_flow_item_gtp

Cc: David Marchand <david.marchand@redhat.com>

Thomas Monjalon (7):
  ethdev: use Ethernet protocol struct for flow matching
  net: add smaller fields for VXLAN
  ethdev: use VXLAN protocol struct for flow matching
  ethdev: use GTP protocol struct for flow matching
  ethdev: use ARP protocol struct for flow matching
  doc: fix description of L2TPV2 flow item
  net: mark all big endian types

 app/test-flow-perf/actions_gen.c         |   2 +-
 app/test-flow-perf/items_gen.c           |  20 +--
 app/test-pmd/cmdline_flow.c              | 166 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |  51 ++-----
 doc/guides/rel_notes/deprecation.rst     |   5 +-
 drivers/net/bnxt/bnxt_flow.c             |  54 ++++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 100 +++++++-------
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++---
 drivers/net/dpaa2/dpaa2_flow.c           |  48 +++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +-
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +-
 drivers/net/hns3/hns3_flow.c             |  40 +++---
 drivers/net/i40e/i40e_fdir.c             |  14 +-
 drivers/net/i40e/i40e_flow.c             | 124 ++++++++---------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  18 +--
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 +--
 drivers/net/ice/ice_fdir_filter.c        |  24 ++--
 drivers/net/ice/ice_switch_filter.c      |  64 ++++-----
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |  12 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  58 ++++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  40 +++---
 drivers/net/mlx5/mlx5_flow.c             |  40 +++---
 drivers/net/mlx5/mlx5_flow_dv.c          | 154 ++++++++++-----------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 +++++------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  38 +++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++--
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++--
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  52 +++----
 drivers/net/sfc/sfc_mae.c                |  46 +++----
 drivers/net/tap/tap_flow.c               |  58 ++++----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++--
 lib/ethdev/rte_flow.h                    |  97 ++++++++-----
 lib/net/rte_arp.h                        |  28 ++--
 lib/net/rte_gre.h                        |   2 +-
 lib/net/rte_higig.h                      |   6 +-
 lib/net/rte_mpls.h                       |   2 +-
 lib/net/rte_vxlan.h                      |  35 ++++-
 47 files changed, 904 insertions(+), 880 deletions(-)

--
2.34.1


^ permalink raw reply	[flat|nested] 90+ messages in thread

* [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 2/7] net: add smaller fields for VXLAN Ferruh Yigit
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su,
	Wenjun Wu, John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Beilei Xing,
	Jingjing Wu, Qiming Yang, Qi Zhang, Junfeng Guo, Rosen Xu,
	Matan Azrad, Viacheslav Ovsiienko, Liron Himi, Chaoyong He,
	Niklas Söderlund, Andrew Rybchenko, Jiawen Wu, Jian Wang
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.
The Ethernet headers (including VLAN) structures are used
instead of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list.
Some protocols are not even defined in the directory net yet.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help = "EtherType",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)),
 	},
 	[ITEM_ETH_HAS_VLAN] = {
 		.name = "has_vlan",
@@ -3666,7 +3666,7 @@ static const struct token token_list[] = {
 		.help = "tag control information",
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)),
 	},
 	[ITEM_VLAN_PCP] = {
 		.name = "pcp",
@@ -3674,7 +3674,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\xe0\x00")),
+						  hdr.vlan_tci, "\xe0\x00")),
 	},
 	[ITEM_VLAN_DEI] = {
 		.name = "dei",
@@ -3682,7 +3682,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x10\x00")),
+						  hdr.vlan_tci, "\x10\x00")),
 	},
 	[ITEM_VLAN_VID] = {
 		.name = "vid",
@@ -3690,7 +3690,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan,
-						  tci, "\x0f\xff")),
+						  hdr.vlan_tci, "\x0f\xff")),
 	},
 	[ITEM_VLAN_INNER_TYPE] = {
 		.name = "inner_type",
@@ -3698,7 +3698,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan,
-					     inner_type)),
+					     hdr.eth_proto)),
 	},
 	[ITEM_VLAN_HAS_MORE_VLAN] = {
 		.name = "has_more_vlan",
@@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = vxlan_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 			.src_addr = vxlan_encap_conf.ipv4_src,
@@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 		},
 		.item_vxlan.flags = 0,
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!vxlan_encap_conf.select_ipv4) {
 		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
@@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 				.type = RTE_FLOW_ITEM_TYPE_END,
 			},
 		},
-		.item_eth.type = 0,
+		.item_eth.hdr.ether_type = 0,
 		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
+			.hdr.vlan_tci = nvgre_encap_conf.vlan_tci,
+			.hdr.eth_proto = 0,
 		},
 		.item_ipv4.hdr = {
 		       .src_addr = nvgre_encap_conf.ipv4_src,
@@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_
 		.item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB),
 		.item_nvgre.flow_id = 0,
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes,
 	       nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	if (!nvgre_encap_conf.select_ipv4) {
 		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
@@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (l2_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (l2_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_encap_conf.select_vlan) {
 		if (l2_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	uint8_t *header;
 	int ret;
@@ -7792,7 +7792,7 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (l2_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (l2_decap_conf.select_vlan) {
@@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsogre_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsogre_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsogre_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_GRE,
@@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsogre_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsogre_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsogre_encap_conf.select_vlan) {
 		if (mplsogre_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_encap_data *action_encap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
 	struct rte_flow_item_vlan vlan = {
-		.tci = mplsoudp_encap_conf.vlan_tci,
-		.inner_type = 0,
+		.hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci,
+		.hdr.eth_proto = 0,
 	};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
@@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
 	};
 	header = action_encap_data->data;
 	if (mplsoudp_encap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
@@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
 	struct action_raw_decap_data *action_decap_data;
-	struct rte_flow_item_eth eth = { .type = 0, };
-	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_eth eth = { .hdr.ether_type = 0, };
+	struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0};
 	struct rte_flow_item_ipv4 ipv4 = {
 		.hdr =  {
 			.next_proto_id = IPPROTO_UDP,
@@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
 	};
 	header = action_decap_data->data;
 	if (mplsoudp_decap_conf.select_vlan)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else if (mplsoudp_encap_conf.select_ipv4)
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 	else
-		eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
-	memcpy(eth.dst.addr_bytes,
+		eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	memcpy(eth.hdr.dst_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
-	memcpy(eth.src.addr_bytes,
+	memcpy(eth.hdr.src_addr.addr_bytes,
 	       mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN);
 	memcpy(header, &eth, sizeof(eth));
 	header += sizeof(eth);
 	if (mplsoudp_encap_conf.select_vlan) {
 		if (mplsoudp_encap_conf.select_ipv4)
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		else
-			vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+			vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 		memcpy(header, &vlan, sizeof(vlan));
 		header += sizeof(vlan);
 	}
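
A minimal before/after sketch of the rename applied throughout the testpmd
hunks above (field values are hypothetical, not taken from the patch):

	/* old API: standalone item fields */
	struct rte_flow_item_eth eth = {
		.type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_vlan vlan = { .tci = RTE_BE16(100) };

	/* new API: same bytes, addressed through the protocol header */
	struct rte_flow_item_eth eth2 = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_vlan vlan2 = { .hdr.vlan_tci = RTE_BE16(100) };
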
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803dc0..27c3780c4f17 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -840,9 +840,7 @@ instead of using the ``type`` field.
 If the ``type`` and ``has_vlan`` fields are not specified, then both tagged
 and untagged packets will match the pattern.
 
-- ``dst``: destination MAC.
-- ``src``: source MAC.
-- ``type``: EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_vlan``: packet header contains at least one VLAN.
 - Default ``mask`` matches destination and source addresses only.
 
@@ -861,8 +859,7 @@ instead of using the ``inner_type`` field.
 If the ``inner_type`` and ``has_more_vlan`` fields are not specified,
 then any tagged packets will match the pattern.
 
-- ``tci``: tag control information.
-- ``inner_type``: inner EtherType or TPID.
+- ``hdr``: header definition (``rte_ether.h``).
 - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN.
 - Default ``mask`` matches the VID part of TCI only (lower 12 bits).
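
As a short illustration of the semantics documented above (a sketch, not part
of the patch): to match only VLAN-tagged frames, set has_vlan in both spec and
mask while leaving hdr wildcarded:

	struct rte_flow_item_eth spec = { .has_vlan = 1 };
	struct rte_flow_item_eth mask = { .has_vlan = 1 };
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_ETH,
		.spec = &spec,
		.mask = &mask,
	};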
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index eea99b454005..4782d2e680d3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,6 +63,8 @@ Deprecation Notices
   should start with relevant protocol header structure from lib/net/.
   The individual protocol header fields and the protocol header struct
   may be kept together in a union as a first migration step.
+  In the future (target: DPDK 23.11), the individual protocol header fields
+  will be removed, and only the protocol header struct will remain.
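
For concreteness, the transitional layout sketched above looks roughly like
this for the VLAN item (abbreviated; see lib/ethdev/rte_flow.h for the exact
definition):

	struct rte_flow_item_vlan {
		union {
			struct {
				/* legacy names, kept for compatibility */
				rte_be16_t tci;
				rte_be16_t inner_type;
			};
			struct rte_vlan_hdr hdr; /* vlan_tci + eth_proto */
		};
		uint32_t has_more_vlan:1;
		uint32_t reserved:31;
	};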
 
   These items are not compliant (not including struct from lib/net/):
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 96ef00460cf5..8f660493402c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			 * Destination MAC address mask must not be partially
 			 * set. Should be all 1's or all 0's.
 			 */
-			if ((!rte_is_zero_ether_addr(&eth_mask->src) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if ((!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			}
 
 			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
+			if (eth_mask->hdr.ether_type &&
+			    eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				return -rte_errno;
 			}
 
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				dst = &eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				dst = &eth_spec->hdr.dst_addr;
 				if (!rte_is_valid_assigned_ether_addr(dst)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				PMD_DRV_LOG(DEBUG,
 					    "Creating a priority flow\n");
 			}
-			if (rte_is_broadcast_ether_addr(&eth_mask->src)) {
-				src = &eth_spec->src;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
+				src = &eth_spec->hdr.src_addr;
 				if (!rte_is_valid_assigned_ether_addr(src)) {
 					rte_flow_error_set(error,
 							   EINVAL,
@@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 					return -rte_errno;
 				}
 				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+					   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 				en |= use_ntuple ?
 					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
 					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
 			   * }
 			   */
-			if (eth_mask->type) {
+			if (eth_mask->hdr.ether_type) {
 				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
+					rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 				en |= en_ethertype;
 			}
 			if (inner)
@@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " supported");
 				return -rte_errno;
 			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+			if (vlan_mask->hdr.vlan_tci &&
+			    vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) {
 				/* Only the VLAN ID can be matched. */
 				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
+					rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci &
 							 RTE_BE16(0x0fff));
 				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
 			} else {
@@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   "VLAN mask is invalid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_mask->hdr.eth_proto &&
+			    vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 						   " valid");
 				return -rte_errno;
 			}
-			if (vlan_mask->inner_type) {
+			if (vlan_mask->hdr.eth_proto) {
 				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 				en |= en_ethertype;
 			}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1be649a16c49..2928598ced55 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	/* Perform validations */
 	if (eth_spec) {
 		/* Todo: work around to avoid multicast and broadcast addr */
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->dst))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.dst_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->src))
+		if (ulp_rte_parser_is_bcmc_addr(&eth_spec->hdr.src_addr))
 			return BNXT_TF_RC_PARSE_ERR;
 
-		eth_type = eth_spec->type;
+		eth_type = eth_spec->hdr.ether_type;
 	}
 
 	if (ulp_rte_prsr_fld_size_validate(params, &idx,
@@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	 * header fields
 	 */
 	dmac_idx = idx;
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, dst.addr_bytes),
-			      ulp_deference_struct(eth_mask, dst.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, src.addr_bytes),
-			      ulp_deference_struct(eth_mask, src.addr_bytes),
+			      ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes),
+			      ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_eth *)NULL)->type);
+	size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(eth_spec, type),
-			      ulp_deference_struct(eth_mask, type),
+			      ulp_deference_struct(eth_spec, hdr.ether_type),
+			      ulp_deference_struct(eth_mask, hdr.ether_type),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Update the protocol hdr bitmap */
@@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	uint32_t size;
 
 	if (vlan_spec) {
-		vlan_tag = ntohs(vlan_spec->tci);
+		vlan_tag = ntohs(vlan_spec->hdr.vlan_tci);
 		priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag &= ULP_VLAN_TAG_MASK;
 		vlan_tag = htons(vlan_tag);
-		eth_type = vlan_spec->inner_type;
+		eth_type = vlan_spec->hdr.eth_proto;
 	}
 
 	if (vlan_mask) {
-		vlan_tag_mask = ntohs(vlan_mask->tci);
+		vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci);
 		priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT);
 		vlan_tag_mask &= 0xfff;
 
@@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci);
 	/*
 	 * The priority field is ignored since OVS is setting it as
 	 * wild card match and it is not supported. This is a work
@@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 			      (vlan_mask) ? &vlan_tag_mask : NULL,
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type);
+	size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vlan_spec, inner_type),
-			      ulp_deference_struct(vlan_mask, inner_type),
+			      ulp_deference_struct(vlan_spec, hdr.eth_proto),
+			      ulp_deference_struct(vlan_mask, hdr.eth_proto),
 			      ULP_PRSR_ACT_MATCH_IGNORE);
 
 	/* Get the outer tag and inner tag counts */
@@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC];
-	size = sizeof(eth_spec->dst.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.dst_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->src.addr_bytes);
-	field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size);
+	size = sizeof(eth_spec->hdr.src_addr.addr_bytes);
+	field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size);
 
-	size = sizeof(eth_spec->type);
-	field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size);
+	size = sizeof(eth_spec->hdr.ether_type);
+	field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH);
 }
@@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	}
 
-	size = sizeof(vlan_spec->tci);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size);
+	size = sizeof(vlan_spec->hdr.vlan_tci);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size);
 
-	size = sizeof(vlan_spec->inner_type);
-	field = ulp_rte_parser_fld_copy(field, &vlan_spec->inner_type, size);
+	size = sizeof(vlan_spec->hdr.eth_proto);
+	field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size);
 }
 
 /* Function to handle the parsing of RTE Flow item ipv4 Header. */
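
The sizeof-through-a-null-pointer idiom above keeps working precisely because
the legacy names and the embedded header alias the same storage; a compile-time
sketch of that invariant (assumed, not part of the patch):

	#include <stddef.h>
	#include <rte_flow.h>

	/* legacy 'type' and hdr.ether_type must occupy the same bytes */
	_Static_assert(offsetof(struct rte_flow_item_eth, type) ==
		       offsetof(struct rte_flow_item_eth, hdr.ether_type),
		       "legacy field must alias the protocol header field");
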
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f70c2c290577..f0c4f7d26b86 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
  */
 
 static struct rte_flow_item_eth flow_item_eth_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW),
 };
 
 static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = {
-	.dst.addr_bytes = { 0 },
-	.src.addr_bytes = { 0 },
-	.type = 0xFFFF,
+	.hdr.dst_addr.addr_bytes = { 0 },
+	.hdr.src_addr.addr_bytes = { 0 },
+	.hdr.ether_type = 0xFFFF,
 };
 
 static struct rte_flow_item flow_item_8023ad[] = {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d66672a9e6b8..f5787c247f1f 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		return 0;
 
 	/* we don't support SRC_MAC filtering*/
-	if (!rte_is_zero_ether_addr(&spec->src) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->src)))
+	if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr)))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "src mac filtering not supported");
 
-	if (!rte_is_zero_ether_addr(&spec->dst) ||
-	    (umask && !rte_is_zero_ether_addr(&umask->dst))) {
+	if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) ||
+	    (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) {
 		CXGBE_FILL_FS(0, 0x1ff, macidx);
-		CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes,
+		CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 				     dmac);
 	}
 
-	if (spec->type || (umask && umask->type))
-		CXGBE_FILL_FS(be16_to_cpu(spec->type),
-			      be16_to_cpu(mask->type), ethtype);
+	if (spec->hdr.ether_type || (umask && umask->hdr.ether_type))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type),
+			      be16_to_cpu(mask->hdr.ether_type), ethtype);
 
 	return 0;
 }
@@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
 	if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
 		CXGBE_FILL_FS(1, 1, ovlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ovlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ovlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	} else {
 		CXGBE_FILL_FS(1, 1, ivlan_vld);
 		if (spec) {
-			if (spec->tci || (umask && umask->tci))
-				CXGBE_FILL_FS(be16_to_cpu(spec->tci),
-					      be16_to_cpu(mask->tci), ivlan);
+			if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci))
+				CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci),
+					      be16_to_cpu(mask->hdr.vlan_tci), ivlan);
 			fs->mask.ethtype = 0;
 			fs->val.ethtype = 0;
 		}
 	}
 
-	if (spec && (spec->inner_type || (umask && umask->inner_type)))
-		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
-			      be16_to_cpu(mask->inner_type), ethtype);
+	if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto)))
+		CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto),
+			      be16_to_cpu(mask->hdr.eth_proto), ethtype);
 
 	return 0;
 }
@@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = {
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.fptr  = ch_rte_parsetype_eth,
 		.dmask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0xffff,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0xffff,
 		}
 	},
 
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.fptr = ch_rte_parsetype_vlan,
 		.dmask = &(const struct rte_flow_item_vlan){
-			.tci = 0xffff,
-			.inner_type = 0xffff,
+			.hdr.vlan_tci = 0xffff,
+			.hdr.eth_proto = 0xffff,
 		}
 	},
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index df06c3862e7c..eec7e6065097 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = {
 
 #ifndef __cplusplus
 static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.type = RTE_BE16(0xffff),
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.ether_type = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = {
-	.tci = RTE_BE16(0xffff),
+	.hdr.vlan_tci = RTE_BE16(0xffff),
 };
 
 static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = {
@@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_SA);
@@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed");
@@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_SA,
-				&spec->src.addr_bytes,
-				&mask->src.addr_bytes,
+				&spec->hdr.src_addr.addr_bytes,
+				&mask->hdr.src_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed");
@@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) {
+	if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_DA);
@@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed");
@@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_DA,
-				&spec->dst.addr_bytes,
-				&mask->dst.addr_bytes,
+				&spec->hdr.dst_addr.addr_bytes,
+				&mask->hdr.dst_addr.addr_bytes,
 				sizeof(struct rte_ether_addr));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed");
@@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 		}
 	}
 
-	if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) {
+	if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) {
 		index = dpaa2_flow_extract_search(
 				&priv->extract.qos_key_extract.dpkg,
 				NET_PROT_ETH, NH_FLD_ETH_TYPE);
@@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed");
@@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 				&flow->fs_rule,
 				NET_PROT_ETH,
 				NH_FLD_ETH_TYPE,
-				&spec->type,
-				&mask->type,
+				&spec->hdr.ether_type,
+				&mask->hdr.ether_type,
 				sizeof(rte_be16_t));
 		if (ret) {
 			DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed");
@@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 		return -1;
 	}
 
-	if (!mask->tci)
+	if (!mask->hdr.vlan_tci)
 		return 0;
 
 	index = dpaa2_flow_extract_search(
@@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 				&flow->qos_rule,
 				NET_PROT_VLAN,
 				NH_FLD_VLAN_TCI,
-				&spec->tci,
-				&mask->tci,
+				&spec->hdr.vlan_tci,
+				&mask->hdr.vlan_tci,
 				sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed");
@@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 			&flow->fs_rule,
 			NET_PROT_VLAN,
 			NH_FLD_VLAN_TCI,
-			&spec->tci,
-			&mask->tci,
+			&spec->hdr.vlan_tci,
+			&mask->hdr.vlan_tci,
 			sizeof(rte_be16_t));
 	if (ret) {
 		DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed");
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 7456f43f425c..2ff1a98fda7c 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 		kg_cfg.num_extracts = 1;
 
 		spec = (const struct rte_flow_item_eth *)pattern[0]->spec;
-		eth_type = rte_constant_bswap16(spec->type);
+		eth_type = rte_constant_bswap16(spec->hdr.ether_type);
 		memcpy((void *)key_iova, (const void *)&eth_type,
 							sizeof(rte_be16_t));
 		memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t));
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index b77531065196..ea9b290e1cb5 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	index++;
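
Note the full-mask test above is endianness-agnostic (an all-ones mask compares
equal either way), but comparing the EtherType itself against host-order
constants still needs a conversion, since hdr.ether_type is rte_be16_t.
A minimal sketch:

	if (eth_mask->hdr.ether_type == RTE_BE16(0xffff)) {
		uint16_t ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);

		if (ether_type == RTE_ETHER_TYPE_IPV4)
			; /* exact IPv4 EtherType match */
	}
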
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793cfef0..e6c9ad442ac0 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	enic_spec.ether_type = spec->type;
-	enic_mask.ether_type = mask->type;
+	enic_spec.ether_type = spec->hdr.ether_type;
+	enic_mask.ether_type = mask->hdr.ether_type;
 
 	/* outer header */
 	memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
@@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
 		struct rte_vlan_hdr *vlan;
 
 		vlan = (struct rte_vlan_hdr *)(eth_mask + 1);
-		vlan->eth_proto = mask->inner_type;
+		vlan->eth_proto = mask->hdr.eth_proto;
 		vlan = (struct rte_vlan_hdr *)(eth_val + 1);
-		vlan->eth_proto = spec->inner_type;
+		vlan->eth_proto = spec->hdr.eth_proto;
 	} else {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	/* For TCI, use the vlan mask/val fields (little endian). */
-	gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
-	gp->val_vlan = rte_be_to_cpu_16(spec->tci);
+	gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
+	gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c
index c87d3af8476c..90027dc67695 100644
--- a/drivers/net/enic/enic_fm_flow.c
+++ b/drivers/net/enic/enic_fm_flow.c
@@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	eth_val = (void *)&fm_data->l2.eth;
 
 	/*
-	 * Outer TPID cannot be matched. If inner_type is 0, use what is
+	 * Outer TPID cannot be matched. If eth_proto is 0, use what is
 	 * in the eth header.
 	 */
-	if (eth_mask->ether_type && mask->inner_type)
+	if (eth_mask->ether_type && mask->hdr.eth_proto)
 		return -ENOTSUP;
 
 	/*
@@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg)
 	 * L2, regardless of vlan stripping settings. So, the inner type
 	 * from vlan becomes the ether type of the eth header.
 	 */
-	if (mask->inner_type) {
-		eth_mask->ether_type = mask->inner_type;
-		eth_val->ether_type = spec->inner_type;
+	if (mask->hdr.eth_proto) {
+		eth_mask->ether_type = mask->hdr.eth_proto;
+		eth_val->ether_type = spec->hdr.eth_proto;
 	}
 	fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG;
 	fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG;
-	fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci);
-	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci);
+	fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
+	fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	return 0;
 }
 
@@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm,
 
 		ENICPMD_LOG(DEBUG, "vxlan-encap: vlan");
 		spec = item->spec;
-		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci);
+		fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci);
 		item++;
 		flow_item_skip_void(&item);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 358b372e07e8..d1a564a16303 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
 		return -rte_errno;
@@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
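
Several drivers above share the same convention: the destination MAC mask must
be all ones (exact match, RTE_ETHTYPE_FLAGS_MAC set) or all zeroes (wildcard).
A hypothetical pattern exercising the all-ones branch:

	struct rte_flow_item_eth spec = {
		.hdr.dst_addr.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_ARP),
	};
	struct rte_flow_item_eth mask = {
		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
		.hdr.ether_type = RTE_BE16(0xffff),
	};
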
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a2c1589c3980..ef1832982dee 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		eth_mask = item->mask;
-		if (eth_mask->type) {
+		if (eth_mask->hdr.ether_type) {
 			hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1);
 			rule->key_conf.mask.ether_type =
-			    rte_be_to_cpu_16(eth_mask->type);
+			    rte_be_to_cpu_16(eth_mask->hdr.ether_type);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 			hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1);
 			memcpy(rule->key_conf.mask.src_mac,
-			       eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
-		if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+		if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 			hns3_set_bit(rule->input_set, INNER_DST_MAC, 1);
 			memcpy(rule->key_conf.mask.dst_mac,
-			       eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+			       eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 		}
 	}
 
 	eth_spec = item->spec;
-	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type);
-	memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes,
+	rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+	memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes,
+	memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	return 0;
 }
@@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 
 	if (item->mask) {
 		vlan_mask = item->mask;
-		if (vlan_mask->tci) {
+		if (vlan_mask->hdr.vlan_tci) {
 			if (rule->key_conf.vlan_num == 1) {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG1,
 					     1);
 				rule->key_conf.mask.vlan_tag1 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			} else {
 				hns3_set_bit(rule->input_set, INNER_VLAN_TAG2,
 					     1);
 				rule->key_conf.mask.vlan_tag2 =
-				    rte_be_to_cpu_16(vlan_mask->tci);
+				    rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci);
 			}
 		}
 	}
@@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vlan_spec = item->spec;
 	if (rule->key_conf.vlan_num == 1)
 		rule->key_conf.spec.vlan_tag1 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	else
 		rule->key_conf.spec.vlan_tag2 =
-		    rte_be_to_cpu_16(vlan_spec->tci);
+		    rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 65a826d51c17..0acbd5a061e0 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			 * Mask bits of destination MAC address must be full
 			 * of 1 or full of 0.
 			 */
-			if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-			    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-			     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+			    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+			     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+			if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
 			/* If mask bits of destination MAC address
 			 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 			 */
-			if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-				filter->mac_addr = eth_spec->dst;
+			if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+				filter->mac_addr = eth_spec->hdr.dst_addr;
 				filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 			} else {
 				filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 			}
-			filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+			filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 			if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
 			    filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					input_set |= I40E_INSET_DMAC;
-				} else if (rte_is_zero_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= I40E_INSET_SMAC;
-				} else if (rte_is_broadcast_ether_addr(&eth_mask->dst) &&
-					rte_is_broadcast_ether_addr(&eth_mask->src)) {
+				} else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
+					rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
 					filter->input.flow.l2_flow.dst =
-						eth_spec->dst;
+						eth_spec->hdr.dst_addr;
 					filter->input.flow.l2_flow.src =
-						eth_spec->src;
+						eth_spec->hdr.src_addr;
 					input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-					   !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+					   !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			}
 			if (eth_spec && eth_mask &&
 			next_type == RTE_FLOW_ITEM_TYPE_END) {
-				if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 					return -rte_errno;
 				}
 
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
 				    ether_type == RTE_ETHER_TYPE_IPV4 ||
@@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					eth_spec->type;
+					eth_spec->hdr.ether_type;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
 			RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci !=
+				if (vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
-				    vlan_mask->tci !=
+				    vlan_mask->hdr.vlan_tci !=
 				    rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_VLAN_INNER;
 				filter->input.flow_ext.vlan_tci =
-					vlan_spec->tci;
+					vlan_spec->hdr.vlan_tci;
 			}
-			if (vlan_spec && vlan_mask && vlan_mask->inner_type) {
-				if (vlan_mask->inner_type != RTE_BE16(0xffff)) {
+			if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
+				if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
 					rte_flow_error_set(error, EINVAL,
 						      RTE_FLOW_ERROR_TYPE_ITEM,
 						      item,
@@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				ether_type =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
+					rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6 ||
@@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 				input_set |= I40E_INSET_LAST_ETHER_TYPE;
 				filter->input.flow.l2_flow.ether_type =
-					vlan_spec->inner_type;
+					vlan_spec->hdr.eth_proto;
 			}
 
 			pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
@@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!vxlan_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3138,9 +3138,9 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 				/* DST address of inner MAC shouldn't be masked.
 				 * SRC address of Inner MAC should be masked.
 				 */
-				if (!rte_is_broadcast_ether_addr(&eth_mask->dst) ||
-				    !rte_is_zero_ether_addr(&eth_mask->src) ||
-				    eth_mask->type) {
+				if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+				    eth_mask->hdr.ether_type) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 
 				if (!nvgre_flag) {
 					rte_memcpy(&filter->outer_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
 				} else {
 					rte_memcpy(&filter->inner_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN);
 					filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
 				}
@@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_spec = item->spec;
 			vlan_mask = item->mask;
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
 			}
 
 			if (vlan_spec && vlan_mask) {
-				if (vlan_mask->tci ==
+				if (vlan_mask->hdr.vlan_tci ==
 				    rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
 					filter->inner_vlan =
-					      rte_be_to_cpu_16(vlan_spec->tci) &
+					      rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
 					      I40E_VLAN_TCI_MASK;
 				filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
 			}
@@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 			vlan_mask = item->mask;
 
 			if (!(vlan_spec && vlan_mask) ||
-			    vlan_mask->inner_type) {
+			    vlan_mask->hdr.eth_proto) {
 				rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
 					   item,
@@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
 
 	/* Get filter specification */
 	if (o_vlan_mask != NULL &&  i_vlan_mask != NULL) {
-		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci);
-		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci);
+		filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
+		filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
 	} else {
 			rte_flow_error_set(error, EINVAL,
 					   RTE_FLOW_ERROR_TYPE_ITEM,
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 0c848189776d..02e1457d8017 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 	vlan_spec = pattern->spec;
 	vlan_mask = pattern->mask;
 	if (!vlan_spec || !vlan_mask ||
-	    (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7)
+	    (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, pattern,
 					  "Pattern error.");
@@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev,
 
 	rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
 	rss_conf->region_queue_start = rss_act->queue[0];
-	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13;
+	rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
 	return 0;
 }
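
For reference, the shifts by 13 above extract the PCP bits of the TCI; the
usual decomposition, given a VLAN item spec (a sketch):

	uint16_t tci = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
	uint16_t pcp = tci >> 13;         /* priority code point, top 3 bits */
	uint16_t dei = (tci >> 12) & 0x1; /* drop eligible indicator */
	uint16_t vid = tci & 0x0fff;      /* VLAN ID, lower 12 bits */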
 
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 8f8087392538..a6c88cb55b88 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			}
 
 			if (eth_spec && eth_mask) {
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= IAVF_INSET_DMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									DST);
-				} else if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				} else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= IAVF_INSET_SMAC;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1,
 									ETH,
 									SRC);
 				}
 
-				if (eth_mask->type) {
-					if (eth_mask->type != RTE_BE16(0xffff)) {
+				if (eth_mask->hdr.ether_type) {
+					if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
 						rte_flow_error_set(error, EINVAL,
 							RTE_FLOW_ERROR_TYPE_ITEM,
 							item, "Invalid type mask.");
 						return -rte_errno;
 					}
 
-					ether_type = rte_be_to_cpu_16(eth_spec->type);
+					ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 					if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 						ether_type == RTE_ETHER_TYPE_IPV6) {
 						rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 4082c0069f31..74e1e7099b8c 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 			if (eth_spec && eth_mask) {
 				input = &outer_input_set;
 
-				if (!rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					*input |= IAVF_INSET_DMAC;
 					input_set_byte += 6;
 				} else {
@@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 					input_set_byte += 6;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					*input |= IAVF_INSET_SMAC;
 					input_set_byte += 6;
 				}
 
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					*input |= IAVF_INSET_ETHERTYPE;
 					input_set_byte += 2;
 				}
@@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
 
 				*input |= IAVF_INSET_VLAN_OUTER;
 
-				if (vlan_mask->tci)
+				if (vlan_mask->hdr.vlan_tci)
 					input_set_byte += 2;
 
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 868921cac595..08a80137e5b9 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item,
 		struct rte_ether_hdr *eth)
 {
 	memcpy(eth->src_addr.addr_bytes,
-			item->src.addr_bytes, sizeof(eth->src_addr));
+			item->hdr.src_addr.addr_bytes, sizeof(eth->src_addr));
 	memcpy(eth->dst_addr.addr_bytes,
-			item->dst.addr_bytes, sizeof(eth->dst_addr));
+			item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr));
 }
 
 static void
diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad,
 			eth_mask = item->mask;
 
 			if (eth_spec && eth_mask) {
-				if (rte_is_broadcast_ether_addr(&eth_mask->src) ||
-				    rte_is_broadcast_ether_addr(&eth_mask->dst)) {
+				if (rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr) ||
+				    rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid mac addr mask");
 					return -rte_errno;
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->src) &&
-				    !rte_is_zero_ether_addr(&eth_mask->src)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.src_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
 					input_set |= ICE_INSET_SMAC;
 					ice_memcpy(&filter->input.ext_data.src_mac,
-						   &eth_spec->src,
+						   &eth_spec->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.src_mac,
-						   &eth_mask->src,
+						   &eth_mask->hdr.src_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
 
-				if (!rte_is_zero_ether_addr(&eth_spec->dst) &&
-				    !rte_is_zero_ether_addr(&eth_mask->dst)) {
+				if (!rte_is_zero_ether_addr(&eth_spec->hdr.dst_addr) &&
+				    !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
 					input_set |= ICE_INSET_DMAC;
 					ice_memcpy(&filter->input.ext_data.dst_mac,
-						   &eth_spec->dst,
+						   &eth_spec->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 					ice_memcpy(&filter->input.ext_mask.dst_mac,
-						   &eth_mask->dst,
+						   &eth_mask->hdr.dst_addr,
 						   RTE_ETHER_ADDR_LEN,
 						   ICE_NONDMA_TO_NONDMA);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 7914ba940731..5d297afc290e 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1971,17 +1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(eth_spec && eth_mask))
 				break;
 
-			if (!rte_is_zero_ether_addr(&eth_mask->dst))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr))
 				*input_set |= ICE_INSET_DMAC;
-			if (!rte_is_zero_ether_addr(&eth_mask->src))
+			if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr))
 				*input_set |= ICE_INSET_SMAC;
 
 			next_type = (item + 1)->type;
 			/* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */
-			if (eth_mask->type == RTE_BE16(0xffff) &&
+			if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) &&
 			    next_type == RTE_FLOW_ITEM_TYPE_END) {
 				*input_set |= ICE_INSET_ETHERTYPE;
-				ether_type = rte_be_to_cpu_16(eth_spec->type);
+				ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 				if (ether_type == RTE_ETHER_TYPE_IPV4 ||
 				    ether_type == RTE_ETHER_TYPE_IPV6) {
@@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				     &filter->input.ext_data_outer :
 				     &filter->input.ext_data;
 			rte_memcpy(&p_ext_data->src_mac,
-				   &eth_spec->src, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->dst_mac,
-				   &eth_spec->dst, RTE_ETHER_ADDR_LEN);
+				   &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
 			rte_memcpy(&p_ext_data->ether_type,
-				   &eth_spec->type, sizeof(eth_spec->type));
+				   &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 60f7934a1697..d84061340e6c 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			if (eth_spec && eth_mask) {
-				const uint8_t *a = eth_mask->src.addr_bytes;
-				const uint8_t *b = eth_mask->dst.addr_bytes;
+				const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes;
+				const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes;
 				if (tunnel_valid)
 					input = &inner_input_set;
 				else
@@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 						break;
 					}
 				}
-				if (eth_mask->type)
+				if (eth_mask->hdr.ether_type)
 					*input |= ICE_INSET_ETHERTYPE;
 				list[t].type = (tunnel_valid  == 0) ?
 					ICE_MAC_OFOS : ICE_MAC_IL;
@@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				h = &list[t].h_u.eth_hdr;
 				m = &list[t].m_u.eth_hdr;
 				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-					if (eth_mask->src.addr_bytes[j]) {
+					if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 						h->src_addr[j] =
-						eth_spec->src.addr_bytes[j];
+						eth_spec->hdr.src_addr.addr_bytes[j];
 						m->src_addr[j] =
-						eth_mask->src.addr_bytes[j];
+						eth_mask->hdr.src_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
-					if (eth_mask->dst.addr_bytes[j]) {
+					if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 						h->dst_addr[j] =
-						eth_spec->dst.addr_bytes[j];
+						eth_spec->hdr.dst_addr.addr_bytes[j];
 						m->dst_addr[j] =
-						eth_mask->dst.addr_bytes[j];
+						eth_mask->hdr.dst_addr.addr_bytes[j];
 						i = 1;
 						input_set_byte++;
 					}
 				}
 				if (i)
 					t++;
-				if (eth_mask->type) {
+				if (eth_mask->hdr.ether_type) {
 					list[t].type = ICE_ETYPE_OL;
 					list[t].h_u.ethertype.ethtype_id =
-						eth_spec->type;
+						eth_spec->hdr.ether_type;
 					list[t].m_u.ethertype.ethtype_id =
-						eth_mask->type;
+						eth_mask->hdr.ether_type;
 					input_set_byte += 2;
 					t++;
 				}
@@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					*input |= ICE_INSET_VLAN_INNER;
 				}
 
-				if (vlan_mask->tci) {
+				if (vlan_mask->hdr.vlan_tci) {
 					list[t].h_u.vlan_hdr.vlan =
-						vlan_spec->tci;
+						vlan_spec->hdr.vlan_tci;
 					list[t].m_u.vlan_hdr.vlan =
-						vlan_mask->tci;
+						vlan_mask->hdr.vlan_tci;
 					input_set_byte += 2;
 				}
-				if (vlan_mask->inner_type) {
+				if (vlan_mask->hdr.eth_proto) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 				eth_mask = item->mask;
 			else
 				continue;
-			if (eth_mask->type == UINT16_MAX)
+			if (eth_mask->hdr.ether_type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
 
diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c
index 58a6a8a539c6..b677a0d61340 100644
--- a/drivers/net/igc/igc_flow.c
+++ b/drivers/net/igc/igc_flow.c
@@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER);
 
 	/* destination and source MAC address are not supported */
-	if (!rte_is_zero_ether_addr(&mask->src) ||
-		!rte_is_zero_ether_addr(&mask->dst))
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) ||
+		!rte_is_zero_ether_addr(&mask->hdr.dst_addr))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Only support ether-type");
 
 	/* ether-type mask bits must be all 1 */
-	if (IGC_NOT_ALL_BITS_SET(mask->type))
+	if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 				"Ethernet type mask bits must be all 1");
@@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item,
 	ether = &filter->ethertype;
 
 	/* get ether-type */
-	ether->ether_type = rte_be_to_cpu_16(spec->type);
+	ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type);
 
 	/* ether-type should not be IPv4 and IPv6 */
 	if (ether->ether_type == RTE_ETHER_TYPE_IPV4 ||
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index 5b57ee9341d3..ee56d0f43d93 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(&parser->key[0],
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
 			eth = item->spec;
 
 			rte_memcpy(parser->key,
-					eth->src.addr_bytes,
+					eth->hdr.src_addr.addr_bytes,
 					RTE_ETHER_ADDR_LEN);
 			break;
 
@@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[],
 			if (!outer_vlan) {
 				outer_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(outer_vlan->tci);
+				tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci);
 				parser->key[0]  = (tci & 0xff0) >> 4;
 				parser->key[1] |= (tci & 0x00f) << 4;
 			} else {
 				inner_vlan = item->spec;
 
-				tci = rte_be_to_cpu_16(inner_vlan->tci);
+				tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci);
 				parser->key[1] |= (tci & 0xf00) >> 8;
 				parser->key[2]  = (tci & 0x0ff);
 			}
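
The nibble shuffling above packs the 12-bit VID into the parser key. For
reference, hdr.vlan_tci carries the TCI in network byte order, laid out as
PCP(3) | DEI(1) | VID(12); a minimal extraction sketch, assuming a
struct rte_flow_item_vlan *vlan:

	uint16_t tci = rte_be_to_cpu_16(vlan->hdr.vlan_tci);
	uint16_t vid = tci & 0x0fff;        /* VLAN identifier */
	uint8_t  dei = (tci >> 12) & 0x1;   /* drop eligible indicator */
	uint8_t  pcp = tci >> 13;           /* priority code point */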
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 110ff34fcceb..a11da3dc8beb 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of the destination MAC address must be
 	 * all ones or all zeroes.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			/* Get the dst MAC. */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 				rule->ixgbe_fdir.formatted.inner_mac[j] =
-					eth_spec->dst.addr_bytes[j];
+					eth_spec->hdr.dst_addr.addr_bytes[j];
 			}
 		}
 
@@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct ixgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
@@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 		/* It's a per-byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		/* Get the dst MAC. */
 		for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
 			rule->ixgbe_fdir.formatted.inner_mac[j] =
-				eth_spec->dst.addr_bytes[j];
+				eth_spec->hdr.dst_addr.addr_bytes[j];
 		}
 	}
 
@@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		vlan_spec = item->spec;
 		vlan_mask = item->mask;
 
-		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
 
-		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
 		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
 		/* More than one tag is not supported. */
 
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 9d7247cf81d0..8ef9fd2db44e 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		uint32_t sum_dst = 0;
 		uint32_t sum_src = 0;
 
-		for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) {
-			sum_dst += mask->dst.addr_bytes[i];
-			sum_src += mask->src.addr_bytes[i];
+		for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) {
+			sum_dst += mask->hdr.dst_addr.addr_bytes[i];
+			sum_src += mask->hdr.src_addr.addr_bytes[i];
 		}
 		if (sum_src) {
 			msg = "mlx4 does not support source MAC matching";
 			goto error;
 		} else if (!sum_dst) {
 			flow->promisc = 1;
-		} else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) {
-			if (!(spec->dst.addr_bytes[0] & 1)) {
+		} else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) {
+			if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) {
 				msg = "mlx4 does not support the explicit"
 					" exclusion of all multicast traffic";
 				goto error;
@@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow,
 		flow->promisc = 1;
 		return 0;
 	}
-	memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
-	memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+	memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	/* Remove unwanted bits from values. */
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i)
 		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
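
The sum test above classifies the rule from the destination mask alone: an
all-zero mask means promiscuous matching, while a mask of 01:00:00:00:00:00
isolates the multicast group (I/G) bit, the LSB of the first address byte.
A one-line sketch of that bit test (equivalent to rte_is_multicast_ether_addr()
from rte_ether.h):

	/* I/G bit: LSB of the first destination MAC byte. */
	int is_mcast = spec->hdr.dst_addr.addr_bytes[0] & 0x01;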
@@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 	struct ibv_flow_spec_eth *eth;
 	const char *msg;
 
-	if (!mask || !mask->tci) {
+	if (!mask || !mask->hdr.vlan_tci) {
 		msg = "mlx4 cannot match all VLAN traffic while excluding"
 			" non-VLAN traffic, TCI VID must be specified";
 		goto error;
 	}
-	if (mask->tci != RTE_BE16(0x0fff)) {
+	if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		msg = "mlx4 does not support partial TCI VID matching";
 		goto error;
 	}
@@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow,
 		return 0;
 	eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size -
 		       sizeof(*eth));
-	eth->val.vlan_tag = spec->tci;
-	eth->mask.vlan_tag = mask->tci;
+	eth->val.vlan_tag = spec->hdr.vlan_tci;
+	eth->mask.vlan_tag = mask->hdr.vlan_tci;
 	eth->val.vlan_tag &= eth->mask.vlan_tag;
 	if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT)
 		flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL;
@@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 				       RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_eth){
 			/* Only destination MAC can be matched. */
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_eth_mask,
 		.mask_sz = sizeof(struct rte_flow_item_eth),
@@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4),
 		.mask_support = &(const struct rte_flow_item_vlan){
 			/* Only TCI VID matching is supported. */
-			.tci = RTE_BE16(0x0fff),
+			.hdr.vlan_tci = RTE_BE16(0x0fff),
 		},
 		.mask_default = &rte_flow_item_vlan_mask,
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
@@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	};
 	struct rte_flow_item_eth eth_spec;
 	const struct rte_flow_item_eth eth_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const struct rte_flow_item_eth eth_allmulti = {
-		.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_vlan vlan_spec;
 	const struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0x0fff),
+		.hdr.vlan_tci = RTE_BE16(0x0fff),
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	struct rte_ether_addr *rule_mac = &eth_spec.dst;
+	struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
 		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
-		&vlan_spec.tci :
+		&vlan_spec.hdr.vlan_tci :
 		NULL;
 	uint16_t vlan = 0;
 	struct rte_flow *flow;
@@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 		if (i < RTE_DIM(priv->mac))
 			mac = &priv->mac[i];
 		else
-			mac = &eth_mask.dst;
+			mac = &eth_mask.hdr.dst_addr;
 		if (rte_is_zero_ether_addr(mac))
 			continue;
 		/* Check if MAC flow rule is already present. */
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c9666..604384a24253 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data {
 
 /* Xmacro used to create generic item setter from items */
 #define LIST_OF_FIELDS_INFO \
-	X(SET_BE16,	eth_type,		v->type,		rte_flow_item_eth) \
-	X(SET_BE32P,	eth_smac_47_16,		&v->src.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_smac_15_0,		&v->src.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE32P,	eth_dmac_47_16,		&v->dst.addr_bytes[0],	rte_flow_item_eth) \
-	X(SET_BE16P,	eth_dmac_15_0,		&v->dst.addr_bytes[4],	rte_flow_item_eth) \
-	X(SET_BE16,	tci,			v->tci,			rte_flow_item_vlan) \
+	X(SET_BE16,	eth_type,		v->hdr.ether_type,		rte_flow_item_eth) \
+	X(SET_BE32P,	eth_smac_47_16,		&v->hdr.src_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_smac_15_0,		&v->hdr.src_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE32P,	eth_dmac_47_16,		&v->hdr.dst_addr.addr_bytes[0],	rte_flow_item_eth) \
+	X(SET_BE16P,	eth_dmac_15_0,		&v->hdr.dst_addr.addr_bytes[4],	rte_flow_item_eth) \
+	X(SET_BE16,	tci,			v->hdr.vlan_tci,		rte_flow_item_vlan) \
 	X(SET,		ipv4_ihl,		v->ihl,			rte_ipv4_hdr) \
 	X(SET,		ipv4_tos,		v->type_of_service,	rte_ipv4_hdr) \
 	X(SET,		ipv4_time_to_live,	v->time_to_live,	rte_ipv4_hdr) \
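
For readers unfamiliar with the pattern: LIST_OF_FIELDS_INFO is an X-macro,
instantiated with different definitions of X() to stamp out one setter per
field, which is why each entry above only needs its accessor expression
(v->hdr.ether_type, ...) updated. A reduced, purely illustrative expansion
(the copy helper name is hypothetical; the real setters are defined further
down in this file):

	#define X(set_type, func_name, value, item_type)		\
	static void mlx5dr_definer_##func_name##_set(			\
			struct mlx5dr_definer_fc *fc,			\
			const void *item_spec, uint8_t *tag)		\
	{								\
		const struct item_type *v = item_spec;			\
		copy_##set_type(tag, fc, value); /* hypothetical */	\
	}
	LIST_OF_FIELDS_INFO
	#undef X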
@@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (m->type) {
+	if (m->hdr.ether_type) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
@@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 47_16 */
-	if (memcmp(m->src.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set;
@@ -432,7 +432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check SMAC 15_0 */
-	if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set;
@@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 47_16 */
-	if (memcmp(m->dst.addr_bytes, empty_mac, 4)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set;
@@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd,
 	}
 
 	/* Check DMAC 15_0 */
-	if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) {
+	if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set;
@@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
 	}
 
-	if (m->tci) {
+	if (m->hdr.vlan_tci) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_tci_set;
 		DR_CALC_SET(fc, eth_l2, tci, inner);
 	}
 
-	if (m->inner_type) {
+	if (m->hdr.eth_proto) {
 		fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_set = &mlx5dr_definer_eth_type_set;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a0cf677fb099..2512d6b52db9 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		return RTE_FLOW_ITEM_TYPE_VOID;
 	switch (item->type) {
 	case RTE_FLOW_ITEM_TYPE_ETH:
-		MLX5_XSET_ITEM_MASK_SPEC(eth, type);
+		MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VLAN:
-		MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type);
+		MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto);
 		if (!mask)
 			return RTE_FLOW_ITEM_TYPE_VOID;
 		ret = mlx5_ethertype_to_item_type(spec, mask, false);
@@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_eth *mask = item->mask;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = ext_vlan_sup ? 1 : 0,
 	};
 	int ret;
@@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = item->spec;
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 	};
 	uint16_t vlan_tag = 0;
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		}
 	}
 	if (spec) {
-		vlan_tag = spec->tci;
-		vlan_tag &= mask->tci;
+		vlan_tag = spec->hdr.vlan_tci;
+		vlan_tag &= mask->hdr.vlan_tci;
 	}
 	/*
 	 * From verbs perspective an empty VLAN is equivalent
@@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev)
 	 * a multicast dst mac causes kernel to give low priority to this flow.
 	 */
 	static const struct rte_flow_item_eth lacp_spec = {
-		.type = RTE_BE16(0x8809),
+		.hdr.ether_type = RTE_BE16(0x8809),
 	};
 	static const struct rte_flow_item_eth lacp_mask = {
-		.type = 0xffff,
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 62c38b87a1f0..ff915183b7cc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac
 	memset(&eth, 0, sizeof(eth));
 	memset(&eth_mask, 0, sizeof(eth_mask));
 	if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) {
-		memcpy(&eth.src.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.src.addr_bytes));
-		memcpy(&eth_mask.src.addr_bytes,
-		       &rte_flow_item_eth_mask.src.addr_bytes,
-		       sizeof(eth_mask.src.addr_bytes));
+		memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.src_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.src_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.src_addr.addr_bytes));
 	} else {
-		memcpy(&eth.dst.addr_bytes, &conf->mac_addr,
-		       sizeof(eth.dst.addr_bytes));
-		memcpy(&eth_mask.dst.addr_bytes,
-		       &rte_flow_item_eth_mask.dst.addr_bytes,
-		       sizeof(eth_mask.dst.addr_bytes));
+		memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr,
+		       sizeof(eth.hdr.dst_addr.addr_bytes));
+		memcpy(&eth_mask.hdr.dst_addr.addr_bytes,
+		       &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes,
+		       sizeof(eth_mask.hdr.dst_addr.addr_bytes));
 	}
 	item.spec = &eth;
 	item.mask = &eth_mask;
@@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vlan *mask = item->mask;
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(UINT16_MAX),
-		.inner_type = RTE_BE16(UINT16_MAX),
+		.hdr.vlan_tci = RTE_BE16(UINT16_MAX),
+		.hdr.eth_proto = RTE_BE16(UINT16_MAX),
 		.has_more_vlan = 1,
 	};
 	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
@@ -2399,7 +2399,7 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
-	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
+	if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) {
 		struct mlx5_priv *priv = dev->data->dev_private;
 
 		if (priv->vmwa_context) {
@@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 				  struct rte_vlan_hdr *vlan)
 {
 	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
+		.hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK |
 				MLX5DV_FLOW_VLAN_VID_MASK),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	if (items == NULL)
@@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items,
 		if (!vlan_m)
 			vlan_m = &nic_mask;
 		/* Only full-match values are accepted */
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_PCP_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_PCP_MASK_BE);
 		}
-		if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
+		if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) ==
 		     MLX5DV_FLOW_VLAN_VID_MASK_BE) {
 			vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK;
 			vlan->vlan_tci |=
-				rte_be_to_cpu_16(vlan_v->tci &
+				rte_be_to_cpu_16(vlan_v->hdr.vlan_tci &
 						 MLX5DV_FLOW_VLAN_VID_MASK_BE);
 		}
-		if (vlan_m->inner_type == nic_mask.inner_type)
-			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type &
-							   vlan_m->inner_type);
+		if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto)
+			vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto &
+							   vlan_m->hdr.eth_proto);
 	}
 }
 
@@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push vlan action for VF representor "
 					  "not supported on NIC table");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_PCP_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) &&
 	    !(mlx5_flow_find_action
@@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev,
 					  "push VLAN action cannot figure out "
 					  "PCP value");
 	if (vlan_m &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
-	    (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) &&
+	    (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) !=
 		MLX5DV_FLOW_VLAN_VID_MASK_BE &&
 	    !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) &&
 	    !(mlx5_flow_find_action
@@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
@@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *eth_m;
 	const struct rte_flow_item_eth *eth_v;
 	const struct rte_flow_item_eth nic_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-		.type = RTE_BE16(0xffff),
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.ether_type = RTE_BE16(0xffff),
 		.has_vlan = 0,
 	};
 	void *hdrs_v;
@@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	/* The value must be in the range of the mask. */
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16);
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i];
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16);
 	/* The value must be in the range of the mask. */
-	for (i = 0; i < sizeof(eth_m->dst); ++i)
-		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i)
+		l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i];
 	/*
 	 * HW supports match on one Ethertype, the Ethertype following the last
 	 * VLAN tag of the packet (see PRM).
@@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 	 * ethertype, and use ip_version field instead.
 	 * eCPRI over Ether layer will use type value 0xAEFE.
 	 */
-	if (eth_m->type == 0xFFFF) {
-		rte_be16_t type = eth_v->type;
+	if (eth_m->hdr.ether_type == 0xFFFF) {
+		rte_be16_t type = eth_v->hdr.ether_type;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
@@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M) {
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1);
-			type = eth_vv->type;
+			type = eth_vv->hdr.ether_type;
 		}
 		/* Set cvlan_tag mask for any single/multi/un-tagged case. */
 		switch (type) {
@@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item,
 			return;
 	}
 	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype);
-	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+	*(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type;
 }
 
 /**
@@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		 * and pre-validated.
 		 */
 		if (vlan_vv)
-			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff;
+			wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff;
 	}
 	/*
 	 * When VLAN item exists in flow, mark packet as tagged,
@@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m,
 			 &rte_flow_item_vlan_mask);
-	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13);
@@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 	 * HW is optimized for IPv4/IPv6. In such cases, avoid setting
 	 * ethertype, and use ip_version field instead.
 	 */
-	if (vlan_m->inner_type == 0xFFFF) {
-		rte_be16_t inner_type = vlan_v->inner_type;
+	if (vlan_m->hdr.eth_proto == 0xFFFF) {
+		rte_be16_t inner_type = vlan_v->hdr.eth_proto;
 
 		/*
 		 * When set the matcher mask, refer to the original spec
 		 * value.
 		 */
 		if (key_type == MLX5_SET_MATCHER_SW_M)
-			inner_type = vlan_vv->inner_type;
+			inner_type = vlan_vv->hdr.eth_proto;
 		switch (inner_type) {
 		case RTE_BE16(RTE_ETHER_TYPE_VLAN):
 			MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1);
@@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item,
 		return;
 	}
 	MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype,
-		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
+		 rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a3c8056515da..b8f96839c8bf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 
 /* Ethernet item spec for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for promiscuous mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = {
-	.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for all multicast mode. */
 static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = {
-	.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = {
-	.dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv4 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = {
-	.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 /* Ethernet item mask for IPv6 multicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = {
-	.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item mask for unicast traffic. */
 static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /* Ethernet item spec for broadcast. */
 static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
-	.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-	.type = 0,
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+	.hdr.ether_type = 0,
 };
 
 /**
@@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 		.egress = 1,
 	};
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth promisc = {
-		.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-		.type = 0,
+		.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.ether_type = 0,
 	};
 	struct rte_flow_item eth_all[] = {
 		[0] = {
@@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev,
 	for (i = 0; i < priv->vlan_filter_n; ++i) {
 		uint16_t vlan = priv->vlan_filter[i];
 		struct rte_flow_item_vlan vlan_spec = {
-			.tci = rte_cpu_to_be_16(vlan),
+			.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 		};
 
 		items[1].spec = &vlan_spec;
@@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0))
 			return -rte_errno;
 	}
@@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev,
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN);
 		for (j = 0; j < priv->vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 
 			items[1].spec = &vlan_spec;
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 28ea28bfbe02..1902b97ec6d4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
 	if (spec) {
 		unsigned int i;
 
-		memcpy(&eth.val.dst_mac, spec->dst.addr_bytes,
+		memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.val.src_mac, spec->src.addr_bytes,
+		memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.val.ether_type = spec->type;
-		memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes,
+		eth.val.ether_type = spec->hdr.ether_type;
+		memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		memcpy(&eth.mask.src_mac, mask->src.addr_bytes,
+		memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes,
 			RTE_ETHER_ADDR_LEN);
-		eth.mask.ether_type = mask->type;
+		eth.mask.ether_type = mask->hdr.ether_type;
 		/* Remove unwanted bits from values. */
 		for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) {
 			eth.val.dst_mac[i] &= eth.mask.dst_mac[i];
@@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	if (spec) {
-		eth.val.vlan_tag = spec->tci;
-		eth.mask.vlan_tag = mask->tci;
+		eth.val.vlan_tag = spec->hdr.vlan_tci;
+		eth.mask.vlan_tag = mask->hdr.vlan_tci;
 		eth.val.vlan_tag &= eth.mask.vlan_tag;
-		eth.val.ether_type = spec->inner_type;
-		eth.mask.ether_type = mask->inner_type;
+		eth.val.ether_type = spec->hdr.eth_proto;
+		eth.mask.ether_type = mask->hdr.eth_proto;
 		eth.val.ether_type &= eth.mask.ether_type;
 	}
 	if (!(item_flags & l2m))
@@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
 		flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth);
 	if (!tunnel)
 		dev_flow->handle->vf_vlan.tag =
-			rte_be_to_cpu_16(spec->tci) & 0x0fff;
+			rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff;
 }
 
 /**
@@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_eth *)
-					 items->spec)->type;
+					 items->spec)->hdr.ether_type;
 				ether_type &=
 					((const struct rte_flow_item_eth *)
-					 items->mask)->type;
+					 items->mask)->hdr.ether_type;
 				if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN))
 					is_empty_vlan = true;
 				ether_type = rte_be_to_cpu_16(ether_type);
@@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			if (items->mask != NULL && items->spec != NULL) {
 				ether_type =
 					((const struct rte_flow_item_vlan *)
-					 items->spec)->inner_type;
+					 items->spec)->hdr.eth_proto;
 				ether_type &=
 					((const struct rte_flow_item_vlan *)
-					 items->mask)->inner_type;
+					 items->mask)->hdr.eth_proto;
 				ether_type = rte_be_to_cpu_16(ether_type);
 			} else {
 				ether_type = 0;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f54443ed1ac4..3457bf65d3e1 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1552,19 +1552,19 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_item_eth bcast = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	struct rte_flow_item_eth ipv6_multi_spec = {
-		.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth ipv6_multi_mask = {
-		.dst.addr_bytes = "\xff\xff\x00\x00\x00\x00",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast = {
-		.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+		.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
 	};
 	struct rte_flow_item_eth unicast_mask = {
-		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	};
 	const unsigned int vlan_filter_n = priv->vlan_filter_n;
 	const struct rte_ether_addr cmp = {
@@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		return 0;
 	if (dev->data->promiscuous) {
 		struct rte_flow_item_eth promisc = {
-			.dst.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &promisc, &promisc);
@@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	}
 	if (dev->data->all_multicast) {
 		struct rte_flow_item_eth multicast = {
-			.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
-			.src.addr_bytes = "\x00\x00\x00\x00\x00\x00",
-			.type = 0,
+			.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+			.hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00",
+			.hdr.ether_type = 0,
 		};
 
 		ret = mlx5_ctrl_flow(dev, &multicast, &multicast);
@@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 			uint16_t vlan = priv->vlan_filter[i];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
@@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 
 		if (!memcmp(mac, &cmp, sizeof(*mac)))
 			continue;
-		memcpy(&unicast.dst.addr_bytes,
+		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
 		       RTE_ETHER_ADDR_LEN);
 		for (j = 0; j != vlan_filter_n; ++j) {
 			uint16_t vlan = priv->vlan_filter[j];
 
 			struct rte_flow_item_vlan vlan_spec = {
-				.tci = rte_cpu_to_be_16(vlan),
+				.hdr.vlan_tci = rte_cpu_to_be_16(vlan),
 			};
 			struct rte_flow_item_vlan vlan_mask =
 				rte_flow_item_vlan_mask;
diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c
index 99695b91c496..e74a5f83f55b 100644
--- a/drivers/net/mvpp2/mrvl_flow.c
+++ b/drivers/net/mvpp2/mrvl_flow.c
@@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec,
 	const uint8_t *k, *m;
 
 	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
+		k = spec->hdr.dst_addr.addr_bytes;
+		m = mask->hdr.dst_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_DA;
 	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
+		k = spec->hdr.src_addr.addr_bytes;
+		m = mask->hdr.src_addr.addr_bytes;
 
 		flow->table_key.proto_field[flow->rule.num_fields].field.eth =
 			MV_NET_ETH_F_SA;
@@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->type);
+	k = rte_be_to_cpu_16(spec->hdr.ether_type);
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 2;
 
-	k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK;
+	k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec,
 	mrvl_alloc_key_mask(key_field);
 	key_field->size = 1;
 
-	k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13;
+	k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13;
 	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
 
 	flow->table_key.proto_field[flow->rule.num_fields].proto =
@@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
 
 	memset(&zero, 0, sizeof(zero));
 
-	if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+	if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) {
 		ret = mrvl_parse_dmac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+	if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) {
 		ret = mrvl_parse_smac(spec, mask, flow);
 		if (ret)
 			goto out;
 	}
 
-	if (mask->type) {
+	if (mask->hdr.ether_type) {
 		MRVL_LOG(WARNING, "eth type mask is ignored");
 		ret = mrvl_parse_type(spec, mask, flow);
 		if (ret)
@@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 	if (ret)
 		return ret;
 
-	m = rte_be_to_cpu_16(mask->tci);
+	m = rte_be_to_cpu_16(mask->hdr.vlan_tci);
 	if (m & MRVL_VLAN_ID_MASK) {
 		MRVL_LOG(WARNING, "vlan id mask is ignored");
 		ret = mrvl_parse_vlan_id(spec, mask, flow);
@@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item,
 			goto out;
 	}
 
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		struct rte_flow_item_eth spec_eth = {
-			.type = spec->inner_type,
+			.hdr.ether_type = spec->hdr.eth_proto,
 		};
 		struct rte_flow_item_eth mask_eth = {
-			.type = mask->inner_type,
+			.hdr.ether_type = mask->hdr.eth_proto,
 		};
 
 		/* TPID is not supported, so if ETH_TYPE was selected,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ff2e21c817b4..bd3a8d2a3b2f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1099,11 +1099,11 @@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	eth = (void *)*mbuf_off;
 
 	if (is_mask) {
-		memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	} else {
-		memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN);
-		memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+		memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
 	}
 
 	eth->mpls_lse = 0;
@@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	if (is_mask) {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data;
-		meta_tci->tci |= mask->tci;
+		meta_tci->tci |= mask->hdr.vlan_tci;
 	} else {
 		meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-		meta_tci->tci |= spec->tci;
+		meta_tci->tci |= spec->hdr.vlan_tci;
 	}
 
 	return 0;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index fb59abd0b563..f098edc6eb33 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	const struct rte_flow_item_eth *spec = NULL;
 	const struct rte_flow_item_eth *mask = NULL;
 	const struct rte_flow_item_eth supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
-		.type = 0xffff,
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.ether_type = 0xffff,
 	};
 	const struct rte_flow_item_eth ifrm_supp_mask = {
-		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+		.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
 	};
 	const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = {
 		0x01, 0x00, 0x00, 0x00, 0x00, 0x00
@@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) {
+	if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) {
 		efx_spec->efs_match_flags |= is_ifrm ?
 			EFX_FILTER_MATCH_IFRM_LOC_MAC :
 			EFX_FILTER_MATCH_LOC_MAC;
-		rte_memcpy(loc_mac, spec->dst.addr_bytes,
+		rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (memcmp(mask->dst.addr_bytes, ig_mask,
+	} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
 			  EFX_MAC_ADDR_LEN) == 0) {
-		if (rte_is_unicast_ether_addr(&spec->dst))
+		if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_UCAST_DST;
@@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 			efx_spec->efs_match_flags |= is_ifrm ?
 				EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST :
 				EFX_FILTER_MATCH_UNKNOWN_MCAST_DST;
-	} else if (!rte_is_zero_ether_addr(&mask->dst)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * ethertype masks are equal to zero in inner frame,
 	 * so these fields are filled in only for the outer frame
 	 */
-	if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) {
+	if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
-		rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes,
+		rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
 			   EFX_MAC_ADDR_LEN);
-	} else if (!rte_is_zero_ether_addr(&mask->src)) {
+	} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		goto fail_bad_mask;
 	}
 
@@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
 	 * Ether type is in big-endian byte order in item and
 	 * in little-endian in efx_spec, so byte swap is used
 	 */
-	if (mask->type == supp_mask.type) {
+	if (mask->hdr.ether_type == supp_mask.hdr.ether_type) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->type);
-	} else if (mask->type != 0) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type);
+	} else if (mask->hdr.ether_type != 0) {
 		goto fail_bad_mask;
 	}
 
@@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
-		.inner_type = RTE_BE16(0xffff),
+		.hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
+		.hdr.eth_proto = RTE_BE16(0xffff),
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	 * If two VLAN items are included, the first matches
 	 * the outer tag and the next matches the inner tag.
 	 */
-	if (mask->tci == supp_mask.tci) {
+	if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) {
 		/* Apply mask to keep VID only */
-		vid = rte_bswap16(spec->tci & mask->tci);
+		vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci);
 
 		if (!(efx_spec->efs_match_flags &
 		      EFX_FILTER_MATCH_OUTER_VID)) {
@@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 				   "VLAN TPID matching is not supported");
 		return -rte_errno;
 	}
-	if (mask->inner_type == supp_mask.inner_type) {
+	if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) {
 		efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
-		efx_spec->efs_ether_type = rte_bswap16(spec->inner_type);
-	} else if (mask->inner_type) {
+		efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto);
+	} else if (mask->hdr.eth_proto) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Bad mask for VLAN inner_type");
+				   "Bad mask for VLAN inner type");
 		return -rte_errno;
 	}
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 421bb6da9582..710d04be13af 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type),
-		offsetof(struct rte_flow_item_eth, type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type),
+		offsetof(struct rte_flow_item_eth, hdr.ether_type),
 	},
 	{
 		EFX_MAE_FIELD_ETH_DADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst),
-		offsetof(struct rte_flow_item_eth, dst),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr),
+		offsetof(struct rte_flow_item_eth, hdr.dst_addr),
 	},
 	{
 		EFX_MAE_FIELD_ETH_SADDR_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src),
-		offsetof(struct rte_flow_item_eth, src),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr),
+		offsetof(struct rte_flow_item_eth, hdr.src_addr),
 	},
 };
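
Each locator pairs an MAE match field ID with the size and offset of the
corresponding rte_flow item field, so one generic routine can copy spec and
mask bytes for any field. A hedged sketch of such a consumer loop (member
names assumed from the initializers above; item_spec/item_mask stand for the
item's spec and mask viewed as byte pointers):

	for (i = 0; i < nb_flocs; ++i) {
		const struct sfc_mae_field_locator *fl = &flocs[i];

		if (fl->field_id == SFC_MAE_FIELD_HANDLING_DEFERRED)
			continue; /* ether_type is reconciled after all items */

		rc = efx_mae_match_spec_field_set(spec_mae, fl->field_id,
						  fl->size, item_spec + fl->ofst,
						  fl->size, item_mask + fl->ofst);
		if (rc != 0)
			break;
	}

Keeping the offsetof()/RTE_SIZEOF_FIELD() pairs in sync with the renamed
hdr fields is all this hunk changes.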
 
@@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		ethertypes[0].value = item_spec->type;
-		ethertypes[0].mask = item_mask->type;
+		ethertypes[0].value = item_spec->hdr.ether_type;
+		ethertypes[0].mask = item_mask->hdr.ether_type;
 		if (item_mask->has_vlan) {
 			pdata->has_ovlan_mask = B_TRUE;
 			if (item_spec->has_vlan)
@@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 	/* Outermost tag */
 	{
 		EFX_MAE_FIELD_VLAN0_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 
 	/* Innermost tag */
 	{
 		EFX_MAE_FIELD_VLAN1_TCI_BE,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci),
-		offsetof(struct rte_flow_item_vlan, tci),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci),
+		offsetof(struct rte_flow_item_vlan, hdr.vlan_tci),
 	},
 	{
 		/*
@@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = {
 		 * The field is handled by sfc_mae_rule_process_pattern_data().
 		 */
 		SFC_MAE_FIELD_HANDLING_DEFERRED,
-		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type),
-		offsetof(struct rte_flow_item_vlan, inner_type),
+		RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto),
+		offsetof(struct rte_flow_item_vlan, hdr.eth_proto),
 	},
 };
 
@@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item,
 		 * sfc_mae_rule_process_pattern_data() will consider them
 		 * altogether when the rest of the items have been parsed.
 		 */
-		et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type;
-		et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type;
-		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci;
+		et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto;
+		et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto;
+		pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci;
 		if (item_mask->has_more_vlan) {
 			if (pdata->nb_vlan_tags ==
 			    SFC_MAE_MATCH_VLAN_MAX_NTAGS) {
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index efe66fe0593d..ed4d42f92f9f 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = {
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
 		.mask = &(const struct rte_flow_item_eth){
-			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-			.type = -1,
+			.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.hdr.ether_type = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_eth),
 		.default_mask = &rte_flow_item_eth_mask,
@@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = {
 		.mask = &(const struct rte_flow_item_vlan){
 			/* DEI matching is not supported */
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-			.tci = 0xffef,
+			.hdr.vlan_tci = 0xffef,
 #else
-			.tci = 0xefff,
+			.hdr.vlan_tci = 0xefff,
 #endif
-			.inner_type = -1,
+			.hdr.eth_proto = -1,
 		},
 		.mask_sz = sizeof(struct rte_flow_item_vlan),
 		.default_mask = &rte_flow_item_vlan_mask,
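
The two literals above are the same mask spelled for either host byte order:
hdr.vlan_tci is big-endian, and the mask keeps PCP and VID while clearing
bit 12, the DEI bit, which TC cannot match. An equivalent endianness-neutral
spelling, for reference only (RTE_BE16 folds at compile time):

	const struct rte_flow_item_vlan vlan_mask_no_dei = {
		/* PCP(3) | DEI(1) | VID(12): everything except DEI */
		.hdr.vlan_tci = RTE_BE16(0xefff),
	};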
@@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+				.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 			},
 		},
 		.items[1] = {
@@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x33\x33\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = {
 		.items[0] = {
 			.type = RTE_FLOW_ITEM_TYPE_ETH,
 			.mask =  &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 			.spec = &(const struct rte_flow_item_eth){
-				.dst.addr_bytes = "\x01\x00\x00\x00\x00\x00",
+				.hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00",
 			},
 		},
 		.items[1] = {
@@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data)
 	if (!mask)
 		mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask;
 	/* TC does not support eth_type masking. Only accept if exact match. */
-	if (mask->type && mask->type != 0xffff)
+	if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff)
 		return -1;
 	if (!spec)
 		return 0;
 	/* store eth_type for consistency if ipv4/6 pattern item comes next */
-	if (spec->type & mask->type)
-		info->eth_type = spec->type;
+	if (spec->hdr.ether_type & mask->hdr.ether_type)
+		info->eth_type = spec->hdr.ether_type;
 	if (!flow)
 		return 0;
 	msg = &flow->msg;
-	if (!rte_is_zero_ether_addr(&mask->dst)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST,
 			RTE_ETHER_ADDR_LEN,
-			   &spec->dst.addr_bytes);
+			   &spec->hdr.dst_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->dst.addr_bytes);
+			   &mask->hdr.dst_addr.addr_bytes);
 	}
-	if (!rte_is_zero_ether_addr(&mask->src)) {
+	if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
 		tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC,
 			RTE_ETHER_ADDR_LEN,
-			&spec->src.addr_bytes);
+			&spec->hdr.src_addr.addr_bytes);
 		tap_nlattr_add(&msg->nh,
 			   TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN,
-			   &mask->src.addr_bytes);
+			   &mask->hdr.src_addr.addr_bytes);
 	}
 	return 0;
 }
@@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 	if (info->vlan)
 		return -1;
 	info->vlan = 1;
-	if (mask->inner_type) {
+	if (mask->hdr.eth_proto) {
 		/* TC does not support partial eth_type masking */
-		if (mask->inner_type != RTE_BE16(0xffff))
+		if (mask->hdr.eth_proto != RTE_BE16(0xffff))
 			return -1;
-		info->eth_type = spec->inner_type;
+		info->eth_type = spec->hdr.eth_proto;
 	}
 	if (!flow)
 		return 0;
@@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data)
 #define VLAN_ID(tci) ((tci) & 0xfff)
 	if (!spec)
 		return 0;
-	if (spec->tci) {
-		uint16_t tci = ntohs(spec->tci) & mask->tci;
+	if (spec->hdr.vlan_tci) {
+		uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci;
 		uint16_t prio = VLAN_PRIO(tci);
 		uint8_t vid = VLAN_ID(tci);
 
@@ -1681,7 +1681,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 	};
 	struct rte_flow_item *items = implicit_rte_flows[idx].items;
 	struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr;
-	struct rte_flow_item_eth eth_local = { .type = 0 };
+	struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 };
 	unsigned int if_index = pmd->remote_if_index;
 	struct rte_flow *remote_flow = NULL;
 	struct nlmsg *msg = NULL;
@@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd,
 		 * eth addr couldn't be set in implicit_rte_flows[] as it is not
 		 * known at compile time.
 		 */
-		memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr));
+		memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr));
 		items = items_local;
 	}
 	tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7b18dca7e8d2..7ef52d0b0fcd 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	 * Mask bits of destination MAC address must be full
 	 * of 1 or full of 0.
 	 */
-	if (!rte_is_zero_ether_addr(&eth_mask->src) ||
-	    (!rte_is_zero_ether_addr(&eth_mask->dst) &&
-	     !rte_is_broadcast_ether_addr(&eth_mask->dst))) {
+	if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
+	    (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
+	     !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ether address mask");
 		return -rte_errno;
 	}
 
-	if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) {
+	if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
 		rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
 				item, "Invalid ethertype mask");
@@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
 	/* If mask bits of destination MAC address
 	 * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
 	 */
-	if (rte_is_broadcast_ether_addr(&eth_mask->dst)) {
-		filter->mac_addr = eth_spec->dst;
+	if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
+		filter->mac_addr = eth_spec->hdr.dst_addr;
 		filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
 	} else {
 		filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
 	}
-	filter->ether_type = rte_be_to_cpu_16(eth_spec->type);
+	filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
 	/* Check if the next non-void item is END. */
 	item = next_no_void_pattern(pattern, item);
@@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			eth_mask = item->mask;
 
 			/* Ether type should be masked. */
-			if (eth_mask->type ||
+			if (eth_mask->hdr.ether_type ||
 			    rule->mode == RTE_FDIR_MODE_SIGNATURE) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
@@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			 * and don't support dst MAC address mask.
 			 */
 			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-				if (eth_mask->src.addr_bytes[j] ||
-					eth_mask->dst.addr_bytes[j] != 0xFF) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+					eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
 					memset(rule, 0,
 					sizeof(struct txgbe_fdir_rule));
 					rte_flow_error_set(error, EINVAL,
@@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	eth_mask = item->mask;
 
 	/* Ether type should be masked. */
-	if (eth_mask->type) {
+	if (eth_mask->hdr.ether_type) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* src MAC address should be masked. */
 	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0,
 			       sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.34.1


* [PATCH v7 2/7] net: add smaller fields for VXLAN
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 3/7] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz; +Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

The VXLAN and VXLAN-GPE headers were bundling reserved fields
together with meaningful fields inside big uint32_t struct members.

More precise definitions are added as a union with the old ones.

The new struct members are smaller in size and have shorter names.
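
For illustration only (not part of this patch), a minimal sketch of
reading the VNI through the new byte-wise fields; the helper name is
hypothetical:

  #include <stdint.h>
  #include <rte_vxlan.h>

  static inline uint32_t
  vxlan_hdr_vni(const struct rte_vxlan_hdr *vxh)
  {
  	/*
  	 * The three vni bytes are in network order; the legacy
  	 * equivalent is rte_be_to_cpu_32(vxh->vx_vni) >> 8.
  	 */
  	return ((uint32_t)vxh->vni[0] << 16) |
  	       ((uint32_t)vxh->vni[1] << 8) |
  	       (uint32_t)vxh->vni[2];
  }

Because both layouts live in one union, code like the above can be
migrated field by field without any binary change.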

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t    flags;    /**< Should be 8 (I flag). */
+			uint8_t    rsvd0[3]; /**< Reserved. */
+			uint8_t    vni[3];   /**< VXLAN identifier. */
+			uint8_t    rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;
 
 /** VXLAN-GPE tunnel header length. */
-- 
2.34.1


* [PATCH v7 3/7] ethdev: use VXLAN protocol struct for flow matching
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 2/7] net: add smaller fields for VXLAN Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 4/7] ethdev: use GTP " Ferruh Yigit
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Ajit Khaparde, Somnath Kotur, Dongdong Liu, Yisen Zhuang,
	Beilei Xing, Qiming Yang, Qi Zhang, Rosen Xu, Wenjun Wu,
	Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added
in an unnamed union, keeping old field names.

The VXLAN header struct members (including VXLAN-GPE) are now used
in apps and drivers instead of the redundant flow item fields.
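
For illustration only (not part of this patch), a minimal sketch of
filling a VXLAN flow item through the new hdr member; the helper name
is hypothetical:

  #include <string.h>
  #include <stdint.h>
  #include <rte_flow.h>

  static void
  vxlan_item_fill(struct rte_flow_item *item,
  		struct rte_flow_item_vxlan *spec,
  		struct rte_flow_item_vxlan *mask,
  		uint32_t vni)
  {
  	memset(spec, 0, sizeof(*spec));
  	memset(mask, 0, sizeof(*mask));
  	/* VNI bytes are stored most-significant first. */
  	spec->hdr.vni[0] = (vni >> 16) & 0xff;
  	spec->hdr.vni[1] = (vni >> 8) & 0xff;
  	spec->hdr.vni[2] = vni & 0xff;
  	memset(mask->hdr.vni, 0xff, sizeof(mask->hdr.vni));
  	item->type = RTE_FLOW_ITEM_TYPE_VXLAN;
  	item->spec = spec;
  	item->mask = mask;
  	item->last = NULL;
  }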

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
 
 
-	item_vxlan.vni[2] = 1;
+	item_vxlan.hdr.vni[2] = 1;
 	items[3].spec = &item_vxlan;
 	items[3].mask = &item_vxlan;
 	items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN;
diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index b7f51030a119..a58245239ba1 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items,
 
 	/* Set standard vxlan vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_masks[ti].vni[2 - i] = 0xff;
+		vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* Standard vxlan flags */
-	vxlan_specs[ti].flags = 0x8;
+	vxlan_specs[ti].hdr.flags = 0x8;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN;
 	items[items_counter].spec = &vxlan_specs[ti];
@@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items,
 
 	/* Set vxlan-gpe vni */
 	for (i = 0; i < 3; i++) {
-		vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8);
-		vxlan_gpe_masks[ti].vni[2 - i] = 0xff;
+		vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8);
+		vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff;
 	}
 
 	/* vxlan-gpe flags */
-	vxlan_gpe_specs[ti].flags = 0x0c;
+	vxlan_gpe_specs[ti].hdr.flags = 0x0c;
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE;
 	items[items_counter].spec = &vxlan_gpe_specs[ti];
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 694a7eb647c5..b904f8c3d45c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3984,7 +3984,7 @@ static const struct token token_list[] = {
 		.help = "VXLAN identifier",
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)),
 	},
 	[ITEM_VXLAN_LAST_RSVD] = {
 		.name = "last_rsvd",
@@ -3992,7 +3992,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
-					     rsvd1)),
+					     hdr.rsvd1)),
 	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
@@ -4210,7 +4210,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe,
-					     vni)),
+					     hdr.vni)),
 	},
 	[ITEM_ARP_ETH_IPV4] = {
 		.name = "arp_eth_ipv4",
@@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 			.src_port = vxlan_encap_conf.udp_src,
 			.dst_port = vxlan_encap_conf.udp_dst,
 		},
-		.item_vxlan.flags = 0,
+		.item_vxlan.hdr.flags = 0,
 	};
 	memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes,
 	       vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN);
@@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_
 							&ipv6_mask_tos;
 		}
 	}
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
+	memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni,
 	       RTE_DIM(vxlan_encap_conf.vni));
 	return 0;
 }
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 27c3780c4f17..116722351486 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -935,10 +935,7 @@ Item: ``VXLAN``
 
 Matches a VXLAN header (RFC 7348).
 
-- ``flags``: normally 0x08 (I flag).
-- ``rsvd0``: reserved, normally 0x000000.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``E_TAG``
@@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE``
 
 Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05).
 
-- ``flags``: normally 0x0C (I and P flags).
-- ``rsvd0``: reserved, normally 0x0000.
-- ``protocol``: protocol type.
-- ``vni``: VXLAN network identifier.
-- ``rsvd1``: reserved, normally 0x00.
+- ``hdr``:  header definition (``rte_vxlan.h``).
 - Default ``mask`` matches VNI only.
 
 Item: ``ARP_ETH_IPV4``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4782d2e680d3..df8b5bcb1b64 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -90,7 +90,6 @@ Deprecation Notices
   - ``rte_flow_item_pfcp``
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
-  - ``rte_flow_item_vxlan_gpe``
 
 * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
   Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 8f660493402c..4a107e81e955 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				break;
 			}
 
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
+			if ((vxlan_spec->hdr.rsvd0[0] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[1] != 0) ||
+			    (vxlan_spec->hdr.rsvd0[2] != 0) ||
+			    (vxlan_spec->hdr.rsvd1 != 0) ||
+			    (vxlan_spec->hdr.flags != 8)) {
 				rte_flow_error_set(error,
 						   EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
@@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 			/* Check if VNI is masked. */
 			if (vxlan_mask != NULL) {
 				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (vni_masked) {
 					rte_flow_error_set
@@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->vni =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter->tunnel_type =
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2928598ced55..80869b79c3fe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
 	 * header fields
 	 */
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, flags),
-			      ulp_deference_struct(vxlan_mask, flags),
+			      ulp_deference_struct(vxlan_spec, hdr.flags),
+			      ulp_deference_struct(vxlan_mask, hdr.flags),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd0),
-			      ulp_deference_struct(vxlan_mask, rsvd0),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd0),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd0),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, vni),
-			      ulp_deference_struct(vxlan_mask, vni),
+			      ulp_deference_struct(vxlan_spec, hdr.vni),
+			      ulp_deference_struct(vxlan_mask, hdr.vni),
 			      ULP_PRSR_ACT_DEFAULT);
 
-	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1);
+	size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1);
 	ulp_rte_prsr_fld_mask(params, &idx, size,
-			      ulp_deference_struct(vxlan_spec, rsvd1),
-			      ulp_deference_struct(vxlan_mask, rsvd1),
+			      ulp_deference_struct(vxlan_spec, hdr.rsvd1),
+			      ulp_deference_struct(vxlan_mask, hdr.rsvd1),
 			      ULP_PRSR_ACT_DEFAULT);
 
 	/* Update the hdr_bitmap with vxlan */
@@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params,
 	uint32_t size;
 
 	field = &params->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS];
-	size = sizeof(vxlan_spec->flags);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size);
+	size = sizeof(vxlan_spec->hdr.flags);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size);
 
-	size = sizeof(vxlan_spec->rsvd0);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size);
+	size = sizeof(vxlan_spec->hdr.rsvd0);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size);
 
-	size = sizeof(vxlan_spec->vni);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size);
+	size = sizeof(vxlan_spec->hdr.vni);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size);
 
-	size = sizeof(vxlan_spec->rsvd1);
-	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size);
+	size = sizeof(vxlan_spec->hdr.rsvd1);
+	field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size);
 
 	ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN);
 }
@@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	vxlan_size = sizeof(struct rte_flow_item_vxlan);
 	/* copy the vxlan details */
 	memcpy(&vxlan_spec, item->spec, vxlan_size);
-	vxlan_spec.flags = 0x08;
+	vxlan_spec.hdr.flags = 0x08;
 	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index ef1832982dee..e88f9b7e452b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
 	vxlan_mask = item->mask;
 	vxlan_spec = item->spec;
 
-	if (vxlan_mask->flags)
+	if (vxlan_mask->hdr.flags)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "Flags is not supported in VxLAN");
 
 	/* VNI must be totally masked or not. */
-	if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) &&
-	    memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN))
+	if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) &&
+	    memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
 					  "VNI must be totally masked or not in VxLAN");
-	if (vxlan_mask->vni[0]) {
+	if (vxlan_mask->hdr.vni[0]) {
 		hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1);
-		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni,
+		memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni,
 			   VNI_OR_TNI_LEN);
 	}
-	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni,
+	memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni,
 		   VNI_OR_TNI_LEN);
 	return 0;
 }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 0acbd5a061e0..2855b14fe679 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 			/* Check if VNI is masked. */
 			if (vxlan_spec && vxlan_mask) {
 				is_vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
+					!!memcmp(vxlan_mask->hdr.vni, vni_mask,
 						 RTE_DIM(vni_mask));
 				if (is_vni_masked) {
 					rte_flow_error_set(error, EINVAL,
@@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
 				}
 
 				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
+					   vxlan_spec->hdr.vni, 3);
 				filter->tenant_id =
 					rte_be_to_cpu_32(tenant_id_be);
 				filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index d84061340e6c..7cb20fa0b4f8 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 			input = &inner_input_set;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
-				if (vxlan_mask->vni[0] ||
-					vxlan_mask->vni[1] ||
-					vxlan_mask->vni[2]) {
+				if (vxlan_mask->hdr.vni[0] ||
+					vxlan_mask->hdr.vni[1] ||
+					vxlan_mask->hdr.vni[2]) {
 					list[t].h_u.tnl_hdr.vni =
-						(vxlan_spec->vni[2] << 16) |
-						(vxlan_spec->vni[1] << 8) |
-						vxlan_spec->vni[0];
+						(vxlan_spec->hdr.vni[2] << 16) |
+						(vxlan_spec->hdr.vni[1] << 8) |
+						vxlan_spec->hdr.vni[0];
 					list[t].m_u.tnl_hdr.vni =
-						(vxlan_mask->vni[2] << 16) |
-						(vxlan_mask->vni[1] << 8) |
-						vxlan_mask->vni[0];
+						(vxlan_mask->hdr.vni[2] << 16) |
+						(vxlan_mask->hdr.vni[1] << 8) |
+						vxlan_mask->hdr.vni[0];
 					*input |= ICE_INSET_VXLAN_VNI;
 					input_set_byte += 2;
 				}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index ee56d0f43d93..d20a29b9a2d6 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[6], vxlan->vni, 3);
+			rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
 			break;
 
 		default:
@@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			vxlan = item->spec;
 
-			rte_memcpy(&parser->key[0], vxlan->vni, 3);
+			rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index a11da3dc8beb..fe710b79008d 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		rule->mask.tunnel_type_mask = 1;
 
 		vxlan_mask = item->mask;
-		if (vxlan_mask->flags) {
+		if (vxlan_mask->hdr.flags) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 		/* VNI must be totally masked or not. */
-		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] ||
-			vxlan_mask->vni[2]) &&
-			((vxlan_mask->vni[0] != 0xFF) ||
-			(vxlan_mask->vni[1] != 0xFF) ||
-				(vxlan_mask->vni[2] != 0xFF))) {
+		if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
+			vxlan_mask->hdr.vni[2]) &&
+			((vxlan_mask->hdr.vni[0] != 0xFF) ||
+			(vxlan_mask->hdr.vni[1] != 0xFF) ||
+				(vxlan_mask->hdr.vni[2] != 0xFF))) {
 			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
-			RTE_DIM(vxlan_mask->vni));
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
+			RTE_DIM(vxlan_mask->hdr.vni));
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
 			vxlan_spec = item->spec;
 			rte_memcpy(((uint8_t *)
 				&rule->ixgbe_fdir.formatted.tni_vni),
-				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+				vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
 		}
 	}
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2512d6b52db9..ff08a629e2c6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
 		ret = mlx5_ethertype_to_item_type(spec, mask, true);
 		break;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol);
+		MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto);
 		ret = mlx5_nsh_proto_to_item_type(spec, mask);
 		break;
 	default:
@@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 	const struct rte_flow_item_vxlan *valid_mask;
 
@@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 	if (ret < 0)
 		return ret;
 	if (spec) {
-		if (spec->protocol)
+		if (spec->hdr.proto)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  item,
 						  "VxLAN-GPE protocol"
 						  " not supported");
-		memcpy(&id.vni[1], spec->vni, 3);
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 	}
 	if (!(item_flags & MLX5_FLOW_LAYER_OUTER))
 		return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff915183b7cc..261c60a5c33a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_vxlan nic_mask = {
-		.vni = "\xff\xff\xff",
-		.rsvd1 = 0xff,
+		.hdr.vni = "\xff\xff\xff",
+		.hdr.rsvd1 = 0xff,
 	};
 
 	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
@@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 	    ((attr->group || (attr->transfer && priv->fdb_def_rule)) &&
 	    !priv->sh->misc5_cap)) {
 		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-		size = sizeof(vxlan_m->vni);
+		size = sizeof(vxlan_m->hdr.vni);
 		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
 		for (i = 0; i < size; ++i)
-			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
+			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
 		return;
 	}
 	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
 						   misc5_v,
 						   tunnel_header_1);
-	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
-		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
-		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 	*tunnel_header_v = tunnel_v;
 	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
+		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
+			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
+			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
 		if (!tunnel_v)
 			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
+		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
+			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
 	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
 	}
 }
 
@@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 		MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3);
 	char *vni_v =
 		MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni);
-	int i, size = sizeof(vxlan_m->vni);
+	int i, size = sizeof(vxlan_m->hdr.vni);
 	uint8_t flags_m = 0xff;
 	uint8_t flags_v = 0xc;
 	uint8_t m_protocol, v_protocol;
@@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 	else if (key_type == MLX5_SET_MATCHER_HS_V)
 		vxlan_m = vxlan_v;
 	for (i = 0; i < size; ++i)
-		vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
-	if (vxlan_m->flags) {
-		flags_m = vxlan_m->flags;
-		flags_v = vxlan_v->flags;
+		vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+	if (vxlan_m->hdr.flags) {
+		flags_m = vxlan_m->hdr.flags;
+		flags_v = vxlan_v->hdr.flags;
 	}
 	MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags,
 		 flags_m & flags_v);
-	m_protocol = vxlan_m->protocol;
-	v_protocol = vxlan_v->protocol;
+	m_protocol = vxlan_m->hdr.protocol;
+	v_protocol = vxlan_v->hdr.protocol;
 	if (!m_protocol) {
 		/* Force next protocol to ensure next headers parsing. */
 		if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 1902b97ec6d4..4ef4f3044515 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
@@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
 	if (!mask)
 		mask = &rte_flow_item_vxlan_gpe_mask;
 	if (spec) {
-		memcpy(&id.vni[1], spec->vni, 3);
+		memcpy(&id.vni[1], spec->hdr.vni, 3);
 		vxlan_gpe.val.tunnel_id = id.vlan_id;
-		memcpy(&id.vni[1], mask->vni, 3);
+		memcpy(&id.vni[1], mask->hdr.vni, 3);
 		vxlan_gpe.mask.tunnel_id = id.vlan_id;
 		/* Remove unwanted bits from values. */
 		vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f098edc6eb33..fe1f5ba55f86 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vxlan *spec = NULL;
 	const struct rte_flow_item_vxlan *mask = NULL;
 	const struct rte_flow_item_vxlan supp_mask = {
-		.vni = { 0xff, 0xff, 0xff }
+		.hdr.vni = { 0xff, 0xff, 0xff }
 	};
 
 	rc = sfc_flow_parse_init(item,
@@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
 	if (spec == NULL)
 		return 0;
 
-	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni,
-					       mask->vni, item, error);
+	rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni,
+					       mask->hdr.vni, item, error);
 
 	return rc;
 }
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 710d04be13af..aab697b204c2 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
 		 * The size and offset values are relevant
 		 * for Geneve and NVGRE, too.
 		 */
-		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
-		.ofst = offsetof(struct rte_flow_item_vxlan, vni),
+		.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni),
+		.ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni),
 	},
 };
 
@@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 	 * The extra byte is 0 both in the mask and in the value.
 	 */
 	vxp = (const struct rte_flow_item_vxlan *)spec;
-	memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	vxp = (const struct rte_flow_item_vxlan *)mask;
-	memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni));
+	memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni));
 
 	rc = efx_mae_match_spec_field_set(ctx_mae->match_spec,
 					  EFX_MAE_FIELD_ENC_VNET_ID_BE,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b4f..e2364823d622 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -988,7 +988,7 @@ struct rte_flow_item_vxlan {
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
-	.hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
@@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = {
  *
  * Matches a VXLAN-GPE header.
  */
+RTE_STD_C11
 struct rte_flow_item_vxlan_gpe {
-	uint8_t flags; /**< Normally 0x0c (I and P flags). */
-	uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
-	uint8_t protocol; /**< Protocol type. */
-	uint8_t vni[3]; /**< VXLAN identifier. */
-	uint8_t rsvd1; /**< Reserved, normally 0x00. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			uint8_t flags; /**< Normally 0x0c (I and P flags). */
+			uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */
+			uint8_t protocol; /**< Protocol type. */
+			uint8_t vni[3]; /**< VXLAN identifier. */
+			uint8_t rsvd1; /**< Reserved, normally 0x00. */
+		};
+		struct rte_vxlan_gpe_hdr hdr;
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */
 #ifndef __cplusplus
 static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
-	.vni = "\xff\xff\xff",
+	.hdr.vni = "\xff\xff\xff",
 };
 #endif
 
-- 
2.34.1


* [PATCH v7 4/7] ethdev: use GTP protocol struct for flow matching
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (2 preceding siblings ...)
  2023-02-03 16:48   ` [PATCH v7 3/7] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 5/7] ethdev: use ARP " Ferruh Yigit
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The GTP header struct members are used in apps and drivers
instead of the redundant fields in the flow items.
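
For illustration only (not part of this patch), a minimal sketch of a
TEID-only GTP match using the new hdr member; the TEID value is
hypothetical:

  #include <stdint.h>
  #include <rte_flow.h>
  #include <rte_byteorder.h>

  /* TEID is carried big-endian on the wire, hence RTE_BE32. */
  static const struct rte_flow_item_gtp gtp_spec = {
  	.hdr.teid = RTE_BE32(0x12345),
  };
  static const struct rte_flow_item_gtp gtp_mask = {
  	.hdr.teid = RTE_BE32(UINT32_MAX),
  };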

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 20 ++++++++---------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 78 insertions(+), 75 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a58245239ba1..85928349eee0 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_gtp gtp_spec = {
-		.teid = RTE_BE32(TEID_VALUE),
+		.hdr.teid = RTE_BE32(TEID_VALUE),
 	};
 	static struct rte_flow_item_gtp gtp_mask = {
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b904f8c3d45c..429d9cab8217 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4137,19 +4137,19 @@ static const struct token token_list[] = {
 		.help = "GTP flags",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp,
-					v_pt_rsv_flags)),
+					hdr.gtp_hdr_info)),
 	},
 	[ITEM_GTP_MSG_TYPE] = {
 		.name = "msg_type",
 		.help = "GTP message type",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)),
 	},
 	[ITEM_GTP_TEID] = {
 		.name = "teid",
 		.help = "tunnel endpoint identifier",
 		.next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)),
 	},
 	[ITEM_GTPC] = {
 		.name = "gtpc",
@@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in)
 				goto error;
 			}
 			gtp = item->spec;
-			if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) {
+			if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) {
 				/* Only E flag should be set. */
 				fprintf(stderr,
 					"Error - GTP unsupported flags\n");
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 116722351486..c4b96b5d324b 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1068,12 +1068,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item
 are defined for a user-friendly API when creating GTP-C and GTP-U
 flow rules.
 
-- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b),
-  extension header flag (1b), sequence number flag (1b), N-PDU number
-  flag (1b).
-- ``msg_type``: message type.
-- ``msg_len``: message length.
-- ``teid``: tunnel endpoint identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches teid only.
 
 Item: ``ESP``
@@ -1239,8 +1234,7 @@ Item: ``GTP_PSC``
 
 Matches a GTP PDU extension header with type 0x85.
 
-- ``pdu_type``: PDU type.
-- ``qfi``: QoS flow identifier.
+- ``hdr``:  header definition (``rte_gtp.h``).
 - Default ``mask`` matches QFI only.
 
 Item: ``PPPOES``, ``PPPOED``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index df8b5bcb1b64..838d5854ad9b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -74,7 +74,6 @@ Deprecation Notices
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
   - ``rte_flow_item_gre``
-  - ``rte_flow_item_gtp``
   - ``rte_flow_item_icmp6``
   - ``rte_flow_item_icmp6_nd_na``
   - ``rte_flow_item_icmp6_nd_ns``
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index afcaa593eb58..47f79ecf11cc 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 			gtp = (struct rte_flow_item_gtp *)
 				((unsigned char *)udp +
 					sizeof(struct rte_udp_hdr));
-			gtp->msg_len =
+			gtp->hdr.plen =
 				rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN);
-			gtp->teid = fdir_input->flow.gtp_flow.teid;
-			gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
+			gtp->hdr.teid = fdir_input->flow.gtp_flow.teid;
+			gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01;
 
 			/* GTP-C message type is not supported. */
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X32;
 			} else {
 				udp->dst_port =
 				      rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT);
-				gtp->v_pt_rsv_flags =
+				gtp->hdr.gtp_hdr_info =
 					I40E_FDIR_GTP_VER_FLAG_0X30;
 			}
 
 			if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv4 = (struct rte_ipv4_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
@@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
 					sizeof(struct rte_ipv4_hdr);
 			} else if (cus_pctype->index ==
 				   I40E_CUSTOMIZED_GTPU_IPV6) {
-				gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
+				gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF;
 				gtp_ipv6 = (struct rte_ipv6_hdr *)
 					((unsigned char *)gtp +
 					 sizeof(struct rte_flow_item_gtp));
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 2855b14fe679..3c550733f2bb 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 			gtp_mask = item->mask;
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len ||
-				    gtp_mask->teid != UINT32_MAX) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen ||
+				    gtp_mask->hdr.teid != UINT32_MAX) {
 					rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 				}
 
 				filter->input.flow.gtp_flow.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				filter->input.flow_ext.customized_pctype = true;
 				cus_proto = item_type;
 			}
@@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len ||
-			    gtp_mask->teid != UINT32_MAX) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen ||
+			    gtp_mask->hdr.teid != UINT32_MAX) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
 			else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
 				filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
 
-			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid);
+			filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
 
 			break;
 		default:
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index a6c88cb55b88..811a10287b70 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-					gtp_mask->msg_type ||
-					gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+					gtp_mask->hdr.msg_type ||
+					gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "Invalid GTP mask");
 					return -rte_errno;
 				}
 
-				if (gtp_mask->teid == UINT32_MAX) {
+				if (gtp_mask->hdr.teid == UINT32_MAX) {
 					input_set |= IAVF_INSET_GTPU_TEID;
 					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
 				}
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 5d297afc290e..480b369af816 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			if (!(gtp_spec && gtp_mask))
 				break;
 
-			if (gtp_mask->v_pt_rsv_flags ||
-			    gtp_mask->msg_type ||
-			    gtp_mask->msg_len) {
+			if (gtp_mask->hdr.gtp_hdr_info ||
+			    gtp_mask->hdr.msg_type ||
+			    gtp_mask->hdr.plen) {
 				rte_flow_error_set(error, EINVAL,
 						   RTE_FLOW_ERROR_TYPE_ITEM,
 						   item,
@@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 				return -rte_errno;
 			}
 
-			if (gtp_mask->teid == UINT32_MAX)
+			if (gtp_mask->hdr.teid == UINT32_MAX)
 				input_set_o |= ICE_INSET_GTPU_TEID;
 
-			filter->input.gtpu_data.teid = gtp_spec->teid;
+			filter->input.gtpu_data.teid = gtp_spec->hdr.teid;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH;
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7cb20fa0b4f8..110d8895fea3 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 				return false;
 			}
 			if (gtp_spec && gtp_mask) {
-				if (gtp_mask->v_pt_rsv_flags ||
-				    gtp_mask->msg_type ||
-				    gtp_mask->msg_len) {
+				if (gtp_mask->hdr.gtp_hdr_info ||
+				    gtp_mask->hdr.msg_type ||
+				    gtp_mask->hdr.plen) {
 					rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item,
@@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[],
 					return false;
 				}
 				input = &outer_input_set;
-				if (gtp_mask->teid)
+				if (gtp_mask->hdr.teid)
 					*input |= ICE_INSET_GTPU_TEID;
 				list[t].type = ICE_GTP;
 				list[t].h_u.gtp_hdr.teid =
-					gtp_spec->teid;
+					gtp_spec->hdr.teid;
 				list[t].m_u.gtp_hdr.teid =
-					gtp_mask->teid;
+					gtp_mask->hdr.teid;
 				input_set_byte += 4;
 				t++;
 			}
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 604384a24253..fbcfe3665748 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	tcp_src_port,		v->hdr.src_port,	rte_flow_item_tcp) \
 	X(SET_BE16,	tcp_dst_port,		v->hdr.dst_port,	rte_flow_item_tcp) \
 	X(SET,		gtp_udp_port,		RTE_GTPU_UDP_PORT,	rte_flow_item_gtp) \
-	X(SET_BE32,	gtp_teid,		v->teid,		rte_flow_item_gtp) \
-	X(SET,		gtp_msg_type,		v->msg_type,		rte_flow_item_gtp) \
-	X(SET,		gtp_ext_flag,		!!v->v_pt_rsv_flags,	rte_flow_item_gtp) \
+	X(SET_BE32,	gtp_teid,		v->hdr.teid,		rte_flow_item_gtp) \
+	X(SET,		gtp_msg_type,		v->hdr.msg_type,	rte_flow_item_gtp) \
+	X(SET,		gtp_ext_flag,		!!v->hdr.gtp_hdr_info,	rte_flow_item_gtp) \
 	X(SET,		gtp_next_ext_hdr,	GTP_PDU_SC,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_pdu,	v->hdr.type,		rte_flow_item_gtp_psc) \
 	X(SET,		gtp_ext_hdr_qfi,	v->hdr.qfi,		rte_flow_item_gtp_psc) \
@@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;
 
-	if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
+	if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
 
-	if (m->teid) {
+	if (m->hdr.teid) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 		fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
 	}
 
-	if (m->v_pt_rsv_flags) {
+	if (m->hdr.gtp_hdr_info) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
@@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
 	}
 
 
-	if (m->msg_type) {
+	if (m->hdr.msg_type) {
 		if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
 			rte_errno = ENOTSUP;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 261c60a5c33a..54cd4ca7344c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 	const struct rte_flow_item_gtp *spec = item->spec;
 	const struct rte_flow_item_gtp *mask = item->mask;
 	const struct rte_flow_item_gtp nic_mask = {
-		.v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK,
-		.msg_type = 0xff,
-		.teid = RTE_BE32(0xffffffff),
+		.hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK,
+		.hdr.msg_type = 0xff,
+		.hdr.teid = RTE_BE32(0xffffffff),
 	};
 
 	if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp)
@@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_gtp_mask;
-	if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK)
+	if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
@@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
 	gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask;
 	/* GTP spec and E flag is requested to match zero. */
 	if (gtp_spec &&
-		(gtp_mask->v_pt_rsv_flags &
-		~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG))
+		(gtp_mask->hdr.gtp_hdr_info &
+		~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG))
 		return rte_flow_error_set
 			(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item,
 			 "GTP E flag must be 1 to match GTP PSC");
@@ -9318,7 +9318,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item,
 				 const uint64_t pattern_flags,
 				 uint32_t key_type)
 {
-	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, };
+	static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {{{0}}};
 	const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask;
 	const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec;
 	/* The item was validated to be on the outer side */
@@ -10356,11 +10356,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item,
 	MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m,
 		&rte_flow_item_gtp_mask);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags,
-		 gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags);
+		 gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type,
-		 gtp_v->msg_type & gtp_m->msg_type);
+		 gtp_v->hdr.msg_type & gtp_m->hdr.msg_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid,
-		 rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
+		 rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid));
 }
 
 /**
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e2364823d622..8e8925277eb3 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1139,23 +1139,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
  *
  * Matches a GTPv1 header.
  */
+RTE_STD_C11
 struct rte_flow_item_gtp {
-	/**
-	 * Version (3b), protocol type (1b), reserved (1b),
-	 * Extension header flag (1b),
-	 * Sequence number flag (1b),
-	 * N-PDU number flag (1b).
-	 */
-	uint8_t v_pt_rsv_flags;
-	uint8_t msg_type; /**< Message type. */
-	rte_be16_t msg_len; /**< Message length. */
-	rte_be32_t teid; /**< Tunnel endpoint identifier. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			/**
+			 * Version (3b), protocol type (1b), reserved (1b),
+			 * Extension header flag (1b),
+			 * Sequence number flag (1b),
+			 * N-PDU number flag (1b).
+			 */
+			uint8_t v_pt_rsv_flags;
+			uint8_t msg_type; /**< Message type. */
+			rte_be16_t msg_len; /**< Message length. */
+			rte_be32_t teid; /**< Tunnel endpoint identifier. */
+		};
+		struct rte_gtp_hdr hdr; /**< GTP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. */
 #ifndef __cplusplus
 static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = {
-	.teid = RTE_BE32(0xffffffff),
+	.hdr.teid = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.34.1


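For readers tracking the conversion, a minimal sketch of what the
compatibility union means for application code follows. It is an
illustration, not part of the patch, and assumes only the
rte_flow_item_gtp definition shown above; the TEID value is
hypothetical. Because the legacy fields live in an anonymous struct
mirroring the layout of struct rte_gtp_hdr, both spellings address
the same bytes:

  #include <rte_flow.h>
  #include <rte_byteorder.h>

  /* Legacy spelling, still accepted through the anonymous union. */
  static const struct rte_flow_item_gtp spec_old = {
          .v_pt_rsv_flags = 0x30,   /* GTPv1: version=1, PT=1 */
          .msg_type = 0xff,
          .teid = RTE_BE32(0x1234), /* hypothetical TEID */
  };

  /* Equivalent spelling through the lib/net/ header struct. */
  static const struct rte_flow_item_gtp spec_new = {
          .hdr.gtp_hdr_info = 0x30,
          .hdr.msg_type = 0xff,
          .hdr.teid = RTE_BE32(0x1234),
  };

The same layout change explains the {0, } -> {{{0}}} edit for the
VXLAN-GPE dummy header in the mlx5 hunk above: with the first member
now a union wrapping a struct, the fully braced zero initializer
avoids missing-braces warnings on some compilers.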

* [PATCH v7 5/7] ethdev: use ARP protocol struct for flow matching
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (3 preceding siblings ...)
  2023-02-03 16:48   ` [PATCH v7 4/7] ethdev: use GTP " Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 6/7] doc: fix description of L2TPV2 flow item Ferruh Yigit
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

As announced in the deprecation notice, flow item structures
should re-use the protocol header definitions from the directory lib/net/.

The protocol struct is added in an unnamed union, keeping old field names.

The ARP header struct members are used in testpmd
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/test-pmd/cmdline_flow.c          |  8 +++---
 doc/guides/prog_guide/rte_flow.rst   | 10 +-------
 doc/guides/rel_notes/deprecation.rst |  1 -
 lib/ethdev/rte_flow.h                | 37 ++++++++++++++++++----------
 4 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 429d9cab8217..275a1f9d3b5e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -4226,7 +4226,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     sha)),
+					     hdr.arp_data.arp_sha)),
 	},
 	[ITEM_ARP_ETH_IPV4_SPA] = {
 		.name = "spa",
@@ -4234,7 +4234,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     spa)),
+					     hdr.arp_data.arp_sip)),
 	},
 	[ITEM_ARP_ETH_IPV4_THA] = {
 		.name = "tha",
@@ -4242,7 +4242,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tha)),
+					     hdr.arp_data.arp_tha)),
 	},
 	[ITEM_ARP_ETH_IPV4_TPA] = {
 		.name = "tpa",
@@ -4250,7 +4250,7 @@ static const struct token token_list[] = {
 		.next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4,
-					     tpa)),
+					     hdr.arp_data.arp_tip)),
 	},
 	[ITEM_IPV6_EXT] = {
 		.name = "ipv6_ext",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c4b96b5d324b..085c93c89b3b 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1104,15 +1104,7 @@ Item: ``ARP_ETH_IPV4``
 
 Matches an ARP header for Ethernet/IPv4.
 
-- ``hdr``: hardware type, normally 1.
-- ``pro``: protocol type, normally 0x0800.
-- ``hln``: hardware address length, normally 6.
-- ``pln``: protocol address length, normally 4.
-- ``op``: opcode (1 for request, 2 for reply).
-- ``sha``: sender hardware address.
-- ``spa``: sender IPv4 address.
-- ``tha``: target hardware address.
-- ``tpa``: target IPv4 address.
+- ``hdr``: header definition (``rte_arp.h``).
 - Default ``mask`` matches SHA, SPA, THA and TPA.
 
 Item: ``IPV6_EXT``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 838d5854ad9b..6097eb5e0c5b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -69,7 +69,6 @@ Deprecation Notices
   These items are not compliant (not including struct from lib/net/):
 
   - ``rte_flow_item_ah``
-  - ``rte_flow_item_arp_eth_ipv4``
   - ``rte_flow_item_e_tag``
   - ``rte_flow_item_geneve``
   - ``rte_flow_item_geneve_opt``
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 8e8925277eb3..b8f66d668cac 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -20,6 +20,7 @@
 #include <rte_compat.h>
 #include <rte_common.h>
 #include <rte_ether.h>
+#include <rte_arp.h>
 #include <rte_icmp.h>
 #include <rte_ip.h>
 #include <rte_sctp.h>
@@ -1245,26 +1246,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = {
  *
  * Matches an ARP header for Ethernet/IPv4.
  */
+RTE_STD_C11
 struct rte_flow_item_arp_eth_ipv4 {
-	rte_be16_t hrd; /**< Hardware type, normally 1. */
-	rte_be16_t pro; /**< Protocol type, normally 0x0800. */
-	uint8_t hln; /**< Hardware address length, normally 6. */
-	uint8_t pln; /**< Protocol address length, normally 4. */
-	rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
-	struct rte_ether_addr sha; /**< Sender hardware address. */
-	rte_be32_t spa; /**< Sender IPv4 address. */
-	struct rte_ether_addr tha; /**< Target hardware address. */
-	rte_be32_t tpa; /**< Target IPv4 address. */
+	union {
+		struct {
+			/*
+			 * These are old fields kept for compatibility.
+			 * Please prefer hdr field below.
+			 */
+			rte_be16_t hrd; /**< Hardware type, normally 1. */
+			rte_be16_t pro; /**< Protocol type, normally 0x0800. */
+			uint8_t hln; /**< Hardware address length, normally 6. */
+			uint8_t pln; /**< Protocol address length, normally 4. */
+			rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */
+			struct rte_ether_addr sha; /**< Sender hardware address. */
+			rte_be32_t spa; /**< Sender IPv4 address. */
+			struct rte_ether_addr tha; /**< Target hardware address. */
+			rte_be32_t tpa; /**< Target IPv4 address. */
+		};
+		struct rte_arp_hdr hdr; /**< ARP header definition. */
+	};
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. */
 #ifndef __cplusplus
 static const struct rte_flow_item_arp_eth_ipv4
 rte_flow_item_arp_eth_ipv4_mask = {
-	.sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.spa = RTE_BE32(0xffffffff),
-	.tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.tpa = RTE_BE32(0xffffffff),
+	.hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
+	.hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX),
 };
 #endif
 
-- 
2.34.1


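As an illustration of the renamed fields from an application's point
of view, here is a minimal sketch of an ARP match written with the
new hdr-based spelling. It assumes only the definitions in this patch
plus RTE_IPV4() from rte_ip.h; the addresses are hypothetical, and an
explicit mask is supplied because the default mask covers only the
four address fields:

  #include <rte_flow.h>
  #include <rte_arp.h>
  #include <rte_ip.h>
  #include <rte_byteorder.h>

  /* Match ARP replies from a given sender IPv4 address. */
  static const struct rte_flow_item_arp_eth_ipv4 arp_spec = {
          .hdr.arp_opcode = RTE_BE16(RTE_ARP_OP_REPLY),
          .hdr.arp_data.arp_sip = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
  };
  static const struct rte_flow_item_arp_eth_ipv4 arp_mask = {
          .hdr.arp_opcode = RTE_BE16(UINT16_MAX),
          .hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX),
  };
  static const struct rte_flow_item arp_pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          {
                  .type = RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4,
                  .spec = &arp_spec,
                  .mask = &arp_mask,
          },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };

The legacy spelling (.op, .spa) would still compile through the
compatibility union, which is what lets drivers and applications
migrate one at a time. Whether a given PMD can offload an opcode
match is a separate, driver-specific question.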

* [PATCH v7 6/7] doc: fix description of L2TPV2 flow item
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (4 preceding siblings ...)
  2023-02-03 16:48   ` [PATCH v7 5/7] ethdev: use ARP " Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-03 16:48   ` [PATCH v7 7/7] net: mark all big endian types Ferruh Yigit
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Wenjun Wu, Ferruh Yigit, Jie Wang,
	Andrew Rybchenko
  Cc: David Marchand, dev, stable

From: Thomas Monjalon <thomas@monjalon.net>

The flow item structure includes the protocol header definition
from the directory lib/net/, so the guide now references that header
instead of listing each field.

The section title underlining around these items is also fixed
to match the title lengths.

Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
Cc: jie1x.wang@intel.com
---
 doc/guides/prog_guide/rte_flow.rst | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 085c93c89b3b..f532cb1675ff 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1489,22 +1489,15 @@ rte_flow_flex_item_create() routine.
   value and mask.
 
 Item: ``L2TPV2``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^
 
 Matches a L2TPv2 header.
 
-- ``flags_version``: flags(12b), version(4b).
-- ``length``: total length of the message.
-- ``tunnel_id``: identifier for the control connection.
-- ``session_id``: identifier for a session within a tunnel.
-- ``ns``: sequence number for this date or control message.
-- ``nr``: sequence number expected in the next control message to be received.
-- ``offset_size``: offset of payload data.
-- ``offset_padding``: offset padding, variable length.
+- ``hdr``: header definition (``rte_l2tpv2.h``).
 - Default ``mask`` matches flags_version only.
 
 Item: ``PPP``
-^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
 
 Matches a PPP header.
 
-- 
2.34.1



* [PATCH v7 7/7] net: mark all big endian types
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (5 preceding siblings ...)
  2023-02-03 16:48   ` [PATCH v7 6/7] doc: fix description of L2TPV2 flow item Ferruh Yigit
@ 2023-02-03 16:48   ` Ferruh Yigit
  2023-02-06  2:35   ` [PATCH v7 0/7] start cleanup of rte_flow_item_* fengchengwen
  2023-02-07 14:58   ` Thomas Monjalon
  8 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-03 16:48 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
  Cc: David Marchand, dev

From: Thomas Monjalon <thomas@monjalon.net>

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t
types for their 16-bit and 32-bit fields.
This was correct but did not convey the big-endian nature of these fields.

As already done for the other protocols defined in this directory,
all such types are now explicitly marked as big-endian fields.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b8f66d668cac..7b780f70a56f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(UINT16_MAX),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..c3cd0afb5ca8 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;  /**< sender IP address */
+	rte_be32_t            arp_sip;  /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;  /**< target IP address */
+	rte_be32_t            arp_tip;  /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware; /**< format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /**< ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;    /* length of hardware address */
-	uint8_t  arp_plen;    /* length of protocol address */
-	uint16_t arp_opcode;     /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol; /**< format of protocol address */
+	uint8_t    arp_hlen;     /**< length of hardware address */
+	uint8_t    arp_plen;     /**< length of protocol address */
+	rte_be16_t arp_opcode;   /**< ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1  /**< request to resolve address */
+#define	RTE_ARP_OP_REPLY      2  /**< response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3  /**< request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4  /**< response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8  /**< request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9  /**< response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..8da8027b43da 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -45,7 +45,7 @@ struct rte_gre_hdr {
 	uint16_t res3:5; /**< Reserved */
 	uint16_t ver:3;  /**< Version Number */
 #endif
-	uint16_t proto;  /**< Protocol Type */
+	rte_be16_t proto;  /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
 __extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
 __extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */
-- 
2.34.1


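To make the effect of the type change concrete, a minimal sketch of
filling an ARP header with the annotated types follows; it assumes
only the rte_arp.h definitions above plus RTE_ETHER_TYPE_IPV4 and
RTE_ETHER_ADDR_LEN from rte_ether.h:

  #include <rte_arp.h>
  #include <rte_ether.h>
  #include <rte_byteorder.h>

  static void arp_request_fill(struct rte_arp_hdr *arp)
  {
          /*
           * rte_be16_t fields receive explicitly converted values;
           * the annotation documents the byte order for readers and
           * for endianness-aware static analysis.
           */
          arp->arp_hardware = rte_cpu_to_be_16(RTE_ARP_HRD_ETHER);
          arp->arp_protocol = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
          arp->arp_hlen = RTE_ETHER_ADDR_LEN; /* plain uint8_t, no swap */
          arp->arp_plen = 4;                  /* IPv4 address length */
          arp->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REQUEST);
  }

No runtime behavior changes: rte_be16_t has the same representation
as uint16_t, so this is purely a documentation and tooling
improvement, matching the commit message above.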

* Re: [PATCH v7 0/7] start cleanup of rte_flow_item_*
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (6 preceding siblings ...)
  2023-02-03 16:48   ` [PATCH v7 7/7] net: mark all big endian types Ferruh Yigit
@ 2023-02-06  2:35   ` fengchengwen
  2023-02-07 14:58   ` Thomas Monjalon
  8 siblings, 0 replies; 90+ messages in thread
From: fengchengwen @ 2023-02-06  2:35 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon; +Cc: David Marchand, dev

LGTM, Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2023/2/4 0:48, Ferruh Yigit wrote:
> There was a plan to have structures from lib/net/ at the beginning
> of corresponding flow item structures.
> Unfortunately this plan has not been followed up so far.
> This series is a step to make the most used items,
> compliant with the inheritance design explained above.
> The old API is kept in anonymous union for compatibility,
> but the code in drivers and apps is updated to use the new API.
> 
...


* Re: [PATCH v7 0/7] start cleanup of rte_flow_item_*
  2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
                     ` (7 preceding siblings ...)
  2023-02-06  2:35   ` [PATCH v7 0/7] start cleanup of rte_flow_item_* fengchengwen
@ 2023-02-07 14:58   ` Thomas Monjalon
  2023-02-07 16:33     ` Ferruh Yigit
  8 siblings, 1 reply; 90+ messages in thread
From: Thomas Monjalon @ 2023-02-07 14:58 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, David Marchand, orika, andrew.rybchenko, jerinj, ajit.khaparde

03/02/2023 17:48, Ferruh Yigit:
> There was a plan to have structures from lib/net/ at the beginning
> of corresponding flow item structures.
> Unfortunately this plan has not been followed up so far.
> This series is a step to make the most used items,
> compliant with the inheritance design explained above.
> The old API is kept in anonymous union for compatibility,
> but the code in drivers and apps is updated to use the new API.

This series looks good, let's merge.
It is only a first step.
We will need to continue using more lib/net/ structures in flow offloads.

A question to answer later, after more thought:
what is the timeframe for removing the old fields,
or do we prefer keeping the old flow item fields forever?





* Re: [PATCH v7 0/7] start cleanup of rte_flow_item_*
  2023-02-07 14:58   ` Thomas Monjalon
@ 2023-02-07 16:33     ` Ferruh Yigit
  0 siblings, 0 replies; 90+ messages in thread
From: Ferruh Yigit @ 2023-02-07 16:33 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, David Marchand, orika, andrew.rybchenko, jerinj, ajit.khaparde

On 2/7/2023 2:58 PM, Thomas Monjalon wrote:
> 03/02/2023 17:48, Ferruh Yigit:
>> There was a plan to have structures from lib/net/ at the beginning
>> of corresponding flow item structures.
>> Unfortunately this plan has not been followed up so far.
>> This series is a step to make the most used items,
>> compliant with the inheritance design explained above.
>> The old API is kept in anonymous union for compatibility,
>> but the code in drivers and apps is updated to use the new API.
> 
> This series looks good, let's merge.

Moving Chengwen's ack too.

> LGTM, Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>
>

Series applied to dpdk-next-net/main, thanks.



> It is only a first step.
> We will need to continue using more lib/net/ structures in flow offloads.
> 

Ack


Thread overview: 90+ messages
2022-10-25 21:44 [PATCH 0/8] start cleanup of rte_flow_item_* Thomas Monjalon
2022-10-25 21:44 ` [PATCH 1/8] ethdev: use Ethernet protocol struct for flow matching Thomas Monjalon
2022-10-25 21:44 ` [PATCH 2/8] net: add smaller fields for VXLAN Thomas Monjalon
2022-10-25 21:44 ` [PATCH 3/8] ethdev: use VXLAN protocol struct for flow matching Thomas Monjalon
2022-10-25 21:44 ` [PATCH 4/8] ethdev: use GRE " Thomas Monjalon
2022-10-26  8:45   ` David Marchand
2023-01-20 17:21     ` Ferruh Yigit
2022-10-25 21:44 ` [PATCH 5/8] ethdev: use GTP " Thomas Monjalon
2022-10-25 21:44 ` [PATCH 6/8] ethdev: use ARP " Thomas Monjalon
2022-10-25 21:44 ` [PATCH 7/8] doc: fix description of L2TPV2 flow item Thomas Monjalon
2022-10-25 21:44 ` [PATCH 8/8] net: mark all big endian types Thomas Monjalon
2022-10-26  8:41   ` David Marchand
2023-01-20 17:18 ` [PATCH v2 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
2023-01-20 17:18   ` [PATCH v2 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-01-20 17:18   ` [PATCH v2 2/8] net: add smaller fields for VXLAN Ferruh Yigit
2023-01-20 17:18   ` [PATCH v2 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-01-20 17:18   ` [PATCH v2 4/8] ethdev: use GRE " Ferruh Yigit
2023-01-20 17:18   ` [PATCH v2 5/8] ethdev: use GTP " Ferruh Yigit
2023-01-20 17:19   ` [PATCH v2 6/8] ethdev: use ARP " Ferruh Yigit
2023-01-20 17:19   ` [PATCH v2 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-01-20 17:19   ` [PATCH v2 8/8] net: mark all big endian types Ferruh Yigit
2023-01-22 10:52   ` [PATCH v2 0/8] start cleanup of rte_flow_item_* David Marchand
2023-01-24  9:07     ` Ferruh Yigit
2023-01-24  9:02 ` [PATCH v3 " Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 2/8] net: add smaller fields for VXLAN Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 4/8] ethdev: use GRE " Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 5/8] ethdev: use GTP " Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 6/8] ethdev: use ARP " Ferruh Yigit
2023-01-24  9:02   ` [PATCH v3 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-01-24  9:03   ` [PATCH v3 8/8] net: mark all big endian types Ferruh Yigit
2023-01-26 13:17 ` [PATCH v4 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 2/8] net: add smaller fields for VXLAN Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 4/8] ethdev: use GRE " Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 5/8] ethdev: use GTP " Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 6/8] ethdev: use ARP " Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-01-26 13:17   ` [PATCH v4 8/8] net: mark all big endian types Ferruh Yigit
2023-01-26 16:18 ` [PATCH v5 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
2023-01-26 16:18   ` [PATCH v5 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-01-27 14:33     ` Niklas Söderlund
2023-02-01 17:34     ` Ori Kam
2023-02-02  9:51     ` Andrew Rybchenko
2023-01-26 16:18   ` [PATCH v5 2/8] net: add smaller fields for VXLAN Ferruh Yigit
2023-01-26 16:18   ` [PATCH v5 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-02-01 17:41     ` Ori Kam
2023-02-02  9:52     ` Andrew Rybchenko
2023-01-26 16:19   ` [PATCH v5 4/8] ethdev: use GRE " Ferruh Yigit
2023-01-27 14:34     ` Niklas Söderlund
2023-02-01 17:44     ` Ori Kam
2023-02-02  9:53     ` Andrew Rybchenko
2023-01-26 16:19   ` [PATCH v5 5/8] ethdev: use GTP " Ferruh Yigit
2023-02-02  9:54     ` Andrew Rybchenko
2023-01-26 16:19   ` [PATCH v5 6/8] ethdev: use ARP " Ferruh Yigit
2023-02-01 17:46     ` Ori Kam
2023-02-02  9:55     ` Andrew Rybchenko
2023-01-26 16:19   ` [PATCH v5 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-02-02  9:56     ` Andrew Rybchenko
2023-01-26 16:19   ` [PATCH v5 8/8] net: mark all big endian types Ferruh Yigit
2023-02-02 10:01     ` Andrew Rybchenko
2023-02-02 11:11       ` Ferruh Yigit
2023-02-02 12:44 ` [PATCH v6 0/8] start cleanup of rte_flow_item_* Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 1/8] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 2/8] net: add smaller fields for VXLAN Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 3/8] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 4/8] ethdev: use GRE " Ferruh Yigit
2023-02-02 17:16     ` Thomas Monjalon
2023-02-03 15:02       ` Ferruh Yigit
2023-02-03 15:12         ` Thomas Monjalon
2023-02-03 15:16           ` Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 5/8] ethdev: use GTP " Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 6/8] ethdev: use ARP " Ferruh Yigit
2023-02-02 12:44   ` [PATCH v6 7/8] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-02-02 12:45   ` [PATCH v6 8/8] net: mark all big endian types Ferruh Yigit
2023-02-02 17:20     ` Thomas Monjalon
2023-02-03 15:03       ` Ferruh Yigit
2023-02-03 16:48 ` [PATCH v7 0/7] start cleanup of rte_flow_item_* Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 2/7] net: add smaller fields for VXLAN Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 3/7] ethdev: use VXLAN protocol struct for flow matching Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 4/7] ethdev: use GTP " Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 5/7] ethdev: use ARP " Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 6/7] doc: fix description of L2TPV2 flow item Ferruh Yigit
2023-02-03 16:48   ` [PATCH v7 7/7] net: mark all big endian types Ferruh Yigit
2023-02-06  2:35   ` [PATCH v7 0/7] start cleanup of rte_flow_item_* fengchengwen
2023-02-07 14:58   ` Thomas Monjalon
2023-02-07 16:33     ` Ferruh Yigit
