* [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11
@ 2021-08-03  8:37 Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 01/22] net/ice: support RSS hash for IP fragment Wenjun Wu
                   ` (22 more replies)
  0 siblings, 23 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:37 UTC (permalink / raw)
  To: dev

The patches below are backports of features from DPDK 21.02 and DPDK 21.05.
They are not intended for the LTS upstream; they are provided only for
customers to cherry-pick.

Features included:
1. Support RSS hash for IP fragments.
2. Enable the QinQ filter for the switch (see the usage sketch below).
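
As a rough, hypothetical usage sketch (not part of the series), once these
patches are applied an application could request an IPv4 QinQ switch filter
through the standard rte_flow API. The port, VLAN IDs, queue index and the
helper name create_qinq_filter() below are invented for illustration; error
handling is left to the caller.

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: match outer VLAN 100 + inner VLAN 200 over IPv4, steer to queue 3. */
static struct rte_flow *
create_qinq_filter(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_vlan outer = { .tci = RTE_BE16(100) };
	struct rte_flow_item_vlan inner = { .tci = RTE_BE16(200) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &outer },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &inner },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}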

Haiyue Wang (4):
  net/ice/base: do not set VLAN mode in DCF mode
  net/ice: fix VLAN strip for double VLAN
  net/ice: fix VLAN 0 adding based on VLAN mode
  net/ice: update QinQ switch filter handling

Junfeng Guo (1):
  net/ice: enable QinQ filter for switch

Qi Zhang (13):
  net/ice/base: align add VSI and update VSI AQ command buffer
  net/ice/base: add interface to support configuring VLAN mode
  net/ice/base: fix outer VLAN related macro
  net/ice/base: add VLAN TPID for VLAN filters
  net/ice/base: support checking double VLAN mode
  net/ice/base: support configuring device in double VLAN mode
  net/ice/base: update boost TCAM for DVM
  net/ice/base: change protocol ID for VLAN in DVM
  net/ice/base: refactor post DDP download VLAN mode config
  net/ice/base: log if DDP/FW do not support QinQ
  net/ice/base: add inner VLAN protocol type for QinQ filter
  net/ice/base: fix QinQ PPPoE dummy packet selection
  net/ice/base: add priority check of matching recipe

Ting Xu (1):
  net/ice/base: fix wrong ptype bitmap for IP fragment

Wenjun Wu (1):
  net/ice: support RSS hash for IP fragment

Yuying Zhang (2):
  net/ice/base: add ethertype offset for QinQ dummy packet
  net/ice: support flow priority for DCF switch filter

 drivers/net/ice/base/ice_adminq_cmd.h    | 268 ++++++++-----
 drivers/net/ice/base/ice_bitops.h        |  45 +++
 drivers/net/ice/base/ice_common.c        |  38 ++
 drivers/net/ice/base/ice_common.h        |   4 +
 drivers/net/ice/base/ice_flex_pipe.c     | 302 +++++++++++++--
 drivers/net/ice/base/ice_flex_pipe.h     |  12 +
 drivers/net/ice/base/ice_flex_type.h     |  39 ++
 drivers/net/ice/base/ice_flow.c          |  87 ++++-
 drivers/net/ice/base/ice_flow.h          |   5 +-
 drivers/net/ice/base/ice_protocol_type.h |   1 +
 drivers/net/ice/base/ice_switch.c        | 133 ++++++-
 drivers/net/ice/base/ice_switch.h        |  15 +
 drivers/net/ice/base/ice_type.h          |   4 +
 drivers/net/ice/base/ice_vlan_mode.c     | 451 ++++++++++++++++++++++
 drivers/net/ice/base/ice_vlan_mode.h     |  16 +
 drivers/net/ice/base/meson.build         |   1 +
 drivers/net/ice/ice_acl_filter.c         |   1 +
 drivers/net/ice/ice_ethdev.c             | 455 +++++++++++++----------
 drivers/net/ice/ice_ethdev.h             |  10 +-
 drivers/net/ice/ice_fdir_filter.c        |   1 +
 drivers/net/ice/ice_generic_flow.c       |  51 ++-
 drivers/net/ice/ice_generic_flow.h       |   9 +
 drivers/net/ice/ice_hash.c               |  39 +-
 drivers/net/ice/ice_switch_filter.c      | 128 ++++++-
 24 files changed, 1714 insertions(+), 401 deletions(-)
 create mode 100644 drivers/net/ice/base/ice_vlan_mode.c
 create mode 100644 drivers/net/ice/base/ice_vlan_mode.h

-- 
2.25.1



* [dpdk-dev] [PATCH 01/22] net/ice: support RSS hash for IP fragment
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
@ 2021-08-03  8:37 ` Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 02/22] net/ice/base: align add VSI and update VSI AQ command buffer Wenjun Wu
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:37 UTC (permalink / raw)
  To: dev

[ upstream commit f1ea76eb63944a65e9e0bbc32244bc7c8b4fbd1d ]
[ upstream commit 664b8eb745b9b6249231cea2f2bc6ff4d4b6bc40 ]
[ upstream commit 8434528175614f4cc8ab25fd28560848d8999605 ]

New pattern and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.

This patch is not intended for the LTS upstream; it is provided only for
customers to cherry-pick.
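
As a hypothetical usage sketch (not part of the patch), with this backport an
application could request the new fragment hash through the standard rte_flow
RSS action. The helper name create_frag_ipv4_rss() and the port are invented
for illustration; error handling is left to the caller.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: hash fragmented IPv4 traffic across all configured Rx queues. */
static struct rte_flow *
create_frag_ipv4_rss(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_rss rss = {
		.types = ETH_RSS_FRAG_IPV4,	/* src/dst address plus packet ID */
		.queue_num = 0,			/* use all configured queues */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}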

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_flow.c    | 51 +++++++++++++++++++++++++++++-
 drivers/net/ice/base/ice_flow.h    |  5 ++-
 drivers/net/ice/ice_generic_flow.c | 25 +++++++++++++++
 drivers/net/ice/ice_generic_flow.h |  7 ++++
 drivers/net/ice/ice_hash.c         | 37 ++++++++++++++++++++--
 5 files changed, 120 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index c75f58659c..049e2f0c26 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -13,6 +13,8 @@
 #define ICE_FLOW_FLD_SZ_IPV6_PRE32_ADDR	4
 #define ICE_FLOW_FLD_SZ_IPV6_PRE48_ADDR	6
 #define ICE_FLOW_FLD_SZ_IPV6_PRE64_ADDR	8
+#define ICE_FLOW_FLD_SZ_IPV4_ID		2
+#define ICE_FLOW_FLD_SZ_IPV6_ID		4
 #define ICE_FLOW_FLD_SZ_IP_DSCP		1
 #define ICE_FLOW_FLD_SZ_IP_TTL		1
 #define ICE_FLOW_FLD_SZ_IP_PROT		1
@@ -94,6 +96,12 @@ struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
 	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 8, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV6_DA */
 	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 24, ICE_FLOW_FLD_SZ_IPV6_ADDR),
+	/* ICE_FLOW_FIELD_IDX_IPV4_FRAG */
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV_FRAG, 4,
+			  ICE_FLOW_FLD_SZ_IPV4_ID),
+	/* ICE_FLOW_FIELD_IDX_IPV6_FRAG */
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV_FRAG, 4,
+			  ICE_FLOW_FLD_SZ_IPV6_ID),
 	/* ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA */
 	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 8,
 			  ICE_FLOW_FLD_SZ_IPV6_PRE32_ADDR),
@@ -468,6 +476,28 @@ static const u32 ice_ptypes_gtpc_tid[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
+static const u32 ice_ptypes_ipv4_frag[] = {
+	0x00400000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
+static const u32 ice_ptypes_ipv6_frag[] = {
+	0x00000000, 0x00000000, 0x01000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+};
+
 /* Packet types for GTPU */
 static const struct ice_ptype_attributes ice_attr_gtpu_session[] = {
 	{ ICE_MAC_IPV4_GTPU_IPV4_FRAG,	  ICE_PTYPE_ATTR_GTP_SESSION },
@@ -851,6 +881,16 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				(const ice_bitmap_t *)ice_ptypes_ipv6_ofos_all;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
+		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV4) &&
+			   (hdrs & ICE_FLOW_SEG_HDR_IPV_FRAG)) {
+			src = (const ice_bitmap_t *)ice_ptypes_ipv4_frag;
+			ice_and_bitmap(params->ptypes, params->ptypes, src,
+				       ICE_FLOW_PTYPE_MAX);
+		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV6) &&
+			   (hdrs & ICE_FLOW_SEG_HDR_IPV_FRAG)) {
+			src = (const ice_bitmap_t *)ice_ptypes_ipv6_frag;
+			ice_and_bitmap(params->ptypes, params->ptypes, src,
+				       ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV4) &&
 			   !(hdrs & ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER)) {
 			src = !i ? (const ice_bitmap_t *)ice_ptypes_ipv4_ofos_no_l4 :
@@ -1121,6 +1161,9 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	case ICE_FLOW_FIELD_IDX_IPV4_DA:
 		prot_id = seg == 0 ? ICE_PROT_IPV4_OF_OR_S : ICE_PROT_IPV4_IL;
 		break;
+	case ICE_FLOW_FIELD_IDX_IPV4_ID:
+		prot_id = ICE_PROT_IPV4_OF_OR_S;
+		break;
 	case ICE_FLOW_FIELD_IDX_IPV6_SA:
 	case ICE_FLOW_FIELD_IDX_IPV6_DA:
 	case ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA:
@@ -1131,6 +1174,9 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	case ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA:
 		prot_id = seg == 0 ? ICE_PROT_IPV6_OF_OR_S : ICE_PROT_IPV6_IL;
 		break;
+	case ICE_FLOW_FIELD_IDX_IPV6_ID:
+		prot_id = ICE_PROT_IPV6_FRAG;
+		break;
 	case ICE_FLOW_FIELD_IDX_TCP_SRC_PORT:
 	case ICE_FLOW_FIELD_IDX_TCP_DST_PORT:
 	case ICE_FLOW_FIELD_IDX_TCP_FLAGS:
@@ -3278,13 +3324,16 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt,
 	/* set outer most header */
 	if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV4)
 		segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV4 |
+						   ICE_FLOW_SEG_HDR_IPV_FRAG |
 						   ICE_FLOW_SEG_HDR_IPV_OTHER;
 	else if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV6)
 		segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV6 |
+						   ICE_FLOW_SEG_HDR_IPV_FRAG |
 						   ICE_FLOW_SEG_HDR_IPV_OTHER;
 
 	if (seg->hdrs & ~ICE_FLOW_RSS_SEG_HDR_VAL_MASKS &
-	    ~ICE_FLOW_RSS_HDRS_INNER_MASK & ~ICE_FLOW_SEG_HDR_IPV_OTHER)
+	    ~ICE_FLOW_RSS_HDRS_INNER_MASK & ~ICE_FLOW_SEG_HDR_IPV_OTHER &
+	    ~ICE_FLOW_SEG_HDR_IPV_FRAG)
 		return ICE_ERR_PARAM;
 
 	val = (u64)(seg->hdrs & ICE_FLOW_RSS_SEG_HDR_L3_MASKS);
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 2a9ae66454..b503417b70 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -182,7 +182,8 @@ enum ice_flow_seg_hdr {
 	/* The following is an additive bit for ICE_FLOW_SEG_HDR_IPV4 and
 	 * ICE_FLOW_SEG_HDR_IPV6 which include the IPV4 other PTYPEs
 	 */
-	ICE_FLOW_SEG_HDR_IPV_OTHER	= 0x20000000,
+	ICE_FLOW_SEG_HDR_IPV_FRAG	= 0x20000000,
+	ICE_FLOW_SEG_HDR_IPV_OTHER	= 0x40000000,
 };
 
 /* These segements all have the same PTYPES, but are otherwise distinguished by
@@ -219,6 +220,8 @@ enum ice_flow_field {
 	ICE_FLOW_FIELD_IDX_IPV4_DA,
 	ICE_FLOW_FIELD_IDX_IPV6_SA,
 	ICE_FLOW_FIELD_IDX_IPV6_DA,
+	ICE_FLOW_FIELD_IDX_IPV4_ID,
+	ICE_FLOW_FIELD_IDX_IPV6_ID,
 	ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA,
 	ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA,
 	ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA,
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 1429cbc3b6..bc4e0a5704 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -212,6 +212,31 @@ enum rte_flow_item_type pattern_eth_qinq_ipv6[] = {
 	RTE_FLOW_ITEM_TYPE_IPV6,
 	RTE_FLOW_ITEM_TYPE_END,
 };
+
+enum rte_flow_item_type pattern_eth_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type pattern_eth_vlan_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type pattern_eth_qinq_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 enum rte_flow_item_type pattern_eth_ipv6_udp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 434d2f425d..eb0368e280 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -84,6 +84,8 @@
 	(ICE_PROT_IPV4_OUTER | ICE_IP_PROTO)
 #define ICE_INSET_IPV4_TTL \
 	(ICE_PROT_IPV4_OUTER | ICE_IP_TTL)
+#define ICE_INSET_IPV4_PKID \
+	(ICE_PROT_IPV4 | ICE_IP_PK_ID)
 #define ICE_INSET_IPV6_SRC \
 	(ICE_PROT_IPV6_OUTER | ICE_IP_SRC)
 #define ICE_INSET_IPV6_DST \
@@ -94,6 +96,8 @@
 	(ICE_PROT_IPV6_OUTER | ICE_IP_TTL)
 #define ICE_INSET_IPV6_TC \
 	(ICE_PROT_IPV6_OUTER | ICE_IP_TOS)
+#define ICE_INSET_IPV6_PKID \
+	(ICE_PROT_IPV6 | ICE_IP_PK_ID)
 
 #define ICE_INSET_TCP_SRC_PORT \
 	(ICE_PROT_TCP_OUTER | ICE_SPORT)
@@ -236,6 +240,9 @@ extern enum rte_flow_item_type pattern_eth_qinq_ipv4_icmp[];
 extern enum rte_flow_item_type pattern_eth_ipv6[];
 extern enum rte_flow_item_type pattern_eth_vlan_ipv6[];
 extern enum rte_flow_item_type pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type pattern_eth_qinq_ipv6_frag_ext[];
 extern enum rte_flow_item_type pattern_eth_ipv6_udp[];
 extern enum rte_flow_item_type pattern_eth_vlan_ipv6_udp[];
 extern enum rte_flow_item_type pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 1bb7d2c7c6..2b0a479c7e 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -83,7 +83,7 @@ struct rss_type_match_hdr hint_empty = {
 	ICE_FLOW_SEG_HDR_NONE,	0};
 struct rss_type_match_hdr hint_eth_ipv4 = {
 	ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER,
-	ETH_RSS_ETH | ETH_RSS_IPV4};
+	ETH_RSS_ETH | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4};
 struct rss_type_match_hdr hint_eth_ipv4_udp = {
 	ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER |
 	ICE_FLOW_SEG_HDR_UDP,
@@ -227,7 +227,7 @@ struct rss_type_match_hdr hint_eth_ipv4_pfcp = {
 struct rss_type_match_hdr hint_eth_vlan_ipv4 = {
 	ICE_FLOW_SEG_HDR_VLAN | ICE_FLOW_SEG_HDR_IPV4 |
 	ICE_FLOW_SEG_HDR_IPV_OTHER,
-	ETH_RSS_ETH | ETH_RSS_IPV4 | ETH_RSS_C_VLAN};
+	ETH_RSS_ETH | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_C_VLAN};
 struct rss_type_match_hdr hint_eth_vlan_ipv4_udp = {
 	ICE_FLOW_SEG_HDR_VLAN | ICE_FLOW_SEG_HDR_IPV4 |
 	ICE_FLOW_SEG_HDR_IPV_OTHER | ICE_FLOW_SEG_HDR_UDP,
@@ -246,6 +246,10 @@ struct rss_type_match_hdr hint_eth_vlan_ipv4_sctp = {
 struct rss_type_match_hdr hint_eth_ipv6 = {
 	ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER,
 	ETH_RSS_ETH | ETH_RSS_IPV6};
+struct rss_type_match_hdr hint_eth_ipv6_frag_ext = {
+	ICE_FLOW_SEG_HDR_ETH | ICE_FLOW_SEG_HDR_IPV6 |
+	ICE_FLOW_SEG_HDR_IPV_FRAG,
+	ETH_RSS_ETH | ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6};
 struct rss_type_match_hdr hint_eth_ipv6_udp = {
 	ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER |
 	ICE_FLOW_SEG_HDR_UDP,
@@ -282,6 +286,10 @@ struct rss_type_match_hdr hint_eth_vlan_ipv6 = {
 	ICE_FLOW_SEG_HDR_VLAN | ICE_FLOW_SEG_HDR_IPV6 |
 	ICE_FLOW_SEG_HDR_IPV_OTHER,
 	ETH_RSS_ETH | ETH_RSS_IPV6 | ETH_RSS_C_VLAN};
+struct rss_type_match_hdr hint_eth_vlan_ipv6_frag_ext = {
+	ICE_FLOW_SEG_HDR_VLAN | ICE_FLOW_SEG_HDR_IPV6 |
+	ICE_FLOW_SEG_HDR_IPV_FRAG,
+	ETH_RSS_ETH | ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_C_VLAN};
 struct rss_type_match_hdr hint_eth_vlan_ipv6_udp = {
 	ICE_FLOW_SEG_HDR_VLAN | ICE_FLOW_SEG_HDR_IPV6 |
 	ICE_FLOW_SEG_HDR_IPV_OTHER | ICE_FLOW_SEG_HDR_UDP,
@@ -415,6 +423,8 @@ static struct ice_pattern_match_item ice_hash_pattern_list_comms[] = {
 		&hint_eth_vlan_ipv4_sctp},
 	{pattern_eth_ipv6,		    ICE_INSET_NONE,
 		&hint_eth_ipv6},
+	{pattern_eth_ipv6_frag_ext,	    ICE_INSET_NONE,
+		&hint_eth_ipv6_frag_ext},
 	{pattern_eth_ipv6_udp,		    ICE_INSET_NONE,
 		&hint_eth_ipv6_udp},
 	{pattern_eth_ipv6_tcp,		    ICE_INSET_NONE,
@@ -433,6 +443,8 @@ static struct ice_pattern_match_item ice_hash_pattern_list_comms[] = {
 		&hint_eth_ipv6_pfcp},
 	{pattern_eth_vlan_ipv6,		    ICE_INSET_NONE,
 		&hint_eth_vlan_ipv6},
+	{pattern_eth_vlan_ipv6_frag_ext,    ICE_INSET_NONE,
+		&hint_eth_vlan_ipv6_frag_ext},
 	{pattern_eth_vlan_ipv6_udp,	    ICE_INSET_NONE,
 		&hint_eth_vlan_ipv6_udp},
 	{pattern_eth_vlan_ipv6_tcp,	    ICE_INSET_NONE,
@@ -492,6 +504,9 @@ struct ice_hash_match_type ice_hash_type_list[] = {
 	{ETH_RSS_IPV4 | ETH_RSS_L3_DST_ONLY,
 		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA)},
 	{ETH_RSS_IPV4, ICE_FLOW_HASH_IPV4},
+	{ETH_RSS_FRAG_IPV4,
+		ICE_FLOW_HASH_IPV4 |
+		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID)},
 	{ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY | ETH_RSS_L4_SRC_ONLY,
 		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA) |
 		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT) |
@@ -591,6 +606,9 @@ struct ice_hash_match_type ice_hash_type_list[] = {
 	{ETH_RSS_IPV6 | ETH_RSS_L3_DST_ONLY,
 		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA)},
 	{ETH_RSS_IPV6, ICE_FLOW_HASH_IPV6},
+	{ETH_RSS_FRAG_IPV6,
+		ICE_FLOW_HASH_IPV6 |
+		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID)},
 	{ETH_RSS_IPV6_PRE32 | ETH_RSS_L3_SRC_ONLY,
 		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA)},
 	{ETH_RSS_IPV6_PRE32 | ETH_RSS_L3_DST_ONLY,
@@ -1087,10 +1105,12 @@ ice_hash_parse_action(struct ice_pattern_match_item *pattern_match_item,
 					  RTE_ETH_RSS_L3_PRE64;
 
 			rss_attr_symm = ETH_RSS_IPV4 |
+					ETH_RSS_FRAG_IPV4 |
 					ETH_RSS_NONFRAG_IPV4_UDP |
 					ETH_RSS_NONFRAG_IPV4_TCP |
 					ETH_RSS_NONFRAG_IPV4_SCTP |
 					ETH_RSS_IPV6 |
+					ETH_RSS_FRAG_IPV6 |
 					ETH_RSS_NONFRAG_IPV6_UDP |
 					ETH_RSS_NONFRAG_IPV6_TCP |
 					ETH_RSS_NONFRAG_IPV6_SCTP;
@@ -1144,6 +1164,14 @@ ice_hash_parse_action(struct ice_pattern_match_item *pattern_match_item,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"Not supported flow");
 			}
+			/* update hash field for ip fragment */
+			if (rss_type & ETH_RSS_FRAG_IPV4) {
+				hash_meta->pkt_hdr |= ICE_FLOW_SEG_HDR_IPV_FRAG;
+				hash_meta->pkt_hdr &=
+				~(ICE_FLOW_SEG_HDR_IPV_OTHER);
+				hash_meta->hash_flds |=
+				BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
+			}
 
 			/* update hash field for eth-non-ip. */
 			if (rss_type & ETH_RSS_ETH) {
@@ -1293,7 +1321,10 @@ ice_hash_create(struct ice_adapter *ad,
 		filter_ptr->rss_cfg.hash.symm =
 			(hash_function ==
 				RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ);
-		filter_ptr->rss_cfg.hash.hdr_type = ICE_RSS_ANY_HEADERS;
+		if (headermask & ICE_FLOW_SEG_HDR_IPV_FRAG)
+			filter_ptr->rss_cfg.hash.hdr_type = ICE_RSS_OUTER_HEADERS;
+		else
+			filter_ptr->rss_cfg.hash.hdr_type = ICE_RSS_ANY_HEADERS;
 
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx,
 					   &filter_ptr->rss_cfg.hash);
-- 
2.25.1



* [dpdk-dev] [PATCH 02/22] net/ice/base: align add VSI and update VSI AQ command buffer
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 01/22] net/ice: support RSS hash for IP fragment Wenjun Wu
@ 2021-08-03  8:37 ` Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 03/22] net/ice/base: add interface to support configuring VLAN mode Wenjun Wu
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:37 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 9ea028123a0bef9f6bbf5dd1a5250b9bfa63c1ea ]

Align the buffers of the following admin commands to their new
definitions:
* 0x210 = add_vsi
* 0x211 = update_vsi

Signed-off-by: Shay Amir <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 209 +++++++++++++-------------
 drivers/net/ice/ice_ethdev.c          |  88 +++++------
 2 files changed, 152 insertions(+), 145 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index f715fb0910..91d360be62 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -411,144 +411,151 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
 	u8 sw_flags2;
 #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
-#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
-				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
 #define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
 #define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
 	u8 veb_stat_id;
 #define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
-#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M		(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
 #define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
 	/* security section */
 	u8 sec_flags;
 #define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
 #define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
-#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
-#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S		4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M		(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
 #define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
 	u8 sec_reserved;
 	/* VLAN section */
-	__le16 pvid; /* VLANS include priority bits */
-	u8 pvlan_reserved[2];
-	u8 vlan_flags;
-#define ICE_AQ_VSI_VLAN_MODE_S	0
-#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
-#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
-#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
-#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
-#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
-#define ICE_AQ_VSI_VLAN_EMOD_S		3
-#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
-#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
-#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
-#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
-#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
-	u8 pvlan_reserved2[3];
+	__le16 port_based_inner_vlan; /* VLANS include priority bits */
+	u8 inner_vlan_reserved[2];
+	u8 inner_vlan_flags;
+#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_S		0
+#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_M		(0x3 << ICE_AQ_VSI_INNER_VLAN_TX_MODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED	0x1
+#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTTAGGED	0x2
+#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL	0x3
+#define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_S		3
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_M		(0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH	(0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP	(0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR		(0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING	(0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_INNER_VLAN_BLOCK_TX_DESC	BIT(5)
+	u8 inner_vlan_reserved2[3];
 	/* ingress egress up sections */
 	__le32 ingress_table; /* bitmap, 3 bits per up */
-#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
-#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
-#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
-#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
-#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
-#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
-#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
-#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
-#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
-#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
-#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
-#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
-#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
-#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
-#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
-#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+#define ICE_AQ_VSI_UP_TABLE_UP0_S		0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S		3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S		6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S		9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S		12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S		15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S		18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S		21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M		(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
 	__le32 egress_table;   /* same defines as for ingress table */
 	/* outer tags section */
-	__le16 outer_tag;
-	u8 outer_tag_flags;
-#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
-#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
-#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
-#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
-#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
-#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
-#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
-#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
-#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
-#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
-#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
-#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
-#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
-	u8 outer_tag_reserved;
+	__le16 port_based_outer_vlan;
+	u8 outer_vlan_flags;
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_S		0
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_M		(0x3 << ICE_AQ_VSI_OUTER_VLAN_EMODE_S)
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH	0x0
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_UP	0x1
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW	0x2
+#define ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING	0x3
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S		2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M		(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE		0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG		0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100		0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100		0x3
+#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT		BIT(4)
+#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST		BIT(4)
+#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S			5
+#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M			(0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S)
+#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED	0x1
+#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTTAGGED	0x2
+#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL		0x3
+#define ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC		BIT(7)
+	u8 outer_vlan_reserved;
 	/* queue mapping section */
 	__le16 mapping_flags;
-#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
-#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+#define ICE_AQ_VSI_Q_MAP_CONTIG			0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG		BIT(0)
 	__le16 q_mapping[16];
-#define ICE_AQ_VSI_Q_S		0
-#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+#define ICE_AQ_VSI_Q_S				0
+#define ICE_AQ_VSI_Q_M				(0x7FF << ICE_AQ_VSI_Q_S)
 	__le16 tc_mapping[8];
-#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
-#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
-#define ICE_AQ_VSI_TC_Q_NUM_S		11
-#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+#define ICE_AQ_VSI_TC_Q_OFFSET_S		0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M		(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S			11
+#define ICE_AQ_VSI_TC_Q_NUM_M			(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
 	/* queueing option section */
 	u8 q_opt_rss;
-#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
-#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
-#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
-#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
-#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
-#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
-#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
-#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S		0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M		(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI		0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF		0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL		0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S		2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M		(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S		6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M		(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ		(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ		(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR		(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH		(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
 	u8 q_opt_tc;
-#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
-#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
-#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S		0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M		(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR		BIT(7)
 	u8 q_opt_flags;
-#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN		BIT(0)
 	u8 q_opt_reserved[3];
 	/* outer up section */
 	__le32 outer_up_table; /* same structure and defines as ingress tbl */
 	/* ACL section */
 	__le16 acl_def_act;
-#define ICE_AQ_VSI_ACL_DEF_RX_PROF_S	0
-#define ICE_AQ_VSI_ACL_DEF_RX_PROF_M	(0xF << ICE_AQ_VSI_ACL_DEF_RX_PROF_S)
-#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_S	4
-#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_M	(0xF << ICE_AQ_VSI_ACL_DEF_RX_TABLE_S)
-#define ICE_AQ_VSI_ACL_DEF_TX_PROF_S	8
-#define ICE_AQ_VSI_ACL_DEF_TX_PROF_M	(0xF << ICE_AQ_VSI_ACL_DEF_TX_PROF_S)
-#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_S	12
-#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_M	(0xF << ICE_AQ_VSI_ACL_DEF_TX_TABLE_S)
+#define ICE_AQ_VSI_ACL_DEF_RX_PROF_S		0
+#define ICE_AQ_VSI_ACL_DEF_RX_PROF_M		(0xF << ICE_AQ_VSI_ACL_DEF_RX_PROF_S)
+#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_S		4
+#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_M		(0xF << ICE_AQ_VSI_ACL_DEF_RX_TABLE_S)
+#define ICE_AQ_VSI_ACL_DEF_TX_PROF_S		8
+#define ICE_AQ_VSI_ACL_DEF_TX_PROF_M		(0xF << ICE_AQ_VSI_ACL_DEF_TX_PROF_S)
+#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_S		12
+#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_M		(0xF << ICE_AQ_VSI_ACL_DEF_TX_TABLE_S)
 	/* flow director section */
 	__le16 fd_options;
-#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
-#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
-#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+#define ICE_AQ_VSI_FD_ENABLE			BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE		BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE		BIT(3)
 	__le16 max_fd_fltr_dedicated;
 	__le16 max_fd_fltr_shared;
 	__le16 fd_def_q;
-#define ICE_AQ_VSI_FD_DEF_Q_S		0
-#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
-#define ICE_AQ_VSI_FD_DEF_GRP_S	12
-#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+#define ICE_AQ_VSI_FD_DEF_Q_S			0
+#define ICE_AQ_VSI_FD_DEF_Q_M			(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S			12
+#define ICE_AQ_VSI_FD_DEF_GRP_M			(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
 	__le16 fd_report_opt;
-#define ICE_AQ_VSI_FD_REPORT_Q_S	0
-#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
-#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
-#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
-#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+#define ICE_AQ_VSI_FD_REPORT_Q_S		0
+#define ICE_AQ_VSI_FD_REPORT_Q_M		(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S		12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M		(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP			BIT(15)
 	/* PASID section */
 	__le32 pasid_id;
-#define ICE_AQ_VSI_PASID_ID_S		0
-#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
-#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+#define ICE_AQ_VSI_PASID_ID_S			0
+#define ICE_AQ_VSI_PASID_ID_M			(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID		BIT(31)
 	u8 reserved[24];
 };
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c153c7ca78..9ce0280726 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1131,28 +1131,28 @@ ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
 	if (vsi->info.valid_sections &
 		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
 		if (on) {
-			if ((vsi->info.outer_tag_flags &
-			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
-			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+			if ((vsi->info.outer_vlan_flags &
+			     ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST)
 				return 0; /* already on */
 		} else {
-			if (!(vsi->info.outer_tag_flags &
-			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+			if (!(vsi->info.outer_vlan_flags &
+			      ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST))
 				return 0; /* already off */
 		}
 	}
 
 	if (on)
-		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST;
 	else
 		qinq_flags = 0;
 	/* clear global insertion and use per packet insertion */
-	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
-	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
-	vsi->info.outer_tag_flags |= qinq_flags;
+	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT);
+	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST);
+	vsi->info.outer_vlan_flags |= qinq_flags;
 	/* use default vlan type 0x8100 */
-	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
-	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_vlan_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
 				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
 	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
 	ctxt.info.valid_sections =
@@ -1184,27 +1184,27 @@ ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
 	if (vsi->info.valid_sections &
 		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
 		if (on) {
-			if ((vsi->info.outer_tag_flags &
-			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
-			    ICE_AQ_VSI_OUTER_TAG_COPY)
+			if ((vsi->info.outer_vlan_flags &
+			     ICE_AQ_VSI_OUTER_VLAN_EMODE_M) ==
+			    ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW)
 				return 0; /* already on */
 		} else {
-			if ((vsi->info.outer_tag_flags &
-			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
-			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+			if ((vsi->info.outer_vlan_flags &
+			     ICE_AQ_VSI_OUTER_VLAN_EMODE_M) ==
+			    ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH)
 				return 0; /* already off */
 		}
 	}
 
 	if (on)
-		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW;
 	else
-		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
-	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
-	vsi->info.outer_tag_flags |= qinq_flags;
+		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH;
+	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M);
+	vsi->info.outer_vlan_flags |= qinq_flags;
 	/* use default vlan type 0x8100 */
-	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
-	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_vlan_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
 				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
 	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
 	ctxt.info.valid_sections =
@@ -1582,8 +1582,8 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 		vsi_ctx.info.sw_id = hw->port_info->sw_id;
 		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
 		/* Allow all untagged or tagged packets */
-		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
-		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL;
+		vsi_ctx.info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
 		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
 					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
 
@@ -4124,24 +4124,24 @@ ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
 	if (vsi->info.valid_sections &
 		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
 		if (on) {
-			if ((vsi->info.vlan_flags &
-			     ICE_AQ_VSI_VLAN_EMOD_M) ==
-			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+			if ((vsi->info.inner_vlan_flags &
+			     ICE_AQ_VSI_INNER_VLAN_EMODE_M) ==
+			    ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH)
 				return 0; /* already on */
 		} else {
-			if ((vsi->info.vlan_flags &
-			     ICE_AQ_VSI_VLAN_EMOD_M) ==
-			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+			if ((vsi->info.inner_vlan_flags &
+			     ICE_AQ_VSI_INNER_VLAN_EMODE_M) ==
+			    ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING)
 				return 0; /* already off */
 		}
 	}
 
 	if (on)
-		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+		vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH;
 	else
-		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
-	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
-	vsi->info.vlan_flags |= vlan_flags;
+		vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
+	vsi->info.inner_vlan_flags &= ~(ICE_AQ_VSI_INNER_VLAN_EMODE_M);
+	vsi->info.inner_vlan_flags |= vlan_flags;
 	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
 	ctxt.info.valid_sections =
 		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
@@ -4627,24 +4627,24 @@ ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 	}
 
 	if (info->on) {
-		vsi->info.pvid = info->config.pvid;
+		vsi->info.port_based_inner_vlan = info->config.pvid;
 		/**
 		 * If insert pvid is enabled, only tagged pkts are
 		 * allowed to be sent out.
 		 */
-		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
-			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+		vlan_flags = ICE_AQ_VSI_INNER_VLAN_INSERT_PVID |
+			     ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED;
 	} else {
-		vsi->info.pvid = 0;
+		vsi->info.port_based_inner_vlan = 0;
 		if (info->config.reject.tagged == 0)
-			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+			vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTTAGGED;
 
 		if (info->config.reject.untagged == 0)
-			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+			vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED;
 	}
-	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
-				  ICE_AQ_VSI_VLAN_MODE_M);
-	vsi->info.vlan_flags |= vlan_flags;
+	vsi->info.inner_vlan_flags &= ~(ICE_AQ_VSI_INNER_VLAN_INSERT_PVID |
+				  ICE_AQ_VSI_INNER_VLAN_EMODE_M);
+	vsi->info.inner_vlan_flags |= vlan_flags;
 	memset(&ctxt, 0, sizeof(ctxt));
 	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
 	ctxt.info.valid_sections =
-- 
2.25.1



* [dpdk-dev] [PATCH 03/22] net/ice/base: add interface to support configuring VLAN mode
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 01/22] net/ice: support RSS hash for IP fragment Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 02/22] net/ice/base: align add VSI and update VSI AQ command buffer Wenjun Wu
@ 2021-08-03  8:37 ` Wenjun Wu
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 04/22] net/ice/base: fix outer VLAN related macro Wenjun Wu
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:37 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 4e4dc21e450b5860da650d255abb6e17c3e637a9 ]

The VLAN mode of the device has to be configured while the global
configuration lock is still held for the DDP download, specifically right
after the DDP has been downloaded. To support this, a VLAN mode
interface was added. By default the device stays in single VLAN mode
(SVM), which matches the current behavior. However, this can be changed
by implementing the .set_dvm op.
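
As a rough sketch of how the new interface is meant to be used (not part of
this patch; ice_example_set_dvm() and ice_example_init_vlan_ops() are invented
names), a later change that knows how to enable DVM could install a .set_dvm
handler while keeping the SVM default as the fallback:

/* Assumes the ice base driver headers (ice_common.h, ice_vlan_mode.h). */
static enum ice_status ice_example_set_dvm(struct ice_hw *hw)
{
	/* Enable double VLAN handling at the port level (opcode 0x0203). */
	return ice_aq_set_port_params(hw->port_info, 0, false, false, true,
				      NULL);
}

static void ice_example_init_vlan_ops(struct ice_hw *hw)
{
	ice_init_vlan_mode_ops(hw);	/* installs the set_svm default */
	hw->vlan_mode_ops.set_dvm = ice_example_set_dvm;
}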

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 23 ++++++++++++++
 drivers/net/ice/base/ice_common.c     | 40 ++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h     |  4 +++
 drivers/net/ice/base/ice_flex_pipe.c  |  7 +++++
 drivers/net/ice/base/ice_type.h       |  2 ++
 drivers/net/ice/base/ice_vlan_mode.c  | 44 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_vlan_mode.h  | 32 +++++++++++++++++++
 drivers/net/ice/base/meson.build      |  1 +
 8 files changed, 153 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_vlan_mode.c
 create mode 100644 drivers/net/ice/base/ice_vlan_mode.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 91d360be62..eebafee7c7 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -227,6 +227,27 @@ struct ice_aqc_get_sw_cfg_resp_elem {
 #define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
 };
 
+/* Set Port parameters, (direct, 0x0203) */
+struct ice_aqc_set_port_params {
+	__le16 cmd_flags;
+#define ICE_AQC_SET_P_PARAMS_SAVE_BAD_PACKETS	BIT(0)
+#define ICE_AQC_SET_P_PARAMS_PAD_SHORT_PACKETS	BIT(1)
+#define ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA	BIT(2)
+	__le16 bad_frame_vsi;
+#define ICE_AQC_SET_P_PARAMS_VSI_S	0
+#define ICE_AQC_SET_P_PARAMS_VSI_M	(0x3FF << ICE_AQC_SET_P_PARAMS_VSI_S)
+#define ICE_AQC_SET_P_PARAMS_VSI_VALID	BIT(15)
+	__le16 swid;
+#define ICE_AQC_SET_P_PARAMS_SWID_S	0
+#define ICE_AQC_SET_P_PARAMS_SWID_M	(0xFF << ICE_AQC_SET_P_PARAMS_SWID_S)
+#define ICE_AQC_SET_P_PARAMS_LOGI_PORT_ID_S	8
+#define ICE_AQC_SET_P_PARAMS_LOGI_PORT_ID_M	\
+				(0x3F << ICE_AQC_SET_P_PARAMS_LOGI_PORT_ID_S)
+#define ICE_AQC_SET_P_PARAMS_IS_LOGI_PORT	BIT(14)
+#define ICE_AQC_SET_P_PARAMS_SWID_VALID		BIT(15)
+	u8 reserved[10];
+};
+
 /* These resource type defines are used for all switch resource
  * commands where a resource type is required, such as:
  * Get Resource Allocation command (indirect 0x0204)
@@ -2713,6 +2734,7 @@ struct ice_aq_desc {
 		struct ice_aqc_sff_eeprom read_write_sff_param;
 		struct ice_aqc_set_port_id_led set_port_id_led;
 		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_set_port_params set_port_params;
 		struct ice_aqc_sw_rules sw_rules;
 		struct ice_aqc_storm_cfg storm_conf;
 		struct ice_aqc_add_get_recipe add_get_recipe;
@@ -2876,6 +2898,7 @@ enum ice_adminq_opc {
 
 	/* internal switch commands */
 	ice_aqc_opc_get_sw_cfg				= 0x0200,
+	ice_aqc_opc_set_port_params			= 0x0203,
 
 	/* Alloc/Free/Get Resources */
 	ice_aqc_opc_get_res_alloc			= 0x0204,
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 304e55e210..b5f1d8cce5 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -830,6 +830,9 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	if (status)
 		goto err_unroll_fltr_mgmt_struct;
 	ice_init_lock(&hw->tnl_lock);
+
+	ice_init_vlan_mode_ops(hw);
+
 	return ICE_SUCCESS;
 
 err_unroll_fltr_mgmt_struct:
@@ -2370,6 +2373,43 @@ void ice_clear_pxe_mode(struct ice_hw *hw)
 		ice_aq_clear_pxe_mode(hw);
 }
 
+/**
+ * ice_aq_set_port_params - set physical port parameters.
+ * @pi: pointer to the port info struct
+ * @bad_frame_vsi: defines the VSI to which bad frames are forwarded
+ * @save_bad_pac: if set packets with errors are forwarded to the bad frames VSI
+ * @pad_short_pac: if set transmit packets smaller than 60 bytes are padded
+ * @double_vlan: if set double VLAN is enabled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set Physical port parameters (0x0203)
+ */
+enum ice_status
+ice_aq_set_port_params(struct ice_port_info *pi, u16 bad_frame_vsi,
+		       bool save_bad_pac, bool pad_short_pac, bool double_vlan,
+		       struct ice_sq_cd *cd)
+
+{
+	struct ice_aqc_set_port_params *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+	u16 cmd_flags = 0;
+
+	cmd = &desc.params.set_port_params;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params);
+	cmd->bad_frame_vsi = CPU_TO_LE16(bad_frame_vsi);
+	if (save_bad_pac)
+		cmd_flags |= ICE_AQC_SET_P_PARAMS_SAVE_BAD_PACKETS;
+	if (pad_short_pac)
+		cmd_flags |= ICE_AQC_SET_P_PARAMS_PAD_SHORT_PACKETS;
+	if (double_vlan)
+		cmd_flags |= ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
 /**
  * ice_get_link_speed_based_on_phy_type - returns link speed
  * @phy_type_low: lower part of phy_type
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 8c16c7a024..765dc3054f 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -123,6 +123,10 @@ enum ice_status
 ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv,
 		       struct ice_sq_cd *cd);
 enum ice_status
+ice_aq_set_port_params(struct ice_port_info *pi, u16 bad_frame_vsi,
+		       bool save_bad_pac, bool pad_short_pac, bool double_vlan,
+		       struct ice_sq_cd *cd);
+enum ice_status
 ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
 		    struct ice_aqc_get_phy_caps_data *caps,
 		    struct ice_sq_cd *cd);
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index d74fecbf5b..6c7f83899d 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1006,6 +1006,13 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 			break;
 	}
 
+	if (!status) {
+		status = ice_set_vlan_mode(hw);
+		if (status)
+			ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n",
+				  status);
+	}
+
 	ice_release_global_cfg_lock(hw);
 
 	return status;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 6b8d44f0b4..996b58b743 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -57,6 +57,7 @@
 #include "ice_lan_tx_rx.h"
 #include "ice_flex_type.h"
 #include "ice_protocol_type.h"
+#include "ice_vlan_mode.h"
 
 /**
  * ice_is_pow2 - check if integer value is a power of 2
@@ -985,6 +986,7 @@ struct ice_hw {
 	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
 	struct ice_lock rss_locks;	/* protect RSS configuration */
 	struct LIST_HEAD_TYPE rss_list_head;
+	struct ice_vlan_mode_ops vlan_mode_ops;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
new file mode 100644
index 0000000000..603de74e25
--- /dev/null
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2020 Intel Corporation
+ */
+
+#include "ice_vlan_mode.h"
+#include "ice_common.h"
+
+/**
+ * ice_set_svm - set single VLAN mode
+ * @hw: pointer to the HW structure
+ */
+static enum ice_status ice_set_svm_dflt(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
+	return ice_aq_set_port_params(hw->port_info, 0, false, false, false, NULL);
+}
+
+/**
+ * ice_init_vlan_mode_ops - initialize VLAN mode configuration ops
+ * @hw: pointer to the HW structure
+ */
+void ice_init_vlan_mode_ops(struct ice_hw *hw)
+{
+	hw->vlan_mode_ops.set_dvm = NULL;
+	hw->vlan_mode_ops.set_svm = ice_set_svm_dflt;
+}
+
+/**
+ * ice_set_vlan_mode
+ * @hw: pointer to the HW structure
+ */
+enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_ERR_NOT_IMPL;
+
+	if (hw->vlan_mode_ops.set_dvm)
+		status = hw->vlan_mode_ops.set_dvm(hw);
+
+	if (status)
+		return hw->vlan_mode_ops.set_svm(hw);
+
+	return ICE_SUCCESS;
+}
diff --git a/drivers/net/ice/base/ice_vlan_mode.h b/drivers/net/ice/base/ice_vlan_mode.h
new file mode 100644
index 0000000000..1b9db4d36f
--- /dev/null
+++ b/drivers/net/ice/base/ice_vlan_mode.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2020 Intel Corporation
+ */
+
+#ifndef _ICE_VLAN_MODE_H_
+#define _ICE_VLAN_MODE_H_
+
+struct ice_hw;
+
+enum ice_status ice_set_vlan_mode(struct ice_hw *hw);
+void ice_init_vlan_mode_ops(struct ice_hw *hw);
+
+/* This structure defines the VLAN mode configuration interface. It is used to set the VLAN mode.
+ *
+ * Note: These operations will be called while the global configuration lock is held.
+ *
+ * enum ice_status (*set_svm)(struct ice_hw *hw);
+ *	This function is called when the DDP and/or Firmware don't support double VLAN mode (DVM) or
+ *	if the set_dvm op is not implemented and/or returns failure. It will set the device in
+ *	single VLAN mode (SVM).
+ *
+ * enum ice_status (*set_dvm)(struct ice_hw *hw);
+ *	This function is called when the DDP and Firmware support double VLAN mode (DVM). It should
+ *	be implemented to set double VLAN mode. If it fails or remains unimplemented, set_svm will
+ *	be called as a fallback plan.
+ */
+struct ice_vlan_mode_ops {
+	enum ice_status (*set_svm)(struct ice_hw *hw);
+	enum ice_status (*set_dvm)(struct ice_hw *hw);
+};
+
+#endif /* _ICE_VLAN_MODE_H */
diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
index 8f7d4384e7..6483ea0298 100644
--- a/drivers/net/ice/base/meson.build
+++ b/drivers/net/ice/base/meson.build
@@ -13,6 +13,7 @@ sources = [
 	'ice_fdir.c',
 	'ice_acl.c',
 	'ice_acl_ctrl.c',
+	'ice_vlan_mode.c',
 ]
 
 error_cflags = ['-Wno-unused-value',
-- 
2.25.1



* [dpdk-dev] [PATCH 04/22] net/ice/base: fix outer VLAN related macro
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (2 preceding siblings ...)
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 03/22] net/ice/base: add interface to support configuring VLAN mode Wenjun Wu
@ 2021-08-03  8:37 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 05/22] net/ice/base: add VLAN TPID for VLAN filters Wenjun Wu
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:37 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 25aa214490814d14e5f8f69121c23c0b91d2aeb9 ]

Fix the wrong value of the ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST macro.

Fixes: 9ea028123a0b ("net/ice/base: align add VSI and update VSI AQ command buffer")

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index eebafee7c7..c0617093d4 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -500,7 +500,7 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_OUTER_TAG_VLAN_8100		0x2
 #define ICE_AQ_VSI_OUTER_TAG_VLAN_9100		0x3
 #define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT		BIT(4)
-#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST		BIT(4)
+#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST	BIT(6)
 #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S			5
 #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M			(0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S)
 #define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED	0x1
-- 
2.25.1



* [dpdk-dev] [PATCH 05/22] net/ice/base: add VLAN TPID for VLAN filters
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (3 preceding siblings ...)
  2021-08-03  8:37 ` [dpdk-dev] [PATCH 04/22] net/ice/base: fix outer VLAN related macro Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 06/22] net/ice/base: support checking double VLAN mode Wenjun Wu
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit a6b975d23c10756083357355372c4f545ddc1ebe ]

Currently, VLAN filters via RID4 are based only on the VLAN ID. However,
with the incoming support for double VLAN mode (DVM), the driver needs to
be able to filter on VLAN ID + VLAN TPID (e.g. 0x8100, 0x88a8, etc.).
Add support for this by adding two fields to the ice_fltr_info
structure: a tpid field holding the TPID to program, and a tpid_valid
field so the code can determine whether to keep the default 0x8100 value
when programming packets or use the tpid field instead.
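
As an illustrative sketch (not part of the patch; other mandatory filter
fields and the call into the switch rule API are omitted), a caller could key
a VLAN filter on the 0x88a8 TPID like this:

/* Assumes the ice base driver headers (ice_switch.h). */
struct ice_fltr_info f_info = { 0 };

f_info.lkup_type = ICE_SW_LKUP_VLAN;
f_info.fltr_act = ICE_FWD_TO_VSI;
f_info.l_data.vlan.vlan_id = 100;	/* VLAN ID to match */
f_info.l_data.vlan.tpid = 0x88a8;	/* outer S-tag ethertype */
f_info.l_data.vlan.tpid_valid = 1;	/* use tpid instead of the 0x8100 default */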

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 6 ++++++
 drivers/net/ice/base/ice_switch.h | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index e6ea04183f..8d455f5995 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -14,6 +14,7 @@
 #define ICE_PPP_IPV6_PROTO_ID		0x0057
 #define ICE_IPV6_ETHER_ID		0x86DD
 #define ICE_TCP_PROTO_ID		0x06
+#define ICE_ETH_P_8021Q			0x8100
 
 /* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
  * struct to configure any switch filter rules.
@@ -3034,6 +3035,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
 {
 	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	u16 vlan_tpid = ICE_ETH_P_8021Q;
 	void *daddr = NULL;
 	u16 eth_hdr_sz;
 	u8 *eth_hdr;
@@ -3106,6 +3108,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		break;
 	case ICE_SW_LKUP_VLAN:
 		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->l_data.vlan.tpid_valid)
+			vlan_tpid = f_info->l_data.vlan.tpid;
 		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
 		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
 			act |= ICE_SINGLE_ACT_PRUNE;
@@ -3149,6 +3153,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
 		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
 		*off = CPU_TO_BE16(vlan_id);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(vlan_tpid);
 	}
 
 	/* Create the switch rule with the final dummy Ethernet header */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index be9b74fd4c..392eb369c7 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -160,6 +160,8 @@ struct ice_fltr_info {
 		} mac_vlan;
 		struct {
 			u16 vlan_id;
+			u16 tpid;
+			u8 tpid_valid;
 		} vlan;
 		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
 		 * if just using ethertype as filter. Set lkup_type as
-- 
2.25.1



* [dpdk-dev] [PATCH 06/22] net/ice/base: support checking double VLAN mode
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (4 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 05/22] net/ice/base: add VLAN TPID for VLAN filters Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 07/22] net/ice/base: support configuring device in " Wenjun Wu
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 67285599c9f413c59118379d1f7162031ea6acdc ]

If a driver wants to configure double VLAN mode (DVM), it first needs to
check whether the DDP supports DVM. To do this the driver reads the
package metadata section via the upload section AQ command (0x0C41).

If the DDP doesn't support configuring double VLAN mode (DVM), then
there is nothing to do regarding the VLAN mode of the device.

The set_svm() or set_dvm() ops should only be called if the current
configuration supports configuring the VLAN mode of the device.
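
The patch also adds a generic ice_bitmap_from_array32() helper for expanding
u32 words into an ice bitmap. A tiny usage sketch (values invented purely for
illustration, assuming the ice base driver environment):

ice_declare_bitmap(caps, 64);			/* 64-bit destination bitmap */
u32 raw[2] = { 0x00000005, 0x80000000 };	/* e.g. words read from the package */
bool cap0;

ice_bitmap_from_array32(caps, raw, 64);
cap0 = ice_is_bit_set(caps, 0);			/* true: bit 0 was set in raw[0] */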

Suggested-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_bitops.h    | 45 ++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.c | 63 +++++++++++++++++++++-
 drivers/net/ice/base/ice_flex_pipe.h | 11 ++++
 drivers/net/ice/base/ice_flex_type.h | 26 ++++++++++
 drivers/net/ice/base/ice_vlan_mode.c | 78 ++++++++++++++++++++++++++++
 5 files changed, 221 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
index 39548967cc..b786bf7a18 100644
--- a/drivers/net/ice/base/ice_bitops.h
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -449,4 +449,49 @@ ice_cmp_bitmap(ice_bitmap_t *bmp1, ice_bitmap_t *bmp2, u16 size)
 	return true;
 }
 
+/**
+ * ice_bitmap_from_array32 - copies u32 array source into bitmap destination
+ * @dst: the destination bitmap
+ * @src: the source u32 array
+ * @size: size of the bitmap (in bits)
+ *
+ * This function copies the src bitmap stored in an u32 array into the dst
+ * bitmap stored as an ice_bitmap_t.
+ */
+static inline void
+ice_bitmap_from_array32(ice_bitmap_t *dst, u32 *src, u16 size)
+{
+	u32 remaining_bits, i;
+
+#define BITS_PER_U32	(sizeof(u32) * BITS_PER_BYTE)
+	/* clear bitmap so we only have to set when iterating */
+	ice_zero_bitmap(dst, size);
+
+	for (i = 0; i < (u32)(size / BITS_PER_U32); i++) {
+		u32 bit_offset = i * BITS_PER_U32;
+		u32 entry = src[i];
+		u32 j;
+
+		for (j = 0; j < BITS_PER_U32; j++) {
+			if (entry & BIT(j))
+				ice_set_bit((u16)(j + bit_offset), dst);
+		}
+	}
+
+	/* still need to check the leftover bits (i.e. if size isn't evenly
+	 * divisible by BITS_PER_U32
+	 **/
+	remaining_bits = size % BITS_PER_U32;
+	if (remaining_bits) {
+		u32 bit_offset = i * BITS_PER_U32;
+		u32 entry = src[i];
+		u32 j;
+
+		for (j = 0; j < remaining_bits; j++) {
+			if (entry & BIT(j))
+				ice_set_bit((u16)(j + bit_offset), dst);
+		}
+	}
+}
+
 #endif /* _ICE_BITOPS_H_ */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 6c7f83899d..e511b50a00 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -807,6 +807,28 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
 	return status;
 }
 
+/**
+ * ice_aq_upload_section
+ * @hw: pointer to the hardware structure
+ * @pkg_buf: the package buffer which will receive the section
+ * @buf_size: the size of the package buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Upload Section (0x0C41)
+ */
+enum ice_status
+ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+		      u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
+}
+
 /**
  * ice_aq_update_pkg
  * @hw: pointer to the hardware structure
@@ -1800,7 +1822,7 @@ void ice_init_prof_result_bm(struct ice_hw *hw)
  *
  * Frees a package buffer
  */
-static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
+void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 {
 	ice_free(hw, bld);
 }
@@ -1899,6 +1921,43 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 	return NULL;
 }
 
+/**
+ * ice_pkg_buf_alloc_single_section
+ * @hw: pointer to the HW structure
+ * @type: the section type value
+ * @size: the size of the section to reserve (in bytes)
+ * @section: returns pointer to the section
+ *
+ * Allocates a package buffer with a single section.
+ * Note: all package contents must be in Little Endian form.
+ */
+struct ice_buf_build *
+ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
+				 void **section)
+{
+	struct ice_buf_build *buf;
+
+	if (!section)
+		return NULL;
+
+	buf = ice_pkg_buf_alloc(hw);
+	if (!buf)
+		return NULL;
+
+	if (ice_pkg_buf_reserve_section(buf, 1))
+		goto ice_pkg_buf_alloc_single_section_err;
+
+	*section = ice_pkg_buf_alloc_section(buf, type, size);
+	if (!*section)
+		goto ice_pkg_buf_alloc_single_section_err;
+
+	return buf;
+
+ice_pkg_buf_alloc_single_section_err:
+	ice_pkg_buf_free(hw, buf);
+	return NULL;
+}
+
 /**
  * ice_pkg_buf_get_active_sections
  * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
@@ -1926,7 +1985,7 @@ static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
  *
  * Return a pointer to the buffer's header
  */
-static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
+struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 {
 	if (!bld)
 		return NULL;
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 214c7a2837..d4679cc940 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -38,6 +38,12 @@ ice_init_prof_result_bm(struct ice_hw *hw);
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
 		   ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list);
+enum ice_status
+ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
+u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
+enum ice_status
+ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
+		      u16 buf_size, struct ice_sq_cd *cd);
 bool
 ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 			 u16 *port);
@@ -75,6 +81,11 @@ void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
+struct ice_buf_build *
+ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
+				 void **section);
+struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
+void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
 
 enum ice_status
 ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 1dd57baccd..169476369b 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -786,4 +786,30 @@ enum ice_prof_type {
 	ICE_PROF_TUN_ALL = 0xE,
 	ICE_PROF_ALL = 0xFF,
 };
+
+/* Number of bits/bytes contained in meta init entry. Note, this should be a
+ * multiple of 32 bits.
+ */
+#define ICE_META_INIT_BITS	192
+#define ICE_META_INIT_DW_CNT	(ICE_META_INIT_BITS / (sizeof(__le32) * \
+				 BITS_PER_BYTE))
+
+/* The meta init Flag field starts at this bit */
+#define ICE_META_FLAGS_ST		123
+
+/* The entry and bit to check for Double VLAN Mode (DVM) support */
+#define ICE_META_VLAN_MODE_ENTRY	0
+#define ICE_META_FLAG_VLAN_MODE		60
+#define ICE_META_VLAN_MODE_BIT		(ICE_META_FLAGS_ST + \
+					 ICE_META_FLAG_VLAN_MODE)
+
+struct ice_meta_init_entry {
+	__le32 bm[ICE_META_INIT_DW_CNT];
+};
+
+struct ice_meta_init_section {
+	__le16 count;
+	__le16 offset;
+	struct ice_meta_init_entry entry[1];
+};
 #endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 603de74e25..c4e7a40295 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -5,6 +5,81 @@
 #include "ice_vlan_mode.h"
 #include "ice_common.h"
 
+/**
+ * ice_pkg_get_supported_vlan_mode - chk if DDP supports Double VLAN mode (DVM)
+ * @hw: pointer to the HW struct
+ * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false)
+ */
+static enum ice_status
+ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm)
+{
+	u16 meta_init_size = sizeof(struct ice_meta_init_section);
+	struct ice_meta_init_section *sect;
+	struct ice_buf_build *bld;
+	enum ice_status status;
+
+	/* if anything fails, we assume there is no DVM support */
+	*dvm = false;
+
+	bld = ice_pkg_buf_alloc_single_section(hw,
+					       ICE_SID_RXPARSER_METADATA_INIT,
+					       meta_init_size, (void **)&sect);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* only need to read a single section */
+	sect->count = CPU_TO_LE16(1);
+	sect->offset = CPU_TO_LE16(ICE_META_VLAN_MODE_ENTRY);
+
+	status = ice_aq_upload_section(hw,
+				       (struct ice_buf_hdr *)ice_pkg_buf(bld),
+				       ICE_PKG_BUF_SIZE, NULL);
+	if (!status) {
+		ice_declare_bitmap(entry, ICE_META_INIT_BITS);
+		u32 arr[ICE_META_INIT_DW_CNT];
+		u16 i;
+
+		/* convert to host bitmap format */
+		for (i = 0; i < ICE_META_INIT_DW_CNT; i++)
+			arr[i] = LE32_TO_CPU(sect->entry[0].bm[i]);
+
+		ice_bitmap_from_array32(entry, arr, (u16)ICE_META_INIT_BITS);
+
+		/* check if DVM is supported */
+		*dvm = ice_is_bit_set(entry, ICE_META_VLAN_MODE_BIT);
+	}
+
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_is_dvm_supported - check if double VLAN mode is supported based on DDP
+ * @hw: pointer to the hardware structure
+ *
+ * Returns true if DVM is supported and false if only SVM is supported. This
+ * function should only be called while the global config lock is held and after
+ * the package has been successfully downloaded.
+ */
+static bool ice_is_dvm_supported(struct ice_hw *hw)
+{
+	enum ice_status status;
+	bool pkg_supports_dvm;
+
+	status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm);
+	if (status) {
+		ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, err %d\n",
+			  status);
+		return false;
+	}
+
+	if (!pkg_supports_dvm)
+		return false;
+
+	return true;
+}
+
 /**
  * ice_set_svm - set single VLAN mode
  * @hw: pointer to the HW structure
@@ -34,6 +109,9 @@ enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
 {
 	enum ice_status status = ICE_ERR_NOT_IMPL;
 
+	if (!ice_is_dvm_supported(hw))
+		return ICE_SUCCESS;
+
 	if (hw->vlan_mode_ops.set_dvm)
 		status = hw->vlan_mode_ops.set_dvm(hw);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 07/22] net/ice/base: support configuring device in double VLAN mode
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (5 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 06/22] net/ice/base: support checking double VLAN mode Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 08/22] net/ice/base: do not set VLAN mode in DCF mode Wenjun Wu
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 14e7a4b37b4f2f765b4da08019ffc9098d99a076 ]

In order to support configuring the device in Double VLAN Mode (DVM),
the DDP and FW have to support DVM. If both support DVM, the PF
that downloads the package needs to update the default recipes and set
the VLAN mode. This is done in ice_set_dvm().

In order to support updating the default recipes in DVM, add support
for updating an existing switch recipe's lkup_idx and mask.
This is done by first calling the get recipe AQ (0x0292) with the
desired recipe ID. Then, if that is successful, update one of the lookup
indices (lkup_idx) and its associated mask if the mask is valid;
otherwise the already existing mask is kept.
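
As a usage illustration only (values are those of the ICE_SW_LKUP_VLAN
entry added later in this patch; hw and status are assumed to be in
scope), a caller fills in the parameters and lets the helper do the
get/modify/add sequence:

struct ice_update_recipe_lkup_idx_params vlan_upd = {
	.rid = ICE_SW_LKUP_VLAN,                 /* default recipe to modify */
	.fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX,   /* outer VLAN ID field */
	.ignore_valid = true,
	.mask = 0,
	.mask_valid = false,                     /* keep the pre-existing mask */
	.lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX,
};

status = ice_update_recipe_lkup_idx(hw, &vlan_upd);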

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  36 ++++
 drivers/net/ice/base/ice_common.c     |   2 -
 drivers/net/ice/base/ice_flex_pipe.c  |   9 +-
 drivers/net/ice/base/ice_switch.c     |  58 ++++++
 drivers/net/ice/base/ice_switch.h     |  12 ++
 drivers/net/ice/base/ice_type.h       |   2 +-
 drivers/net/ice/base/ice_vlan_mode.c  | 284 ++++++++++++++++++++++++--
 drivers/net/ice/base/ice_vlan_mode.h  |  24 +--
 8 files changed, 381 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index c0617093d4..1bd4f2fc8d 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -358,6 +358,40 @@ struct ice_aqc_get_allocd_res_desc {
 	__le32 addr_low;
 };
 
+/* Request buffer for Set VLAN Mode AQ command (indirect 0x020C) */
+struct ice_aqc_set_vlan_mode {
+	u8 reserved;
+	u8 l2tag_prio_tagging;
+#define ICE_AQ_VLAN_PRIO_TAG_S			0
+#define ICE_AQ_VLAN_PRIO_TAG_M			(0x7 << ICE_AQ_VLAN_PRIO_TAG_S)
+#define ICE_AQ_VLAN_PRIO_TAG_NOT_SUPPORTED	0x0
+#define ICE_AQ_VLAN_PRIO_TAG_STAG		0x1
+#define ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG		0x2
+#define ICE_AQ_VLAN_PRIO_TAG_OUTER_VLAN		0x3
+#define ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG		0x4
+#define ICE_AQ_VLAN_PRIO_TAG_MAX		0x4
+#define ICE_AQ_VLAN_PRIO_TAG_ERROR		0x7
+	u8 l2tag_reserved[64];
+	u8 rdma_packet;
+#define ICE_AQ_VLAN_RDMA_TAG_S			0
+#define ICE_AQ_VLAN_RDMA_TAG_M			(0x3F << ICE_AQ_VLAN_RDMA_TAG_S)
+#define ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING	0x10
+#define ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING	0x1A
+	u8 rdma_reserved[2];
+	u8 mng_vlan_prot_id;
+#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER	0x10
+#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER	0x11
+	u8 prot_id_reserved[30];
+};
+
+/* Response buffer for Get VLAN Mode AQ command (indirect 0x020D) */
+struct ice_aqc_get_vlan_mode {
+	u8 vlan_mode;
+#define ICE_AQ_VLAN_MODE_DVM_ENA	BIT(0)
+	u8 l2tag_prio_tagging;
+	u8 reserved[98];
+};
+
 /* Add VSI (indirect 0x0210)
  * Update VSI (indirect 0x0211)
  * Get VSI (indirect 0x0212)
@@ -2905,6 +2939,8 @@ enum ice_adminq_opc {
 	ice_aqc_opc_alloc_res				= 0x0208,
 	ice_aqc_opc_free_res				= 0x0209,
 	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+	ice_aqc_opc_set_vlan_mode_parameters		= 0x020C,
+	ice_aqc_opc_get_vlan_mode_parameters		= 0x020D,
 
 	/* VSI commands */
 	ice_aqc_opc_add_vsi				= 0x0210,
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index b5f1d8cce5..0f120ec6e0 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -831,8 +831,6 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 		goto err_unroll_fltr_mgmt_struct;
 	ice_init_lock(&hw->tnl_lock);
 
-	ice_init_vlan_mode_ops(hw);
-
 	return ICE_SUCCESS;
 
 err_unroll_fltr_mgmt_struct:
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index e511b50a00..cced7b6352 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1073,6 +1073,7 @@ static enum ice_status
 ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_buf_table *ice_buf_tbl;
+	enum ice_status status;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n",
@@ -1090,8 +1091,12 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 	ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n",
 		  LE32_TO_CPU(ice_buf_tbl->buf_count));
 
-	return ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
-				  LE32_TO_CPU(ice_buf_tbl->buf_count));
+	status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
+				    LE32_TO_CPU(ice_buf_tbl->buf_count));
+
+	ice_cache_vlan_mode(hw);
+
+	return status;
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 8d455f5995..df8319a7e7 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2768,6 +2768,64 @@ ice_aq_get_recipe(struct ice_hw *hw,
 	return status;
 }
 
+/**
+ * ice_update_recipe_lkup_idx - update a default recipe based on the lkup_idx
+ * @hw: pointer to the HW struct
+ * @params: parameters used to update the default recipe
+ *
+ * This function only supports updating default recipes and it only supports
+ * updating a single recipe based on the lkup_idx at a time.
+ *
+ * This is done as a read-modify-write operation. First, get the current recipe
+ * contents based on the recipe's ID. Then modify the field vector index and
+ * mask if it's valid at the lkup_idx. Finally, use the add recipe AQ to update
+ * the pre-existing recipe with the modifications.
+ */
+enum ice_status
+ice_update_recipe_lkup_idx(struct ice_hw *hw,
+			   struct ice_update_recipe_lkup_idx_params *params)
+{
+	struct ice_aqc_recipe_data_elem *rcp_list;
+	u16 num_recps = ICE_MAX_NUM_RECIPES;
+	enum ice_status status;
+
+	rcp_list = (struct ice_aqc_recipe_data_elem *)ice_malloc(hw, num_recps * sizeof(*rcp_list));
+	if (!rcp_list)
+		return ICE_ERR_NO_MEMORY;
+
+	/* read current recipe list from firmware */
+	rcp_list->recipe_indx = params->rid;
+	status = ice_aq_get_recipe(hw, rcp_list, &num_recps, params->rid, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SW, "Failed to get recipe %d, status %d\n",
+			  params->rid, status);
+		goto error_out;
+	}
+
+	/* only modify existing recipe's lkup_idx and mask if valid, while
+	 * leaving all other fields the same, then update the recipe firmware
+	 */
+	rcp_list->content.lkup_indx[params->lkup_idx] = params->fv_idx;
+	if (params->mask_valid)
+		rcp_list->content.mask[params->lkup_idx] =
+			CPU_TO_LE16(params->mask);
+
+	if (params->ignore_valid)
+		rcp_list->content.lkup_indx[params->lkup_idx] |=
+			ICE_AQ_RECIPE_LKUP_IGNORE;
+
+	status = ice_aq_add_recipe(hw, &rcp_list[0], 1, NULL);
+	if (status)
+		ice_debug(hw, ICE_DBG_SW, "Failed to update recipe %d lkup_idx %d fv_idx %d mask %d mask_valid %s, status %d\n",
+			  params->rid, params->lkup_idx, params->fv_idx,
+			  params->mask, params->mask_valid ? "true" : "false",
+			  status);
+
+error_out:
+	ice_free(hw, rcp_list);
+	return status;
+}
+
 /**
  * ice_aq_map_recipe_to_profile - Map recipe to packet profile
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 392eb369c7..78e6be35a9 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -202,6 +202,15 @@ struct ice_fltr_info {
 	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
 };
 
+struct ice_update_recipe_lkup_idx_params {
+	u16 rid;
+	u16 fv_idx;
+	bool ignore_valid;
+	u16 mask;
+	bool mask_valid;
+	u8 lkup_idx;
+};
+
 struct ice_adv_lkup_elem {
 	enum ice_protocol_type type;
 	union ice_prot_hdr h_u;	/* Header values */
@@ -524,4 +533,7 @@ ice_replay_vsi_all_fltr(struct ice_hw *hw, struct ice_port_info *pi,
 void ice_rm_sw_replay_rule_info(struct ice_hw *hw, struct ice_switch_info *sw);
 void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
 bool ice_is_prof_rule(enum ice_sw_tunnel_type type);
+enum ice_status
+ice_update_recipe_lkup_idx(struct ice_hw *hw,
+			   struct ice_update_recipe_lkup_idx_params *params);
 #endif /* _ICE_SWITCH_H_ */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 996b58b743..a4a4427693 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -986,7 +986,7 @@ struct ice_hw {
 	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
 	struct ice_lock rss_locks;	/* protect RSS configuration */
 	struct LIST_HEAD_TYPE rss_list_head;
-	struct ice_vlan_mode_ops vlan_mode_ops;
+	u8 dvm_ena;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index c4e7a40295..c86e803c52 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -55,21 +55,100 @@ ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm)
 }
 
 /**
- * ice_is_dvm_supported - check if double VLAN mode is supported based on DDP
+ * ice_aq_get_vlan_mode - get the VLAN mode of the device
+ * @hw: pointer to the HW structure
+ * @get_params: structure FW fills in based on the current VLAN mode config
+ *
+ * Get VLAN Mode Parameters (0x020D)
+ */
+static enum ice_status
+ice_aq_get_vlan_mode(struct ice_hw *hw,
+		     struct ice_aqc_get_vlan_mode *get_params)
+{
+	struct ice_aq_desc desc;
+
+	if (!get_params)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc,
+				      ice_aqc_opc_get_vlan_mode_parameters);
+
+	return ice_aq_send_cmd(hw, &desc, get_params, sizeof(*get_params),
+			       NULL);
+}
+
+/**
+ * ice_aq_is_dvm_ena - query FW to check if double VLAN mode is enabled
+ * @hw: pointer to the HW structure
+ *
+ * Returns true if the hardware/firmware is configured in double VLAN mode,
+ * else return false signaling that the hardware/firmware is configured in
+ * single VLAN mode.
+ *
+ * Also, return false if this call fails for any reason (i.e. firmware doesn't
+ * support this AQ call).
+ */
+static bool ice_aq_is_dvm_ena(struct ice_hw *hw)
+{
+	struct ice_aqc_get_vlan_mode get_params = { 0 };
+	enum ice_status status;
+
+	status = ice_aq_get_vlan_mode(hw, &get_params);
+	if (status) {
+		ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n",
+			  status);
+		return false;
+	}
+
+	return (get_params.vlan_mode & ICE_AQ_VLAN_MODE_DVM_ENA);
+}
+
+/**
+ * ice_is_dvm_ena - check if double VLAN mode is enabled
+ * @hw: pointer to the HW structure
+ *
+ * The device is configured in single or double VLAN mode on initialization and
+ * this cannot be dynamically changed during runtime. Based on this there is no
+ * need to make an AQ call every time the driver needs to know the VLAN mode.
+ * Instead, use the cached VLAN mode.
+ */
+bool ice_is_dvm_ena(struct ice_hw *hw)
+{
+	return hw->dvm_ena;
+}
+
+/**
+ * ice_cache_vlan_mode - cache VLAN mode after DDP is downloaded
+ * @hw: pointer to the HW structure
+ *
+ * This is only called after downloading the DDP and after the global
+ * configuration lock has been released because all ports on a device need to
+ * cache the VLAN mode.
+ */
+void ice_cache_vlan_mode(struct ice_hw *hw)
+{
+	hw->dvm_ena = ice_aq_is_dvm_ena(hw) ? true : false;
+}
+
+/**
+ * ice_is_dvm_supported - check if Double VLAN Mode is supported
  * @hw: pointer to the hardware structure
  *
- * Returns true if DVM is supported and false if only SVM is supported. This
- * function should only be called while the global config lock is held and after
- * the package has been successfully downloaded.
+ * Returns true if Double VLAN Mode (DVM) is supported and false if only Single
+ * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and
+ * firmware must support it, otherwise only SVM is supported. This function
+ * should only be called while the global config lock is held and after the
+ * package has been successfully downloaded.
  */
 static bool ice_is_dvm_supported(struct ice_hw *hw)
 {
+	struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 };
 	enum ice_status status;
 	bool pkg_supports_dvm;
 
 	status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm);
 	if (status) {
-		ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, err %d\n",
+		ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, status %d\n",
 			  status);
 		return false;
 	}
@@ -77,28 +156,196 @@ static bool ice_is_dvm_supported(struct ice_hw *hw)
 	if (!pkg_supports_dvm)
 		return false;
 
+	/* If firmware returns success, then it supports DVM, else it only
+	 * supports SVM
+	 */
+	status = ice_aq_get_vlan_mode(hw, &get_vlan_mode);
+	if (status) {
+		ice_debug(hw, ICE_DBG_NVM, "Failed to get VLAN mode, status %d\n",
+			  status);
+		return false;
+	}
+
 	return true;
 }
 
+#define ICE_EXTERNAL_VLAN_ID_FV_IDX			11
+#define ICE_SW_LKUP_VLAN_LOC_LKUP_IDX			1
+#define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX		2
+#define ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX		2
+#define ICE_PKT_FLAGS_0_TO_15_FV_IDX			1
+#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK		0xD000
+static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = {
+	{
+		/* Update recipe ICE_SW_LKUP_VLAN to filter based on the
+		 * outer/single VLAN in DVM
+		 */
+		.rid = ICE_SW_LKUP_VLAN,
+		.fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX,
+		.ignore_valid = true,
+		.mask = 0,
+		.mask_valid = false, /* use pre-existing mask */
+		.lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX,
+	},
+	{
+		/* Update recipe ICE_SW_LKUP_VLAN to filter based on the VLAN
+		 * packet flags to support VLAN filtering on multiple VLAN
+		 * ethertypes (i.e. 0x8100 and 0x88a8) in DVM
+		 */
+		.rid = ICE_SW_LKUP_VLAN,
+		.fv_idx = ICE_PKT_FLAGS_0_TO_15_FV_IDX,
+		.ignore_valid = false,
+		.mask = ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK,
+		.mask_valid = true,
+		.lkup_idx = ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX,
+	},
+	{
+		/* Update recipe ICE_SW_LKUP_PROMISC_VLAN to filter based on the
+		 * outer/single VLAN in DVM
+		 */
+		.rid = ICE_SW_LKUP_PROMISC_VLAN,
+		.fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX,
+		.ignore_valid = true,
+		.mask = 0,
+		.mask_valid = false,  /* use pre-existing mask */
+		.lkup_idx = ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX,
+	},
+};
+
 /**
- * ice_set_svm - set single VLAN mode
+ * ice_dvm_update_dflt_recipes - update default switch recipes in DVM
+ * @hw: hardware structure used to update the recipes
+ */
+static enum ice_status ice_dvm_update_dflt_recipes(struct ice_hw *hw)
+{
+	unsigned long i;
+
+	for (i = 0; i < ARRAY_SIZE(ice_dvm_dflt_recipes); i++) {
+		struct ice_update_recipe_lkup_idx_params *params;
+		enum ice_status status;
+
+		params = &ice_dvm_dflt_recipes[i];
+
+		status = ice_update_recipe_lkup_idx(hw, params);
+		if (status) {
+			ice_debug(hw, ICE_DBG_INIT, "Failed to update RID %d lkup_idx %d fv_idx %d mask_valid %s mask 0x%04x\n",
+				  params->rid, params->lkup_idx, params->fv_idx,
+				  params->mask_valid ? "true" : "false",
+				  params->mask);
+			return status;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_set_vlan_mode - set the VLAN mode of the device
  * @hw: pointer to the HW structure
+ * @set_params: requested VLAN mode configuration
+ *
+ * Set VLAN Mode Parameters (0x020C)
+ */
+static enum ice_status
+ice_aq_set_vlan_mode(struct ice_hw *hw,
+		     struct ice_aqc_set_vlan_mode *set_params)
+{
+	u8 rdma_packet, mng_vlan_prot_id;
+	struct ice_aq_desc desc;
+
+	if (!set_params)
+		return ICE_ERR_PARAM;
+
+	if (set_params->l2tag_prio_tagging > ICE_AQ_VLAN_PRIO_TAG_MAX)
+		return ICE_ERR_PARAM;
+
+	rdma_packet = set_params->rdma_packet;
+	if (rdma_packet != ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING &&
+	    rdma_packet != ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING)
+		return ICE_ERR_PARAM;
+
+	mng_vlan_prot_id = set_params->mng_vlan_prot_id;
+	if (mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER &&
+	    mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc,
+				      ice_aqc_opc_set_vlan_mode_parameters);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, set_params, sizeof(*set_params),
+			       NULL);
+}
+
+/**
+ * ice_set_dvm - sets up software and hardware for double VLAN mode
+ * @hw: pointer to the hardware structure
  */
-static enum ice_status ice_set_svm_dflt(struct ice_hw *hw)
+static enum ice_status ice_set_dvm(struct ice_hw *hw)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+	struct ice_aqc_set_vlan_mode params = { 0 };
+	enum ice_status status;
+
+	params.l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG;
+	params.rdma_packet = ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING;
+	params.mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER;
+
+	status = ice_aq_set_vlan_mode(hw, &params);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to set double VLAN mode parameters, status %d\n",
+			  status);
+		return status;
+	}
 
-	return ice_aq_set_port_params(hw->port_info, 0, false, false, false, NULL);
+	status = ice_dvm_update_dflt_recipes(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to update default recipes for double VLAN mode, status %d\n",
+			  status);
+		return status;
+	}
+
+	status = ice_aq_set_port_params(hw->port_info, 0, false, false, true,
+					NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to set port in double VLAN mode, status %d\n",
+			  status);
+		return status;
+	}
+
+	return ICE_SUCCESS;
 }
 
 /**
- * ice_init_vlan_mode_ops - initialize VLAN mode configuration ops
+ * ice_set_svm - set single VLAN mode
  * @hw: pointer to the HW structure
  */
-void ice_init_vlan_mode_ops(struct ice_hw *hw)
+static enum ice_status ice_set_svm(struct ice_hw *hw)
 {
-	hw->vlan_mode_ops.set_dvm = NULL;
-	hw->vlan_mode_ops.set_svm = ice_set_svm_dflt;
+	struct ice_aqc_set_vlan_mode *set_params;
+	enum ice_status status;
+
+	status = ice_aq_set_port_params(hw->port_info, 0, false, false, false, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to set port parameters for single VLAN mode\n");
+		return status;
+	}
+
+	set_params = (struct ice_aqc_set_vlan_mode *)
+		ice_malloc(hw, sizeof(*set_params));
+	if (!set_params)
+		return ICE_ERR_NO_MEMORY;
+
+	/* default configuration for SVM configurations */
+	set_params->l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG;
+	set_params->rdma_packet = ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING;
+	set_params->mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER;
+
+	status = ice_aq_set_vlan_mode(hw, set_params);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to configure port in single VLAN mode\n");
+
+	ice_free(hw, set_params);
+	return status;
 }
 
 /**
@@ -107,16 +354,11 @@ void ice_init_vlan_mode_ops(struct ice_hw *hw)
  */
 enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
 {
-	enum ice_status status = ICE_ERR_NOT_IMPL;
-
 	if (!ice_is_dvm_supported(hw))
 		return ICE_SUCCESS;
 
-	if (hw->vlan_mode_ops.set_dvm)
-		status = hw->vlan_mode_ops.set_dvm(hw);
-
-	if (status)
-		return hw->vlan_mode_ops.set_svm(hw);
+	if (!ice_set_dvm(hw))
+		return ICE_SUCCESS;
 
-	return ICE_SUCCESS;
+	return ice_set_svm(hw);
 }
diff --git a/drivers/net/ice/base/ice_vlan_mode.h b/drivers/net/ice/base/ice_vlan_mode.h
index 1b9db4d36f..134bd41635 100644
--- a/drivers/net/ice/base/ice_vlan_mode.h
+++ b/drivers/net/ice/base/ice_vlan_mode.h
@@ -5,28 +5,12 @@
 #ifndef _ICE_VLAN_MODE_H_
 #define _ICE_VLAN_MODE_H_
 
+#include "ice_osdep.h"
+
 struct ice_hw;
 
+bool ice_is_dvm_ena(struct ice_hw *hw);
+void ice_cache_vlan_mode(struct ice_hw *hw);
 enum ice_status ice_set_vlan_mode(struct ice_hw *hw);
-void ice_init_vlan_mode_ops(struct ice_hw *hw);
-
-/* This structure defines the VLAN mode configuration interface. It is used to set the VLAN mode.
- *
- * Note: These operations will be called while the global configuration lock is held.
- *
- * enum ice_status (*set_svm)(struct ice_hw *hw);
- *	This function is called when the DDP and/or Firmware don't support double VLAN mode (DVM) or
- *	if the set_dvm op is not implemented and/or returns failure. It will set the device in
- *	single VLAN mode (SVM).
- *
- * enum ice_status (*set_dvm)(struct ice_hw *hw);
- *	This function is called when the DDP and Firmware support double VLAN mode (DVM). It should
- *	be implemented to set double VLAN mode. If it fails or remains unimplemented, set_svm will
- *	be called as a fallback plan.
- */
-struct ice_vlan_mode_ops {
-	enum ice_status (*set_svm)(struct ice_hw *hw);
-	enum ice_status (*set_dvm)(struct ice_hw *hw);
-};
 
 #endif /* _ICE_VLAN_MODE_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 08/22] net/ice/base: do not set VLAN mode in DCF mode
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (6 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 07/22] net/ice/base: support configuring device in " Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 09/22] net/ice/base: update boost TCAM for DVM Wenjun Wu
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Haiyue Wang <haiyue.wang@intel.com>

[ upstream commit 70f4e156ea52e3d8278acff30d06447eab623a15 ]

The PF will set the VLAN mode globally; DCF only needs to query the
VLAN mode.
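
Illustrative fragment only (hw assumed to be in scope): a DCF-side
consumer just reads the cached mode via the query helper added earlier
in this series; it never sets it:

/* DCF only consumes the VLAN mode that the PF configured */
if (ice_is_dvm_ena(hw))
	ice_debug(hw, ICE_DBG_INIT, "running in double VLAN mode\n");
else
	ice_debug(hw, ICE_DBG_INIT, "running in single VLAN mode\n");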

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_vlan_mode.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index c86e803c52..42bb108928 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -354,6 +354,12 @@ static enum ice_status ice_set_svm(struct ice_hw *hw)
  */
 enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
 {
+	/* DCF only has the ability to query the VLAN mode. Setting the VLAN
+	 * mode is done by the PF.
+	 */
+	if (hw->dcf_enabled)
+		return ICE_SUCCESS;
+
 	if (!ice_is_dvm_supported(hw))
 		return ICE_SUCCESS;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 09/22] net/ice/base: update boost TCAM for DVM
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (7 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 08/22] net/ice/base: do not set VLAN mode in DCF mode Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 10/22] net/ice/base: change protocol ID for VLAN in DVM Wenjun Wu
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit f977165db0ba8435269a5e19e0e9239a4b22d140 ]

Add code to update boost TCAM entries to enable DVM. This requires
enabling the DVM entries and disabling the SVM entries.
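
For illustration only, a minimal standalone sketch (not driver code) of
the label matching added here; the label names passed in are
hypothetical examples:

#include <stdio.h>
#include <string.h>

#define DVM_PRE "BOOST_MAC_VLAN_DVM"	/* enable these entries */
#define SVM_PRE "BOOST_MAC_VLAN_SVM"	/* disable these entries */

static void classify(const char *label)
{
	if (!strncmp(label, DVM_PRE, strlen(DVM_PRE)))
		printf("%s -> enable for DVM\n", label);
	else if (!strncmp(label, SVM_PRE, strlen(SVM_PRE)))
		printf("%s -> disable for DVM\n", label);
	else
		printf("%s -> not a VLAN mode hint\n", label);
}

int main(void)
{
	classify("BOOST_MAC_VLAN_DVM_TAG");	/* hypothetical label name */
	classify("BOOST_MAC_VLAN_SVM_TAG");	/* hypothetical label name */
	return 0;
}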

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 223 +++++++++++++++++++++++----
 drivers/net/ice/base/ice_flex_pipe.h |   1 +
 drivers/net/ice/base/ice_flex_type.h |  13 ++
 drivers/net/ice/base/ice_type.h      |   2 +
 drivers/net/ice/base/ice_vlan_mode.c |   7 +
 5 files changed, 213 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index cced7b6352..058694653a 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -7,9 +7,17 @@
 #include "ice_protocol_type.h"
 #include "ice_flow.h"
 
+/* For supporting double VLAN mode, it is necessary to enable or disable certain
+ * boost tcam entries. The metadata label names that match the following
+ * prefixes will be saved to allow enabling double VLAN mode.
+ */
+#define ICE_DVM_PRE	"BOOST_MAC_VLAN_DVM"	/* enable these entries */
+#define ICE_SVM_PRE	"BOOST_MAC_VLAN_SVM"	/* disable these entries */
+
 /* To support tunneling entries by PF, the package will append the PF number to
  * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
  */
+#define ICE_TNL_PRE	"TNL_"
 static const struct ice_tunnel_type_scan tnls[] = {
 	{ TNL_VXLAN,		"TNL_VXLAN_PF" },
 	{ TNL_GENEVE,		"TNL_GENEVE_PF" },
@@ -452,6 +460,57 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
 	return label->name;
 }
 
+/**
+ * ice_add_tunnel_hint
+ * @hw: pointer to the HW structure
+ * @label_name: label text
+ * @val: value of the tunnel port boost entry
+ */
+static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val)
+{
+	if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
+		u16 i;
+
+		for (i = 0; tnls[i].type != TNL_LAST; i++) {
+			size_t len = strlen(tnls[i].label_prefix);
+
+			/* Look for matching label start, before continuing */
+			if (strncmp(label_name, tnls[i].label_prefix, len))
+				continue;
+
+			/* Make sure this label matches our PF. Note that the PF
+			 * character ('0' - '7') will be located where our
+			 * prefix string's null terminator is located.
+			 */
+			if ((label_name[len] - '0') == hw->pf_id) {
+				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
+				hw->tnl.tbl[hw->tnl.count].valid = false;
+				hw->tnl.tbl[hw->tnl.count].in_use = false;
+				hw->tnl.tbl[hw->tnl.count].marked = false;
+				hw->tnl.tbl[hw->tnl.count].boost_addr = val;
+				hw->tnl.tbl[hw->tnl.count].port = 0;
+				hw->tnl.count++;
+				break;
+			}
+		}
+	}
+}
+
+/**
+ * ice_add_dvm_hint
+ * @hw: pointer to the HW structure
+ * @val: value of the boost entry
+ * @enable: true if entry needs to be enabled, or false if needs to be disabled
+ */
+static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable)
+{
+	if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) {
+		hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val;
+		hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable;
+		hw->dvm_upd.count++;
+	}
+}
+
 /**
  * ice_init_pkg_hints
  * @hw: pointer to the HW structure
@@ -478,40 +537,34 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 	label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state,
 				     &val);
 
-	while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
-		for (i = 0; tnls[i].type != TNL_LAST; i++) {
-			size_t len = strlen(tnls[i].label_prefix);
+	while (label_name) {
+		if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE)))
+			/* check for a tunnel entry */
+			ice_add_tunnel_hint(hw, label_name, val);
 
-			/* Look for matching label start, before continuing */
-			if (strncmp(label_name, tnls[i].label_prefix, len))
-				continue;
+		/* check for a dvm mode entry */
+		else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE)))
+			ice_add_dvm_hint(hw, val, true);
 
-			/* Make sure this label matches our PF. Note that the PF
-			 * character ('0' - '7') will be located where our
-			 * prefix string's null terminator is located.
-			 */
-			if ((label_name[len] - '0') == hw->pf_id) {
-				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
-				hw->tnl.tbl[hw->tnl.count].valid = false;
-				hw->tnl.tbl[hw->tnl.count].in_use = false;
-				hw->tnl.tbl[hw->tnl.count].marked = false;
-				hw->tnl.tbl[hw->tnl.count].boost_addr = val;
-				hw->tnl.tbl[hw->tnl.count].port = 0;
-				hw->tnl.count++;
-				break;
-			}
-		}
+		/* check for a svm mode entry */
+		else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE)))
+			ice_add_dvm_hint(hw, val, false);
 
 		label_name = ice_enum_labels(NULL, 0, &state, &val);
 	}
 
-	/* Cache the appropriate boost TCAM entry pointers */
+	/* Cache the appropriate boost TCAM entry pointers for tunnels */
 	for (i = 0; i < hw->tnl.count; i++) {
 		ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr,
 				     &hw->tnl.tbl[i].boost_entry);
 		if (hw->tnl.tbl[i].boost_entry)
 			hw->tnl.tbl[i].valid = true;
 	}
+
+	/* Cache the appropriate boost TCAM entry pointers for DVM and SVM */
+	for (i = 0; i < hw->dvm_upd.count; i++)
+		ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr,
+				     &hw->dvm_upd.tbl[i].boost_entry);
 }
 
 /* Key creation */
@@ -916,26 +969,21 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 }
 
 /**
- * ice_update_pkg
+ * ice_update_pkg_no_lock
  * @hw: pointer to the hardware structure
  * @bufs: pointer to an array of buffers
  * @count: the number of buffers in the array
- *
- * Obtains change lock and updates package.
  */
-enum ice_status
-ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+static enum ice_status
+ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 {
-	enum ice_status status;
-	u32 offset, info, i;
-
-	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
-	if (status)
-		return status;
+	enum ice_status status = ICE_SUCCESS;
+	u32 i;
 
 	for (i = 0; i < count; i++) {
 		struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i);
 		bool last = ((i + 1) == count);
+		u32 offset, info;
 
 		status = ice_aq_update_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
 					   last, &offset, &info, NULL);
@@ -947,6 +995,28 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 		}
 	}
 
+	return status;
+}
+
+/**
+ * ice_update_pkg
+ * @hw: pointer to the hardware structure
+ * @bufs: pointer to an array of buffers
+ * @count: the number of buffers in the array
+ *
+ * Obtains change lock and updates package.
+ */
+enum ice_status
+ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+{
+	enum ice_status status;
+
+	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+	if (status)
+		return status;
+
+	status = ice_update_pkg_no_lock(hw, bufs, count);
+
 	ice_release_change_lock(hw);
 
 	return status;
@@ -2122,6 +2192,93 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 	return res;
 }
 
+/**
+ * ice_upd_dvm_boost_entry
+ * @hw: pointer to the HW structure
+ * @entry: pointer to double vlan boost entry info
+ */
+static enum ice_status
+ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u8 val, dc, nm;
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for Rx parser, one for Tx parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_upd_dvm_boost_entry_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  ice_struct_size(sect_rx, tcam, 1));
+	if (!sect_rx)
+		goto ice_upd_dvm_boost_entry_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  ice_struct_size(sect_tx, tcam, 1));
+	if (!sect_tx)
+		goto ice_upd_dvm_boost_entry_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer */
+	ice_memcpy(sect_rx->tcam, entry->boost_entry, sizeof(*sect_rx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	/* re-write the don't care and never match bits accordingly */
+	if (entry->enable) {
+		/* all bits are don't care */
+		val = 0x00;
+		dc = 0xFF;
+		nm = 0x00;
+	} else {
+		/* disable, one never match bit, the rest are don't care */
+		val = 0x00;
+		dc = 0xF7;
+		nm = 0x08;
+	}
+
+	ice_set_key((u8 *)&sect_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key),
+		    &val, NULL, &dc, &nm, 0, sizeof(u8));
+
+	/* exact copy of entry to Tx section entry */
+	ice_memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	status = ice_update_pkg_no_lock(hw, ice_pkg_buf(bld), 1);
+
+ice_upd_dvm_boost_entry_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_set_dvm_boost_entries
+ * @hw: pointer to the HW structure
+ *
+ * Enable double vlan by updating the appropriate boost tcam entries.
+ */
+enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw)
+{
+	enum ice_status status;
+	u16 i;
+
+	for (i = 0; i < hw->dvm_upd.count; i++) {
+		status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]);
+		if (status)
+			return status;
+	}
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_create_tunnel
  * @hw: pointer to the HW structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index d4679cc940..61ce092319 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -49,6 +49,7 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 			 u16 *port);
 enum ice_status
 ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
+enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw);
 enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
 bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
 bool
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 169476369b..dfa3cf3020 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -543,6 +543,19 @@ struct ice_tunnel_table {
 	u16 count;
 };
 
+struct ice_dvm_entry {
+	u16 boost_addr;
+	u16 enable;
+	struct ice_boost_tcam_entry *boost_entry;
+};
+
+#define ICE_DVM_MAX_ENTRIES	48
+
+struct ice_dvm_table {
+	struct ice_dvm_entry tbl[ICE_DVM_MAX_ENTRIES];
+	u16 count;
+};
+
 struct ice_pkg_es {
 	__le16 count;
 	__le16 offset;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index a4a4427693..104f5dc101 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -962,6 +962,8 @@ struct ice_hw {
 	/* tunneling info */
 	struct ice_lock tnl_lock;
 	struct ice_tunnel_table tnl;
+	/* dvm boost update information */
+	struct ice_dvm_table dvm_upd;
 
 	struct ice_acl_tbl *acl_tbl;
 	struct ice_fd_hw_prof **acl_prof;
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 42bb108928..4a749cb9f1 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -312,6 +312,13 @@ static enum ice_status ice_set_dvm(struct ice_hw *hw)
 		return status;
 	}
 
+	status = ice_set_dvm_boost_entries(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to set boost TCAM entries for double VLAN mode, status %d\n",
+			  status);
+		return status;
+	}
+
 	return ICE_SUCCESS;
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 10/22] net/ice/base: change protocol ID for VLAN in DVM
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (8 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 09/22] net/ice/base: update boost TCAM for DVM Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 11/22] net/ice/base: refactor post DDP download VLAN mode config Wenjun Wu
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 8d7bb8d500b1ccdeb30668516064337faa20b364 ]

The protocol ID for the first VLAN in Double VLAN Mode (DVM) should be
ICE_VLAN_OF_HW = 16, while in Single VLAN Mode (SVM) it should be
ICE_VLAN_OL_HW = 17.

Change the protocol ID in the type-to-ID translation array for the
outer VLAN to ICE_VLAN_OF_HW when DVM is enabled, i.e. when the driver,
package, and firmware all support DVM.

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |  3 +++
 drivers/net/ice/base/ice_switch.c    | 19 ++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h    |  1 +
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 058694653a..a92c2b8494 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1166,6 +1166,9 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 
 	ice_cache_vlan_mode(hw);
 
+	if (ice_is_dvm_ena(hw))
+		ice_change_proto_id_to_dvm();
+
 	return status;
 }
 
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index df8319a7e7..55bfc3e6c5 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -6127,7 +6127,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = {
  * following policy.
  */
 
-static const struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
+static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
 	{ ICE_MAC_OFOS,		ICE_MAC_OFOS_HW },
 	{ ICE_MAC_IL,		ICE_MAC_IL_HW },
 	{ ICE_ETYPE_OL,		ICE_ETYPE_OL_HW },
@@ -6232,6 +6232,23 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
 	return ICE_MAX_NUM_RECIPES;
 }
 
+/**
+ * ice_change_proto_id_to_dvm - change proto id in prot_id_tbl
+ *
+ * As protocol id for outer vlan is different in dvm and svm, if dvm is
+ * supported protocol array record for outer vlan has to be modified to
+ * reflect the value proper for DVM.
+ */
+void ice_change_proto_id_to_dvm(void)
+{
+	u8 i;
+
+	for (i = 0; i < ARRAY_SIZE(ice_prot_id_tbl); i++)
+		if (ice_prot_id_tbl[i].type == ICE_VLAN_OFOS &&
+		    ice_prot_id_tbl[i].protocol_id != ICE_VLAN_OF_HW)
+			ice_prot_id_tbl[i].protocol_id = ICE_VLAN_OF_HW;
+}
+
 /**
  * ice_prot_type_to_id - get protocol ID from protocol type
  * @type: protocol type
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 78e6be35a9..dd50820430 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -536,4 +536,5 @@ bool ice_is_prof_rule(enum ice_sw_tunnel_type type);
 enum ice_status
 ice_update_recipe_lkup_idx(struct ice_hw *hw,
 			   struct ice_update_recipe_lkup_idx_params *params);
+void ice_change_proto_id_to_dvm(void);
 #endif /* _ICE_SWITCH_H_ */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 11/22] net/ice/base: refactor post DDP download VLAN mode config
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (9 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 10/22] net/ice/base: change protocol ID for VLAN in DVM Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 12/22] net/ice/base: log if DDP/FW do not support QinQ Wenjun Wu
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 5ade55ab43e6c07a904c03ebe2d796fdea94e7e0 ]

Currently it's not clear that only the first PF downloads the package
and configures the VLAN mode. While this is happening, all other PFs are
blocked on the global configuration lock. Once the package has been
successfully downloaded and the global configuration lock has been
released, all PFs resume initialization. This includes some post package
download VLAN mode configuration. To make this more obvious, add the new
function ice_post_pkg_dwnld_vlan_mode_cfg() so any/all post download
VLAN mode configuration code can be put in one place.

This also makes it clearer that all PFs will call this new function.
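
A rough per-PF ordering sketch (every name except
ice_post_pkg_dwnld_vlan_mode_cfg() is a hypothetical placeholder, not a
driver function):

/* hypothetical per-PF init fragment */
if (pf_owns_ddp_download(hw))		/* placeholder check */
	download_ddp_package(hw);	/* placeholder, runs under the global cfg lock */

/* all PFs, downloader or not, apply the post-download VLAN mode config */
ice_post_pkg_dwnld_vlan_mode_cfg(hw);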

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |  5 +----
 drivers/net/ice/base/ice_vlan_mode.c | 23 ++++++++++++++++++++++-
 drivers/net/ice/base/ice_vlan_mode.h |  2 +-
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index a92c2b8494..e04b863de3 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1164,10 +1164,7 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 	status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
 				    LE32_TO_CPU(ice_buf_tbl->buf_count));
 
-	ice_cache_vlan_mode(hw);
-
-	if (ice_is_dvm_ena(hw))
-		ice_change_proto_id_to_dvm();
+	ice_post_pkg_dwnld_vlan_mode_cfg(hw);
 
 	return status;
 }
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 4a749cb9f1..4340189355 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -125,7 +125,7 @@ bool ice_is_dvm_ena(struct ice_hw *hw)
  * configuration lock has been released because all ports on a device need to
  * cache the VLAN mode.
  */
-void ice_cache_vlan_mode(struct ice_hw *hw)
+static void ice_cache_vlan_mode(struct ice_hw *hw)
 {
 	hw->dvm_ena = ice_aq_is_dvm_ena(hw) ? true : false;
 }
@@ -375,3 +375,24 @@ enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
 
 	return ice_set_svm(hw);
 }
+
+/**
+ * ice_post_pkg_dwnld_vlan_mode_cfg - configure VLAN mode after DDP download
+ * @hw: pointer to the HW structure
+ *
+ * This function is meant to configure any VLAN mode specific functionality
+ * after the global configuration lock has been released and the DDP has been
+ * downloaded.
+ *
+ * Since only one PF downloads the DDP and configures the VLAN mode there needs
+ * to be a way to configure the other PFs after the DDP has been downloaded and
+ * the global configuration lock has been released. All such code should go in
+ * this function.
+ */
+void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw)
+{
+	ice_cache_vlan_mode(hw);
+
+	if (ice_is_dvm_ena(hw))
+		ice_change_proto_id_to_dvm();
+}
diff --git a/drivers/net/ice/base/ice_vlan_mode.h b/drivers/net/ice/base/ice_vlan_mode.h
index 134bd41635..c22d6c2a01 100644
--- a/drivers/net/ice/base/ice_vlan_mode.h
+++ b/drivers/net/ice/base/ice_vlan_mode.h
@@ -10,7 +10,7 @@
 struct ice_hw;
 
 bool ice_is_dvm_ena(struct ice_hw *hw);
-void ice_cache_vlan_mode(struct ice_hw *hw);
 enum ice_status ice_set_vlan_mode(struct ice_hw *hw);
+void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw);
 
 #endif /* _ICE_VLAN_MODE_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 12/22] net/ice/base: log if DDP/FW do not support QinQ
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (10 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 11/22] net/ice/base: refactor post DDP download VLAN mode config Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 13/22] net/ice/base: add ethertype offset for QinQ dummy packet Wenjun Wu
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit daa2ca4217ec6bf4fafb84f78985014b20cf5444 ]

Currently, if the driver supports QinQ, there is no message indicating
that the DDP and/or FW don't support QinQ. Add functionality that prints
a message if the DDP and/or FW don't support QinQ when the driver
attempts to configure DVM. This will make it more obvious to users in
the field that they need to update their DDP and/or FW.

This required a small refactor so some of the existing code could be
shared and used by this new print functionality.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_vlan_mode.c | 77 +++++++++++++++++++++++-----
 1 file changed, 65 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 4340189355..775ebb2d6e 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -131,18 +131,11 @@ static void ice_cache_vlan_mode(struct ice_hw *hw)
 }
 
 /**
- * ice_is_dvm_supported - check if Double VLAN Mode is supported
- * @hw: pointer to the hardware structure
- *
- * Returns true if Double VLAN Mode (DVM) is supported and false if only Single
- * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and
- * firmware must support it, otherwise only SVM is supported. This function
- * should only be called while the global config lock is held and after the
- * package has been successfully downloaded.
+ * ice_pkg_supports_dvm - find out if DDP supports DVM
+ * @hw: pointer to the HW structure
  */
-static bool ice_is_dvm_supported(struct ice_hw *hw)
+static bool ice_pkg_supports_dvm(struct ice_hw *hw)
 {
-	struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 };
 	enum ice_status status;
 	bool pkg_supports_dvm;
 
@@ -153,8 +146,17 @@ static bool ice_is_dvm_supported(struct ice_hw *hw)
 		return false;
 	}
 
-	if (!pkg_supports_dvm)
-		return false;
+	return pkg_supports_dvm;
+}
+
+/**
+ * ice_fw_supports_dvm - find out if FW supports DVM
+ * @hw: pointer to the HW structure
+ */
+static bool ice_fw_supports_dvm(struct ice_hw *hw)
+{
+	struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 };
+	enum ice_status status;
 
 	/* If firmware returns success, then it supports DVM, else it only
 	 * supports SVM
@@ -169,6 +171,31 @@ static bool ice_is_dvm_supported(struct ice_hw *hw)
 	return true;
 }
 
+/**
+ * ice_is_dvm_supported - check if Double VLAN Mode is supported
+ * @hw: pointer to the hardware structure
+ *
+ * Returns true if Double VLAN Mode (DVM) is supported and false if only Single
+ * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and
+ * firmware must support it, otherwise only SVM is supported. This function
+ * should only be called while the global config lock is held and after the
+ * package has been successfully downloaded.
+ */
+static bool ice_is_dvm_supported(struct ice_hw *hw)
+{
+	if (!ice_pkg_supports_dvm(hw)) {
+		ice_debug(hw, ICE_DBG_PKG, "DDP doesn't support DVM\n");
+		return false;
+	}
+
+	if (!ice_fw_supports_dvm(hw)) {
+		ice_debug(hw, ICE_DBG_PKG, "FW doesn't support DVM\n");
+		return false;
+	}
+
+	return true;
+}
+
 #define ICE_EXTERNAL_VLAN_ID_FV_IDX			11
 #define ICE_SW_LKUP_VLAN_LOC_LKUP_IDX			1
 #define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX		2
@@ -376,6 +403,30 @@ enum ice_status ice_set_vlan_mode(struct ice_hw *hw)
 	return ice_set_svm(hw);
 }
 
+/**
+ * ice_print_dvm_not_supported - print if DDP and/or FW doesn't support DVM
+ * @hw: pointer to the HW structure
+ *
+ * The purpose of this function is to print that QinQ is not supported due to
+ * incompatibility from the DDP and/or FW. This will give a hint to the user to
+ * update one and/or both components if they expect QinQ functionality.
+ */
+static void ice_print_dvm_not_supported(struct ice_hw *hw)
+{
+	bool pkg_supports_dvm = ice_pkg_supports_dvm(hw);
+	bool fw_supports_dvm = ice_fw_supports_dvm(hw);
+
+	if (!fw_supports_dvm && !pkg_supports_dvm)
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
+			     "Update your DDP package and NVM to versions that support QinQ.\n");
+	else if (!pkg_supports_dvm)
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
+			     "Update your DDP package to a version that supports QinQ.\n");
+	else if (!fw_supports_dvm)
+		ice_info(hw, "QinQ functionality cannot be enabled on this device. "
+			     "Update your NVM to a version that supports QinQ.\n");
+}
+
 /**
  * ice_post_pkg_dwnld_vlan_mode_cfg - configure VLAN mode after DDP download
  * @hw: pointer to the HW structure
@@ -395,4 +446,6 @@ void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw)
 
 	if (ice_is_dvm_ena(hw))
 		ice_change_proto_id_to_dvm();
+	else
+		ice_print_dvm_not_supported(hw);
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 13/22] net/ice/base: add ethertype offset for QinQ dummy packet
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (11 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 12/22] net/ice/base: log if DDP/FW do not support QinQ Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 14/22] net/ice/base: add inner VLAN protocol type for QinQ filter Wenjun Wu
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Yuying Zhang <yuying.zhang@intel.com>

[ upstream commit 0c0735ff4fc15e227631cbfe3fd31e33e42b34fc ]

Add the ethertype offset for the QinQ switch rule dummy packets so that
the corresponding field can be matched.
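
For reference, offset 12 is simply where the EtherType sits in a standard
Ethernet header (a plain layout sketch, not driver code):

    bytes  0..5    destination MAC
    bytes  6..11   source MAC
    bytes 12..13   outer EtherType (e.g. 0x9100/0x88a8 for the QinQ outer tag)

so placing ICE_ETYPE_OL at offset 12 lets a switch rule match on that field.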

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 55bfc3e6c5..887b35ac47 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1205,6 +1205,7 @@ static const u8 dummy_ipv6_l2tpv3_pkt[] = {
 
 static const struct ice_dummy_pkt_offsets dummy_qinq_ipv4_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
+	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
 	{ ICE_VLAN_OFOS,	18 },
 	{ ICE_IPV4_OFOS,	22 },
@@ -1215,7 +1216,8 @@ static const u8 dummy_qinq_ipv4_pkt[] = {
 	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
-	0x91, 0x00,
+
+	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
 	0x00, 0x00, 0x08, 0x00, /* ICE_VLAN_OFOS 18 */
@@ -1234,6 +1236,7 @@ static const u8 dummy_qinq_ipv4_pkt[] = {
 
 static const struct ice_dummy_pkt_offsets dummy_qinq_ipv6_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
+	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
 	{ ICE_VLAN_OFOS,	18 },
 	{ ICE_IPV6_OFOS,	22 },
@@ -1244,7 +1247,8 @@ static const u8 dummy_qinq_ipv6_pkt[] = {
 	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
-	0x91, 0x00,
+
+	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
 	0x00, 0x00, 0x86, 0xDD, /* ICE_VLAN_OFOS 18 */
@@ -1271,6 +1275,7 @@ static const u8 dummy_qinq_ipv6_pkt[] = {
 
 static const struct ice_dummy_pkt_offsets dummy_qinq_pppoe_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
+	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
 	{ ICE_VLAN_OFOS,	18 },
 	{ ICE_PPPOE,		22 },
@@ -1280,6 +1285,7 @@ static const struct ice_dummy_pkt_offsets dummy_qinq_pppoe_packet_offsets[] = {
 static const
 struct ice_dummy_pkt_offsets dummy_qinq_pppoe_ipv4_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
+	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
 	{ ICE_VLAN_OFOS,	18 },
 	{ ICE_PPPOE,		22 },
@@ -1291,7 +1297,8 @@ static const u8 dummy_qinq_pppoe_ipv4_pkt[] = {
 	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
-	0x91, 0x00,
+
+	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
 	0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_OFOS 18 */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 14/22] net/ice/base: add inner VLAN protocol type for QinQ filter
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (12 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 13/22] net/ice/base: add ethertype offset for QinQ dummy packet Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 15/22] net/ice/base: fix QinQ PPPoE dummy packet selection Wenjun Wu
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 0475c7770502cb4166b2577df3ff446af9d85515 ]

The VLAN protocol type 'ICE_VLAN_OFOS' has been changed to map the
hardware VLAN protocol ID to 'ICE_VLAN_OF_HW (16)' when in Double VLAN
mode, and to 'ICE_VLAN_OL_HW (17)' when in Single VLAN mode.

As a result, 'ICE_VLAN_OFOS' can't be paired with 'ICE_VLAN_EX', which is
the outer VLAN hardware protocol ID 'ICE_VLAN_OF_HW (16)', to build a QinQ
VLAN pattern.

Introduce the new inner VLAN protocol type 'ICE_VLAN_IN', which maps to the
inner VLAN hardware protocol ID 'ICE_VLAN_OL_HW (17)'.

Now, for a QinQ VLAN pattern, the protocols 'ICE_VLAN_EX' and 'ICE_VLAN_IN'
should be used to set the related protocol header fields such as the VLAN ID.
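
As a rough sketch of how the two types are now meant to be used together
(mirroring the switch filter changes later in this series; 'outer_tci' and
'inner_tci' are placeholder values, 'list'/'t' are the usual lookup array
and index):

    list[t].type = ICE_VLAN_EX;             /* outer tag -> ICE_VLAN_OF_HW */
    list[t].h_u.vlan_hdr.vlan = outer_tci;
    list[t].m_u.vlan_hdr.vlan = 0xFFFF;
    t++;
    list[t].type = ICE_VLAN_IN;             /* inner tag -> ICE_VLAN_OL_HW */
    list[t].h_u.vlan_hdr.vlan = inner_tci;
    list[t].m_u.vlan_hdr.vlan = 0xFFFF;
    t++;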

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h |  1 +
 drivers/net/ice/base/ice_switch.c        | 23 +++++++++++++----------
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index e8caefd8f9..5dc795868e 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -53,6 +53,7 @@ enum ice_protocol_type {
 	ICE_NAT_T,
 	ICE_GTP_NO_PAY,
 	ICE_VLAN_EX,
+	ICE_VLAN_IN,
 	ICE_PROTOCOL_LAST
 };
 
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 887b35ac47..9d8dc88844 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1207,7 +1207,7 @@ static const struct ice_dummy_pkt_offsets dummy_qinq_ipv4_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
-	{ ICE_VLAN_OFOS,	18 },
+	{ ICE_VLAN_IN,		18 },
 	{ ICE_IPV4_OFOS,	22 },
 	{ ICE_PROTOCOL_LAST,	0 },
 };
@@ -1220,7 +1220,7 @@ static const u8 dummy_qinq_ipv4_pkt[] = {
 	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
-	0x00, 0x00, 0x08, 0x00, /* ICE_VLAN_OFOS 18 */
+	0x00, 0x00, 0x08, 0x00, /* ICE_VLAN_IN 18 */
 
 	0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_OFOS 22 */
 	0x00, 0x01, 0x00, 0x00,
@@ -1238,7 +1238,7 @@ static const struct ice_dummy_pkt_offsets dummy_qinq_ipv6_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
-	{ ICE_VLAN_OFOS,	18 },
+	{ ICE_VLAN_IN,		18 },
 	{ ICE_IPV6_OFOS,	22 },
 	{ ICE_PROTOCOL_LAST,	0 },
 };
@@ -1251,7 +1251,7 @@ static const u8 dummy_qinq_ipv6_pkt[] = {
 	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
-	0x00, 0x00, 0x86, 0xDD, /* ICE_VLAN_OFOS 18 */
+	0x00, 0x00, 0x86, 0xDD, /* ICE_VLAN_IN 18 */
 
 	0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 22 */
 	0x00, 0x10, 0x11, 0x00, /* Next header UDP */
@@ -1277,7 +1277,7 @@ static const struct ice_dummy_pkt_offsets dummy_qinq_pppoe_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
-	{ ICE_VLAN_OFOS,	18 },
+	{ ICE_VLAN_IN,		18 },
 	{ ICE_PPPOE,		22 },
 	{ ICE_PROTOCOL_LAST,	0 },
 };
@@ -1287,7 +1287,7 @@ struct ice_dummy_pkt_offsets dummy_qinq_pppoe_ipv4_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_ETYPE_OL,         12 },
 	{ ICE_VLAN_EX,		14 },
-	{ ICE_VLAN_OFOS,	18 },
+	{ ICE_VLAN_IN,		18 },
 	{ ICE_PPPOE,		22 },
 	{ ICE_IPV4_OFOS,	30 },
 	{ ICE_PROTOCOL_LAST,	0 },
@@ -1301,14 +1301,14 @@ static const u8 dummy_qinq_pppoe_ipv4_pkt[] = {
 	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
-	0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_OFOS 18 */
+	0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_IN 18 */
 
 	0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 22 */
 	0x00, 0x16,
 
 	0x00, 0x21,		/* PPP Link Layer 28 */
 
-	0x45, 0x00, 0x00, 0x14, /* ICE_IPV4_IL 30 */
+	0x45, 0x00, 0x00, 0x14, /* ICE_IPV4_OFOS 30 */
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
@@ -1322,7 +1322,7 @@ struct ice_dummy_pkt_offsets dummy_qinq_pppoe_packet_ipv6_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_ETYPE_OL,		12 },
 	{ ICE_VLAN_EX,		14},
-	{ ICE_VLAN_OFOS,	18 },
+	{ ICE_VLAN_IN,		18 },
 	{ ICE_PPPOE,		22 },
 	{ ICE_IPV6_OFOS,	30 },
 	{ ICE_PROTOCOL_LAST,	0 },
@@ -1336,7 +1336,7 @@ static const u8 dummy_qinq_pppoe_ipv6_packet[] = {
 	0x91, 0x00,		/* ICE_ETYPE_OL 12 */
 
 	0x00, 0x00, 0x81, 0x00, /* ICE_VLAN_EX 14 */
-	0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_OFOS 18 */
+	0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_IN 18 */
 
 	0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 22 */
 	0x00, 0x2a,
@@ -6126,6 +6126,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = {
 	{ ICE_NAT_T,		{ 8, 10, 12, 14 } },
 	{ ICE_GTP_NO_PAY,	{ 8, 10, 12, 14 } },
 	{ ICE_VLAN_EX,		{ 0, 2 } },
+	{ ICE_VLAN_IN,		{ 0, 2 } },
 };
 
 /* The following table describes preferred grouping of recipes.
@@ -6160,6 +6161,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
 	{ ICE_NAT_T,		ICE_UDP_ILOS_HW },
 	{ ICE_GTP_NO_PAY,	ICE_UDP_ILOS_HW },
 	{ ICE_VLAN_EX,		ICE_VLAN_OF_HW },
+	{ ICE_VLAN_IN,		ICE_VLAN_OL_HW },
 };
 
 /**
@@ -7728,6 +7730,7 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 			break;
 		case ICE_VLAN_OFOS:
 		case ICE_VLAN_EX:
+		case ICE_VLAN_IN:
 			len = sizeof(struct ice_vlan_hdr);
 			break;
 		case ICE_IPV4_OFOS:
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 15/22] net/ice/base: fix QinQ PPPoE dummy packet selection
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (13 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 14/22] net/ice/base: add inner VLAN protocol type for QinQ filter Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 16/22] net/ice: fix VLAN strip for double VLAN Wenjun Wu
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 03697c24b7cafbd6c536204ba6470698fcf8c5e0 ]

The dummy packet should be the QinQ PPPoE IPv6 one when the PPP protocol is IPv6.

Fixes: bb3386f348dd ("net/ice: enable QinQ filter for switch")
Cc: stable@dpdk.org

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 9d8dc88844..d5576267b8 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -7403,6 +7403,11 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		*pkt_len = sizeof(dummy_qinq_pppoe_ipv4_pkt);
 		*offsets = dummy_qinq_pppoe_ipv4_packet_offsets;
 		return;
+	} else if (tun_type == ICE_SW_TUN_PPPOE_QINQ && ipv6) {
+		*pkt = dummy_qinq_pppoe_ipv6_packet;
+		*pkt_len = sizeof(dummy_qinq_pppoe_ipv6_packet);
+		*offsets = dummy_qinq_pppoe_packet_offsets;
+		return;
 	} else if (tun_type == ICE_SW_TUN_PPPOE_QINQ ||
 			tun_type == ICE_SW_TUN_PPPOE_PAY_QINQ) {
 		*pkt = dummy_qinq_pppoe_ipv4_pkt;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 16/22] net/ice: fix VLAN strip for double VLAN
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (14 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 15/22] net/ice/base: fix QinQ PPPoE dummy packet selection Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 17/22] net/ice: fix VLAN 0 adding based on VLAN mode Wenjun Wu
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Haiyue Wang <haiyue.wang@intel.com>

[ upstream commit 8ac4307504bed19ce68b39bc2703975ee0b9ab81 ]

VLAN stripping was failing for double VLAN because of the hardware
configuration, resulting in the mbuf not carrying the vlan_tci
information.

Adjust the strip setting according to the current VLAN mode to fix
VLAN stripping.
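
The application-facing way of requesting stripping is unchanged; a minimal
sketch using the generic ethdev API ('port_id' is a placeholder), which the
driver now translates into outer or inner stripping depending on the VLAN
mode:

    int vlan_flags = rte_eth_dev_get_vlan_offload(port_id);

    vlan_flags |= ETH_VLAN_STRIP_OFFLOAD;
    rte_eth_dev_set_vlan_offload(port_id, vlan_flags);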

Fixes: 14e7a4b37b4f ("net/ice/base: support configuring device in double VLAN mode")

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 297 +++++++++++++++--------------------
 1 file changed, 131 insertions(+), 166 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9ce0280726..1d7e5ffbc4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -70,8 +70,6 @@ static struct proto_xtr_ol_flag ice_proto_xtr_ol_flag_params[] = {
 		.ol_flag = &rte_net_ice_dynflag_proto_xtr_ip_offset_mask },
 };
 
-#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
-
 #define ICE_OS_DEFAULT_PKG_NAME		"ICE OS Default Package"
 #define ICE_COMMS_PKG_NAME			"ICE COMMS Package"
 #define ICE_MAX_RES_DESC_NUM        1024
@@ -1119,127 +1117,6 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	return ret;
 }
 
-static int
-ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
-{
-	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
-	struct ice_vsi_ctx ctxt;
-	uint8_t qinq_flags;
-	int ret = 0;
-
-	/* Check if it has been already on or off */
-	if (vsi->info.valid_sections &
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
-		if (on) {
-			if ((vsi->info.outer_vlan_flags &
-			     ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST) ==
-			    ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST)
-				return 0; /* already on */
-		} else {
-			if (!(vsi->info.outer_vlan_flags &
-			      ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST))
-				return 0; /* already off */
-		}
-	}
-
-	if (on)
-		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST;
-	else
-		qinq_flags = 0;
-	/* clear global insertion and use per packet insertion */
-	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT);
-	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_ACCEPT_HOST);
-	vsi->info.outer_vlan_flags |= qinq_flags;
-	/* use default vlan type 0x8100 */
-	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
-	vsi->info.outer_vlan_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
-				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
-	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
-	ctxt.info.valid_sections =
-			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
-	ctxt.vsi_num = vsi->vsi_id;
-	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
-	if (ret) {
-		PMD_DRV_LOG(INFO,
-			    "Update VSI failed to %s qinq stripping",
-			    on ? "enable" : "disable");
-		return -EINVAL;
-	}
-
-	vsi->info.valid_sections |=
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
-
-	return ret;
-}
-
-static int
-ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
-{
-	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
-	struct ice_vsi_ctx ctxt;
-	uint8_t qinq_flags;
-	int ret = 0;
-
-	/* Check if it has been already on or off */
-	if (vsi->info.valid_sections &
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
-		if (on) {
-			if ((vsi->info.outer_vlan_flags &
-			     ICE_AQ_VSI_OUTER_VLAN_EMODE_M) ==
-			    ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW)
-				return 0; /* already on */
-		} else {
-			if ((vsi->info.outer_vlan_flags &
-			     ICE_AQ_VSI_OUTER_VLAN_EMODE_M) ==
-			    ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH)
-				return 0; /* already off */
-		}
-	}
-
-	if (on)
-		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW;
-	else
-		qinq_flags = ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH;
-	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M);
-	vsi->info.outer_vlan_flags |= qinq_flags;
-	/* use default vlan type 0x8100 */
-	vsi->info.outer_vlan_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
-	vsi->info.outer_vlan_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
-				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
-	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
-	ctxt.info.valid_sections =
-			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
-	ctxt.vsi_num = vsi->vsi_id;
-	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
-	if (ret) {
-		PMD_DRV_LOG(INFO,
-			    "Update VSI failed to %s qinq stripping",
-			    on ? "enable" : "disable");
-		return -EINVAL;
-	}
-
-	vsi->info.valid_sections |=
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
-
-	return ret;
-}
-
-static int
-ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
-{
-	int ret;
-
-	ret = ice_vsi_config_qinq_stripping(vsi, on);
-	if (ret)
-		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
-
-	ret = ice_vsi_config_qinq_insertion(vsi, on);
-	if (ret)
-		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
-
-	return ret;
-}
-
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -2226,9 +2103,6 @@ ice_dev_init(struct rte_eth_dev *dev)
 
 	vsi = pf->main_vsi;
 
-	/* Disable double vlan by default */
-	ice_vsi_config_double_vlan(vsi, false);
-
 	ret = ice_aq_stop_lldp(hw, true, false, NULL);
 	if (ret != ICE_SUCCESS)
 		PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
@@ -4112,49 +3986,147 @@ ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
 	return 0;
 }
 
+/* Manage VLAN stripping for the VSI for Rx */
 static int
-ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
 {
 	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
 	struct ice_vsi_ctx ctxt;
-	uint8_t vlan_flags;
-	int ret = 0;
+	enum ice_status status;
+	int err = 0;
 
-	/* Check if it has been already on or off */
-	if (vsi->info.valid_sections &
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
-		if (on) {
-			if ((vsi->info.inner_vlan_flags &
-			     ICE_AQ_VSI_INNER_VLAN_EMODE_M) ==
-			    ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH)
-				return 0; /* already on */
-		} else {
-			if ((vsi->info.inner_vlan_flags &
-			     ICE_AQ_VSI_INNER_VLAN_EMODE_M) ==
-			    ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING)
-				return 0; /* already off */
-		}
-	}
+	/* do not allow modifying VLAN stripping when a port VLAN is configured
+	 * on this VSI
+	 */
+	if (vsi->info.port_based_inner_vlan)
+		return 0;
 
-	if (on)
-		vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH;
+	memset(&ctxt, 0, sizeof(ctxt));
+
+	if (ena)
+		/* Strip VLAN tag from Rx packet and put it in the desc */
+		ctxt.info.inner_vlan_flags =
+					ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH;
 	else
-		vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
-	vsi->info.inner_vlan_flags &= ~(ICE_AQ_VSI_INNER_VLAN_EMODE_M);
-	vsi->info.inner_vlan_flags |= vlan_flags;
-	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+		/* Disable stripping. Leave tag in packet */
+		ctxt.info.inner_vlan_flags =
+					ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
+
+	/* Allow all packets untagged/tagged */
+	ctxt.info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL;
+
+	ctxt.info.valid_sections = rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	status = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan stripping",
+			    ena ? "enable" : "disable");
+		err = -EIO;
+	} else {
+		vsi->info.inner_vlan_flags = ctxt.info.inner_vlan_flags;
+	}
+
+	return err;
+}
+
+static int
+ice_vsi_ena_inner_stripping(struct ice_vsi *vsi)
+{
+	return ice_vsi_manage_vlan_stripping(vsi, true);
+}
+
+static int
+ice_vsi_dis_inner_stripping(struct ice_vsi *vsi)
+{
+	return ice_vsi_manage_vlan_stripping(vsi, false);
+}
+
+static int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	enum ice_status status;
+	int err = 0;
+
+	/* do not allow modifying VLAN stripping when a port VLAN is configured
+	 * on this VSI
+	 */
+	if (vsi->info.port_based_outer_vlan)
+		return 0;
+
+	memset(&ctxt, 0, sizeof(ctxt));
+
 	ctxt.info.valid_sections =
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
-	ctxt.vsi_num = vsi->vsi_id;
-	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
-	if (ret) {
-		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
-			    on ? "enable" : "disable");
-		return -EINVAL;
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	/* clear current outer VLAN strip settings */
+	ctxt.info.outer_vlan_flags = vsi->info.outer_vlan_flags &
+		~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	ctxt.info.outer_vlan_flags |=
+		(ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH <<
+		 ICE_AQ_VSI_OUTER_VLAN_EMODE_S) |
+		(ICE_AQ_VSI_OUTER_TAG_VLAN_8100 <<
+		 ICE_AQ_VSI_OUTER_TAG_TYPE_S);
+
+	status = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Update VSI failed to enable outer VLAN stripping");
+		err = -EIO;
+	} else {
+		vsi->info.outer_vlan_flags = ctxt.info.outer_vlan_flags;
 	}
 
-	vsi->info.valid_sections |=
-		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	return err;
+}
+
+static int
+ice_vsi_dis_outer_stripping(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	enum ice_status status;
+	int err = 0;
+
+	if (vsi->info.port_based_outer_vlan)
+		return 0;
+
+	memset(&ctxt, 0, sizeof(ctxt));
+
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	/* clear current outer VLAN strip settings */
+	ctxt.info.outer_vlan_flags = vsi->info.outer_vlan_flags &
+		~ICE_AQ_VSI_OUTER_VLAN_EMODE_M;
+	ctxt.info.outer_vlan_flags |= ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING <<
+		ICE_AQ_VSI_OUTER_VLAN_EMODE_S;
+
+	status = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Update VSI failed to disable outer VLAN stripping");
+		err = -EIO;
+	} else {
+		vsi->info.outer_vlan_flags = ctxt.info.outer_vlan_flags;
+	}
+
+	return err;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool ena)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (ice_is_dvm_ena(hw)) {
+		if (ena)
+			ret = ice_vsi_ena_outer_stripping(vsi);
+		else
+			ret = ice_vsi_dis_outer_stripping(vsi);
+	} else {
+		if (ena)
+			ret = ice_vsi_ena_inner_stripping(vsi);
+		else
+			ret = ice_vsi_dis_inner_stripping(vsi);
+	}
 
 	return ret;
 }
@@ -4181,13 +4153,6 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			ice_vsi_config_vlan_stripping(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-			ice_vsi_config_double_vlan(vsi, true);
-		else
-			ice_vsi_config_double_vlan(vsi, false);
-	}
-
 	return 0;
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 17/22] net/ice: fix VLAN 0 adding based on VLAN mode
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (15 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 16/22] net/ice: fix VLAN strip for double VLAN Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 18/22] net/ice: enable QinQ filter for switch Wenjun Wu
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Haiyue Wang <haiyue.wang@intel.com>

[ upstream commit 295b34f55b001bceb27d9177b55326ccda49351b ]

In Single VLAN Mode, single VLAN filters via ICE_SW_LKUP_VLAN are based
on the inner VLAN ID, so the VLAN TPID (i.e. 0x8100 or 0x88a8) doesn't
matter.

In Double VLAN Mode, outer/single VLAN filters via ICE_SW_LKUP_VLAN are
based on the outer/single VLAN ID + VLAN TPID.

For both modes, add a VLAN 0 + no VLAN TPID filter to handle untagged
traffic when VLAN pruning is enabled. This also handles VLAN 0
priority-tagged traffic in Single VLAN Mode, since the VLAN TPID is not part of
filtering.

If Double VLAN Mode is enabled then an explicit VLAN 0 + VLAN TPID filter
needs to be added to allow VLAN 0 priority tagged traffic in DVM, since
the VLAN TPID is part of filtering.
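
In other words, with DVM enabled the driver ends up installing two VLAN 0
filters (a sketch built from the helpers added in this patch):

    struct ice_vlan untagged = ICE_VLAN(0, 0);                    /* no TPID */
    struct ice_vlan prio_tag = ICE_VLAN(RTE_ETHER_TYPE_VLAN, 0);  /* 0x8100 */

    ice_add_vlan_filter(vsi, &untagged);
    ice_add_vlan_filter(vsi, &prio_tag);

In SVM both filters would be identical, so only the first one is added.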

Fixes: 14e7a4b37b4f ("net/ice/base: support configuring device in double VLAN mode")

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 136 +++++++++++++++++++++++++++++------
 drivers/net/ice/ice_ethdev.h |  10 ++-
 2 files changed, 123 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 1d7e5ffbc4..60793f99c6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -944,12 +944,13 @@ ice_remove_mac_filter(struct ice_vsi *vsi, struct rte_ether_addr *mac_addr)
 
 /* Find out specific VLAN filter */
 static struct ice_vlan_filter *
-ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+ice_find_vlan_filter(struct ice_vsi *vsi, struct ice_vlan *vlan)
 {
 	struct ice_vlan_filter *f;
 
 	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
-		if (vlan_id == f->vlan_info.vlan_id)
+		if (vlan->tpid == f->vlan_info.vlan.tpid &&
+		    vlan->vid == f->vlan_info.vlan.vid)
 			return f;
 	}
 
@@ -957,7 +958,7 @@ ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 }
 
 static int
-ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+ice_add_vlan_filter(struct ice_vsi *vsi, struct ice_vlan *vlan)
 {
 	struct ice_fltr_list_entry *v_list_itr = NULL;
 	struct ice_vlan_filter *f;
@@ -965,13 +966,13 @@ ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 	struct ice_hw *hw;
 	int ret = 0;
 
-	if (!vsi || vlan_id > RTE_ETHER_MAX_VLAN_ID)
+	if (!vsi || vlan->vid > RTE_ETHER_MAX_VLAN_ID)
 		return -EINVAL;
 
 	hw = ICE_VSI_TO_HW(vsi);
 
 	/* If it's added and configured, return. */
-	f = ice_find_vlan_filter(vsi, vlan_id);
+	f = ice_find_vlan_filter(vsi, vlan);
 	if (f) {
 		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
 		return 0;
@@ -988,7 +989,9 @@ ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 		ret = -ENOMEM;
 		goto DONE;
 	}
-	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan->vid;
+	v_list_itr->fltr_info.l_data.vlan.tpid = vlan->tpid;
+	v_list_itr->fltr_info.l_data.vlan.tpid_valid = true;
 	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
 	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
 	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
@@ -1012,7 +1015,8 @@ ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 		ret = -ENOMEM;
 		goto DONE;
 	}
-	f->vlan_info.vlan_id = vlan_id;
+	f->vlan_info.vlan.tpid = vlan->tpid;
+	f->vlan_info.vlan.vid = vlan->vid;
 	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
 	vsi->vlan_num++;
 
@@ -1024,7 +1028,7 @@ ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 }
 
 static int
-ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+ice_remove_vlan_filter(struct ice_vsi *vsi, struct ice_vlan *vlan)
 {
 	struct ice_fltr_list_entry *v_list_itr = NULL;
 	struct ice_vlan_filter *f;
@@ -1032,17 +1036,13 @@ ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 	struct ice_hw *hw;
 	int ret = 0;
 
-	/**
-	 * Vlan 0 is the generic filter for untagged packets
-	 * and can't be removed.
-	 */
-	if (!vsi || vlan_id == 0 || vlan_id > RTE_ETHER_MAX_VLAN_ID)
+	if (!vsi || vlan->vid > RTE_ETHER_MAX_VLAN_ID)
 		return -EINVAL;
 
 	hw = ICE_VSI_TO_HW(vsi);
 
 	/* Can't find it, return an error */
-	f = ice_find_vlan_filter(vsi, vlan_id);
+	f = ice_find_vlan_filter(vsi, vlan);
 	if (!f)
 		return -EINVAL;
 
@@ -1055,7 +1055,9 @@ ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
 		goto DONE;
 	}
 
-	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan->vid;
+	v_list_itr->fltr_info.l_data.vlan.tpid = vlan->tpid;
+	v_list_itr->fltr_info.l_data.vlan.tpid_valid = true;
 	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
 	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
 	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
@@ -1106,7 +1108,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 		return 0;
 
 	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
-		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
 			goto DONE;
@@ -1463,6 +1465,16 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 		vsi_ctx.info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
 		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
 					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		if (ice_is_dvm_ena(hw)) {
+			vsi_ctx.info.outer_vlan_flags =
+				(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL <<
+				 ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) &
+				ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M;
+			vsi_ctx.info.outer_vlan_flags |=
+				(ICE_AQ_VSI_OUTER_TAG_VLAN_8100 <<
+				 ICE_AQ_VSI_OUTER_TAG_TYPE_S) &
+				ICE_AQ_VSI_OUTER_TAG_TYPE_M;
+		}
 
 		/* FDIR */
 		cfg = ICE_AQ_VSI_PROP_SECURITY_VALID |
@@ -1696,10 +1708,11 @@ ice_load_pkg_type(struct ice_hw *hw)
 	else
 		package_type = ICE_PKG_TYPE_UNKNOWN;
 
-	PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s",
+	PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)",
 		hw->active_pkg_ver.major, hw->active_pkg_ver.minor,
 		hw->active_pkg_ver.update, hw->active_pkg_ver.draft,
-		hw->active_pkg_name);
+		hw->active_pkg_name,
+		ice_is_dvm_ena(hw) ? "double" : "single");
 
 	return package_type;
 }
@@ -3921,19 +3934,27 @@ static int
 ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vlan vlan = ICE_VLAN(RTE_ETHER_TYPE_VLAN, vlan_id);
 	struct ice_vsi *vsi = pf->main_vsi;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
+	/**
+	 * Vlan 0 is the generic filter for untagged packets
+	 * and can't be removed or added by user.
+	 */
+	if (vlan_id == 0)
+		return 0;
+
 	if (on) {
-		ret = ice_add_vlan_filter(vsi, vlan_id);
+		ret = ice_add_vlan_filter(vsi, &vlan);
 		if (ret < 0) {
 			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
 			return -EINVAL;
 		}
 	} else {
-		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		ret = ice_remove_vlan_filter(vsi, &vlan);
 		if (ret < 0) {
 			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
 			return -EINVAL;
@@ -3943,6 +3964,77 @@ ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	return 0;
 }
 
+/* In Single VLAN Mode (SVM), single VLAN filters via ICE_SW_LKUP_VLAN are
+ * based on the inner VLAN ID, so the VLAN TPID (i.e. 0x8100 or 0x888a8)
+ * doesn't matter. In Double VLAN Mode (DVM), outer/single VLAN filters via
+ * ICE_SW_LKUP_VLAN are based on the outer/single VLAN ID + VLAN TPID.
+ *
+ * For both modes add a VLAN 0 + no VLAN TPID filter to handle untagged traffic
+ * when VLAN pruning is enabled. Also, this handles VLAN 0 priority tagged
+ * traffic in SVM, since the VLAN TPID isn't part of filtering.
+ *
+ * If DVM is enabled then an explicit VLAN 0 + VLAN TPID filter needs to be
+ * added to allow VLAN 0 priority tagged traffic in DVM, since the VLAN TPID is
+ * part of filtering.
+ */
+static int
+ice_vsi_add_vlan_zero(struct ice_vsi *vsi)
+{
+	struct ice_vlan vlan;
+	int err;
+
+	vlan = ICE_VLAN(0, 0);
+	err = ice_add_vlan_filter(vsi, &vlan);
+	if (err) {
+		PMD_DRV_LOG(DEBUG, "Failed to add VLAN ID 0");
+		return err;
+	}
+
+	/* in SVM both VLAN 0 filters are identical */
+	if (!ice_is_dvm_ena(&vsi->adapter->hw))
+		return 0;
+
+	vlan = ICE_VLAN(RTE_ETHER_TYPE_VLAN, 0);
+	err = ice_add_vlan_filter(vsi, &vlan);
+	if (err) {
+		PMD_DRV_LOG(DEBUG, "Failed to add VLAN ID 0 in double VLAN mode");
+		return err;
+	}
+
+	return 0;
+}
+
+/*
+ * Delete the VLAN 0 filters in the same manner that they were added in
+ * ice_vsi_add_vlan_zero.
+ */
+static int
+ice_vsi_del_vlan_zero(struct ice_vsi *vsi)
+{
+	struct ice_vlan vlan;
+	int err;
+
+	vlan = ICE_VLAN(0, 0);
+	err = ice_remove_vlan_filter(vsi, &vlan);
+	if (err) {
+		PMD_DRV_LOG(DEBUG, "Failed to remove VLAN ID 0");
+		return err;
+	}
+
+	/* in SVM both VLAN 0 filters are identical */
+	if (!ice_is_dvm_ena(&vsi->adapter->hw))
+		return 0;
+
+	vlan = ICE_VLAN(RTE_ETHER_TYPE_VLAN, 0);
+	err = ice_remove_vlan_filter(vsi, &vlan);
+	if (err) {
+		PMD_DRV_LOG(DEBUG, "Failed to remove VLAN ID 0 in double VLAN mode");
+		return err;
+	}
+
+	return 0;
+}
+
 /* Configure vlan filter on or off */
 static int
 ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
@@ -3979,9 +4071,9 @@ ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
 
 	/* consist with other drivers, allow untagged packet when vlan filter on */
 	if (on)
-		ret = ice_add_vlan_filter(vsi, 0);
+		ret = ice_vsi_add_vlan_zero(vsi);
 	else
-		ret = ice_remove_vlan_filter(vsi, 0);
+		ret = ice_vsi_del_vlan_zero(vsi);
 
 	return 0;
 }
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index c1b432c1ef..2710a575ef 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -167,11 +167,19 @@ struct ice_mac_filter {
 	struct ice_mac_filter_info mac_info;
 };
 
+struct ice_vlan {
+	uint16_t tpid;
+	uint16_t vid;
+};
+
+#define ICE_VLAN(tpid, vid) \
+	((struct ice_vlan){ tpid, vid })
+
 /**
  * VLAN filter structure
  */
 struct ice_vlan_filter_info {
-	uint16_t vlan_id;
+	struct ice_vlan vlan;
 };
 
 TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 18/22] net/ice: enable QinQ filter for switch
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (16 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 17/22] net/ice: fix VLAN 0 adding based on VLAN mode Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 19/22] net/ice: update QinQ switch filter handling Wenjun Wu
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Junfeng Guo <junfeng.guo@intel.com>

[ upstream commit bb3386f348ddf1a32b752ca371146e6be5c56a8b ]

Enable the double VLAN support for switch QinQ filtering.
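
For example, a double VLAN pattern can now be offloaded to the switch
filter (illustrative testpmd command; the VLAN IDs, destination address
and queue index are placeholders):

    flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 3 /
      ipv4 dst is 192.168.0.1 / end actions queue index 4 / end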

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_generic_flow.c  |   8 +++
 drivers/net/ice/ice_generic_flow.h  |   1 +
 drivers/net/ice/ice_switch_filter.c | 106 +++++++++++++++++++++++++---
 3 files changed, 104 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index bc4e0a5704..c9910a65d1 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1480,6 +1480,14 @@ enum rte_flow_item_type pattern_eth_qinq_pppoes[] = {
 	RTE_FLOW_ITEM_TYPE_PPPOES,
 	RTE_FLOW_ITEM_TYPE_END,
 };
+enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_PPPOES,
+	RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID,
+	RTE_FLOW_ITEM_TYPE_END,
+};
 enum rte_flow_item_type pattern_eth_pppoes_ipv4[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_PPPOES,
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index eb0368e280..1b2cdf7e4c 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -433,6 +433,7 @@ extern enum rte_flow_item_type pattern_eth_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_qinq_pppoes[];
+extern enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_pppoes_ipv4[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes_ipv4[];
 extern enum rte_flow_item_type pattern_eth_qinq_pppoes_ipv4[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c383cade65..7e4f041a68 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -35,11 +35,15 @@
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
 #define ICE_SW_INSET_MAC_VLAN ( \
-		ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \
-		ICE_INSET_VLAN_OUTER)
+	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \
+	ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_QINQ  ( \
+	ICE_SW_INSET_MAC_VLAN | ICE_INSET_VLAN_OUTER)
 #define ICE_SW_INSET_MAC_IPV4 ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
 	ICE_INSET_IPV4_PROTO | ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS)
+#define ICE_SW_INSET_MAC_QINQ_IPV4 ( \
+	ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV4)
 #define ICE_SW_INSET_MAC_IPV4_TCP ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
 	ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
@@ -52,6 +56,8 @@
 	ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
 	ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \
 	ICE_INSET_IPV6_NEXT_HDR)
+#define ICE_SW_INSET_MAC_QINQ_IPV6 ( \
+	ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV6)
 #define ICE_SW_INSET_MAC_IPV6_TCP ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
 	ICE_INSET_IPV6_HOP_LIMIT | ICE_INSET_IPV6_TC | \
@@ -148,6 +154,8 @@ ice_pattern_match_item ice_switch_pattern_dist_os[] = {
 			ICE_SW_INSET_ETHER, ICE_INSET_NONE},
 	{pattern_ethertype_vlan,
 			ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE},
+	{pattern_ethertype_qinq,
+			ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE},
 	{pattern_eth_arp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv4,
@@ -182,6 +190,8 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_ETHER, ICE_INSET_NONE},
 	{pattern_ethertype_vlan,
 			ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE},
+	{pattern_ethertype_qinq,
+			ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE},
 	{pattern_eth_arp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv4,
@@ -262,6 +272,18 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv6_pfcp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv4,
+			ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv6,
+			ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes,
+			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_proto,
+			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
 };
 
 static struct
@@ -304,6 +326,8 @@ ice_pattern_match_item ice_switch_pattern_perm_comms[] = {
 			ICE_SW_INSET_ETHER, ICE_INSET_NONE},
 	{pattern_ethertype_vlan,
 			ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE},
+	{pattern_ethertype_qinq,
+			ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE},
 	{pattern_eth_arp,
 		ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv4,
@@ -384,6 +408,18 @@ ice_pattern_match_item ice_switch_pattern_perm_comms[] = {
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv6_pfcp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv4,
+			ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv6,
+			ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes,
+			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_proto,
+			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
 };
 
 static int
@@ -516,6 +552,8 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
+	bool inner_vlan_valid = 0;
+	bool outer_vlan_valid = 0;
 	bool tunnel_valid = 0;
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
@@ -1062,23 +1100,40 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid VLAN item");
 				return 0;
 			}
+
+			if (!outer_vlan_valid &&
+			    (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+			     *tun_type == ICE_NON_TUN_QINQ))
+				outer_vlan_valid = 1;
+			else if (!inner_vlan_valid &&
+				 (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+				  *tun_type == ICE_NON_TUN_QINQ))
+				inner_vlan_valid = 1;
+			else if (!inner_vlan_valid)
+				inner_vlan_valid = 1;
+
 			if (vlan_spec && vlan_mask) {
-				list[t].type = ICE_VLAN_OFOS;
+				if (outer_vlan_valid && !inner_vlan_valid) {
+					list[t].type = ICE_VLAN_EX;
+					input_set |= ICE_INSET_VLAN_OUTER;
+				} else if (inner_vlan_valid) {
+					list[t].type = ICE_VLAN_OFOS;
+					input_set |= ICE_INSET_VLAN_INNER;
+				}
+
 				if (vlan_mask->tci) {
 					list[t].h_u.vlan_hdr.vlan =
 						vlan_spec->tci;
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
-					input_set |= ICE_INSET_VLAN_OUTER;
 					input_set_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
-					list[t].h_u.vlan_hdr.type =
-						vlan_spec->inner_type;
-					list[t].m_u.vlan_hdr.type =
-						vlan_mask->inner_type;
-					input_set |= ICE_INSET_ETHERTYPE;
-					input_set_byte += 2;
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item,
+						"Invalid VLAN input set.");
+					return 0;
 				}
 				t++;
 			}
@@ -1380,8 +1435,27 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (*tun_type == ICE_SW_TUN_PPPOE_PAY &&
+	    inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_PPPOE_PAY_QINQ;
+	else if (*tun_type == ICE_SW_TUN_PPPOE &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_PPPOE_QINQ;
+	else if (*tun_type == ICE_NON_TUN &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_NON_TUN_QINQ;
+	else if (*tun_type == ICE_SW_TUN_AND_NON_TUN &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ;
+
 	if (pppoe_patt_valid && !pppoe_prot_valid) {
-		if (ipv6_valid && udp_valid)
+		if (inner_vlan_valid && outer_vlan_valid && ipv4_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_QINQ;
+		else if (inner_vlan_valid && outer_vlan_valid && ipv6_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_QINQ;
+		else if (inner_vlan_valid && outer_vlan_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_QINQ;
+		else if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
 		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
@@ -1659,6 +1733,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	uint16_t lkups_num = 0;
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
+	uint16_t vlan_num = 0;
 	enum ice_sw_tunnel_type tun_type =
 			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
@@ -1674,6 +1749,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			if (eth_mask->type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
+
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+			vlan_num++;
+
 		/* reserve one more memory slot for ETH which may
 		 * consume 2 lookup items.
 		 */
@@ -1681,6 +1760,11 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			item_num++;
 	}
 
+	if (vlan_num == 2 && tun_type == ICE_SW_TUN_AND_NON_TUN)
+		tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ;
+	else if (vlan_num == 2)
+		tun_type = ICE_NON_TUN_QINQ;
+
 	list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
 	if (!list) {
 		rte_flow_error_set(error, EINVAL,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 19/22] net/ice: update QinQ switch filter handling
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (17 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 18/22] net/ice: enable QinQ filter for switch Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 20/22] net/ice/base: fix wrong ptype bitmap for IP fragment Wenjun Wu
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Haiyue Wang <haiyue.wang@intel.com>

[ upstream commit 23ea199b732bf54861aaea49e52c1089334b29ae ]

The hardware outer/inner VLAN protocol types are now mapped to the new
interface VLAN protocol types, so update the switch filter to use the new
VLAN protocol types when the rte_flow rule is a QinQ filter.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 36 ++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7e4f041a68..455123ee06 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -558,12 +558,17 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
 	bool vxlan_valid = 0;
+	bool qinq_valid = 0;
 	bool ipv6_valid = 0;
 	bool ipv4_valid = 0;
 	bool udp_valid = 0;
 	bool tcp_valid = 0;
 	uint16_t j, t = 0;
 
+	if (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+	    *tun_type == ICE_NON_TUN_QINQ)
+		qinq_valid = 1;
+
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (item->last) {
@@ -1101,22 +1106,25 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 
-			if (!outer_vlan_valid &&
-			    (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
-			     *tun_type == ICE_NON_TUN_QINQ))
-				outer_vlan_valid = 1;
-			else if (!inner_vlan_valid &&
-				 (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
-				  *tun_type == ICE_NON_TUN_QINQ))
-				inner_vlan_valid = 1;
-			else if (!inner_vlan_valid)
-				inner_vlan_valid = 1;
+			if (qinq_valid) {
+				if (!outer_vlan_valid)
+					outer_vlan_valid = 1;
+				else
+					inner_vlan_valid = 1;
+			}
 
 			if (vlan_spec && vlan_mask) {
-				if (outer_vlan_valid && !inner_vlan_valid) {
-					list[t].type = ICE_VLAN_EX;
-					input_set |= ICE_INSET_VLAN_OUTER;
-				} else if (inner_vlan_valid) {
+				if (qinq_valid) {
+					if (!inner_vlan_valid) {
+						list[t].type = ICE_VLAN_EX;
+						input_set |=
+							ICE_INSET_VLAN_OUTER;
+					} else {
+						list[t].type = ICE_VLAN_IN;
+						input_set |=
+							ICE_INSET_VLAN_INNER;
+					}
+				} else {
 					list[t].type = ICE_VLAN_OFOS;
 					input_set |= ICE_INSET_VLAN_INNER;
 				}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 20/22] net/ice/base: fix wrong ptype bitmap for IP fragment
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (18 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 19/22] net/ice: update QinQ switch filter handling Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 21/22] net/ice: support flow priority for DCF switch filter Wenjun Wu
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Ting Xu <ting.xu@intel.com>

IPv4 and IPv6 fragment ptypes are supposed to be kept separate from the
other IP ptypes. New bitmaps were created for the IP fragment ptypes, but
the fragment ptypes were not removed from the existing non-frag bitmaps,
which causes conflicts. This patch removes the IP fragment ptypes from
the non-frag bitmaps.
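
As a concrete illustration (my reading of the bitmap layout, where u32 word
i covers ptypes 32*i..32*i+31): the change 0x1DC00000 -> 0x1D800000 in
ice_ptypes_ipv4_ofos clears bit 22 of word 0, i.e. drops ptype 22 (outer
IPv4 fragment) from the non-frag bitmap. An illustrative helper, not part
of the driver, shows how one ptype maps onto these words:

    static inline bool ptype_in_bitmap(const u32 *bitmap, u16 ptype)
    {
        /* word i covers ptypes [32 * i, 32 * i + 31] */
        return (bitmap[ptype / 32] >> (ptype % 32)) & 0x1;
    }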

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 36 ++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 049e2f0c26..e50de9503d 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -226,11 +226,11 @@ static const u32 ice_ptypes_macvlan_il[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv4 header, does NOT
- * include IPV4 other PTYPEs
+/* Packet types for packets with an Outer/First/Single non-frag IPv4 header,
+ * does NOT include IPV4 other PTYPEs
  */
 static const u32 ice_ptypes_ipv4_ofos[] = {
-	0x1DC00000, 0x24000800, 0x00000000, 0x00000000,
+	0x1D800000, 0x24000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000155, 0x00000000, 0x00000000,
 	0x00000000, 0x000FC000, 0x000002A0, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -240,11 +240,11 @@ static const u32 ice_ptypes_ipv4_ofos[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv4 header, includes
- * IPV4 other PTYPEs
+/* Packet types for packets with an Outer/First/Single non-frag IPv4 header,
+ * includes IPV4 other PTYPEs
  */
 static const u32 ice_ptypes_ipv4_ofos_all[] = {
-	0x1DC00000, 0x24000800, 0x00000000, 0x00000000,
+	0x1D800000, 0x24000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000155, 0x00000000, 0x00000000,
 	0x00000000, 0x000FC000, 0x83E0FAA0, 0x00000101,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -266,11 +266,11 @@ static const u32 ice_ptypes_ipv4_il[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv6 header, does NOT
- * include IVP6 other PTYPEs
+/* Packet types for packets with an Outer/First/Single non-frag IPv6 header,
+ * does NOT include IVP6 other PTYPEs
  */
 static const u32 ice_ptypes_ipv6_ofos[] = {
-	0x00000000, 0x00000000, 0x77000000, 0x10002000,
+	0x00000000, 0x00000000, 0x76000000, 0x10002000,
 	0x00000000, 0x000002AA, 0x00000000, 0x00000000,
 	0x00000000, 0x03F00000, 0x00000540, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -280,11 +280,11 @@ static const u32 ice_ptypes_ipv6_ofos[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv6 header, includes
- * IPV6 other PTYPEs
+/* Packet types for packets with an Outer/First/Single non-frag IPv6 header,
+ * includes IPV6 other PTYPEs
  */
 static const u32 ice_ptypes_ipv6_ofos_all[] = {
-	0x00000000, 0x00000000, 0x77000000, 0x10002000,
+	0x00000000, 0x00000000, 0x76000000, 0x10002000,
 	0x00000000, 0x000002AA, 0x00000000, 0x00000000,
 	0x00000000, 0x03F00000, 0x7C1F0540, 0x00000206,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -306,9 +306,11 @@ static const u32 ice_ptypes_ipv6_il[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv4 header - no L4 */
+/* Packet types for packets with an Outer/First/Single
+ * non-frag IPv4 header - no L4
+ */
 static const u32 ice_ptypes_ipv4_ofos_no_l4[] = {
-	0x10C00000, 0x04000800, 0x00000000, 0x00000000,
+	0x10800000, 0x04000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x000cc000, 0x000002A0, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -330,9 +332,11 @@ static const u32 ice_ptypes_ipv4_il_no_l4[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outer/First/Single IPv6 header - no L4 */
+/* Packet types for packets with an Outer/First/Single
+ * non-frag IPv6 header - no L4
+ */
 static const u32 ice_ptypes_ipv6_ofos_no_l4[] = {
-	0x00000000, 0x00000000, 0x43000000, 0x10002000,
+	0x00000000, 0x00000000, 0x42000000, 0x10002000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x02300000, 0x00000540, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 21/22] net/ice: support flow priority for DCF switch filter
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (19 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 20/22] net/ice/base: fix wrong ptype bitmap for IP fragment Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 22/22] net/ice/base: add priority check of matching recipe Wenjun Wu
  2021-08-04  1:20 ` [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Min Hu (Connor)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Yuying Zhang <yuying.zhang@intel.com>

[ upstream commit 2321e34c23b386c46e4a644682e40214cf59ee4f ]

This patch is not for LTS upstream; it is just for customers to cherry-pick.

Support the rte_flow priority attribute for the DCF switch filter.
When a packet is matched by two rules, the behavior is undefined.
This patch supports flow priority so that different recipes are
created for this situation. Only priorities 0 and 1 are supported,
and a higher value denotes a higher priority.

for example:
1. flow create 0 priority 0 ingress pattern eth / vlan tci is 2 / vlan
   tci is 2 / end actions vf id 2 / end
2. flow create 0 priority 1 ingress pattern eth / vlan / vlan / ipv4 dst
   is 192.168.0.1 / end actions vf id 1 / end

These two rules can be created at the same time in the DCF switch
filter, and rule 2 has the higher priority. A packet hits rule 2
when the match conditions of both rules are satisfied.

Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_acl_filter.c    |  1 +
 drivers/net/ice/ice_fdir_filter.c   |  1 +
 drivers/net/ice/ice_generic_flow.c  | 18 ++++++++++--------
 drivers/net/ice/ice_generic_flow.h  |  1 +
 drivers/net/ice/ice_hash.c          |  2 ++
 drivers/net/ice/ice_switch_filter.c | 14 +++++++++-----
 6 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c
index f7dbe53574..14e36aa9f6 100644
--- a/drivers/net/ice/ice_acl_filter.c
+++ b/drivers/net/ice/ice_acl_filter.c
@@ -904,6 +904,7 @@ ice_acl_parse(struct ice_adapter *ad,
 	       uint32_t array_len,
 	       const struct rte_flow_item pattern[],
 	       const struct rte_flow_action actions[],
+	       uint32_t priority __rte_unused,
 	       void **meta,
 	       struct rte_flow_error *error)
 {
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 4a071254ce..d2bc882720 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -2029,6 +2029,7 @@ ice_fdir_parse(struct ice_adapter *ad,
 	       uint32_t array_len,
 	       const struct rte_flow_item pattern[],
 	       const struct rte_flow_action actions[],
+	       uint32_t priority __rte_unused,
 	       void **meta,
 	       struct rte_flow_error *error)
 {
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index c9910a65d1..ec141e8fa0 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1799,6 +1799,7 @@ enum rte_flow_item_type pattern_eth_ipv6_pfcp[] = {
 typedef struct ice_flow_engine * (*parse_engine_t)(struct ice_adapter *ad,
 		struct rte_flow *flow,
 		struct ice_parser_list *parser_list,
+		uint32_t priority,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error);
@@ -1990,11 +1991,10 @@ ice_flow_valid_attr(struct ice_adapter *ad,
 	} else {
 		*ice_pipeline_stage =
 			ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY;
-		/* Not supported */
-		if (attr->priority) {
+		if (attr->priority > 1) {
 			rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
-					attr, "Not support priority.");
+					attr, "Only support priority 0 and 1.");
 			return -rte_errno;
 		}
 	}
@@ -2139,6 +2139,7 @@ static struct ice_flow_engine *
 ice_parse_engine_create(struct ice_adapter *ad,
 		struct rte_flow *flow,
 		struct ice_parser_list *parser_list,
+		uint32_t priority,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error)
@@ -2154,7 +2155,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
-				pattern, actions, &meta, error) < 0)
+				pattern, actions, priority, &meta, error) < 0)
 			continue;
 
 		engine = parser_node->parser->engine;
@@ -2172,6 +2173,7 @@ static struct ice_flow_engine *
 ice_parse_engine_validate(struct ice_adapter *ad,
 		struct rte_flow *flow __rte_unused,
 		struct ice_parser_list *parser_list,
+		uint32_t priority,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_flow_error *error)
@@ -2184,7 +2186,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
-				pattern, actions, NULL, error) < 0)
+				pattern, actions, priority, NULL, error) < 0)
 			continue;
 
 		engine = parser_node->parser->engine;
@@ -2234,7 +2236,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev,
 		return ret;
 
 	*engine = ice_parse_engine(ad, flow, &pf->rss_parser_list,
-			pattern, actions, error);
+			attr->priority, pattern, actions, error);
 	if (*engine != NULL)
 		return 0;
 
@@ -2242,11 +2244,11 @@ ice_flow_process_filter(struct rte_eth_dev *dev,
 	case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY:
 	case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR:
 		*engine = ice_parse_engine(ad, flow, &pf->dist_parser_list,
-				pattern, actions, error);
+				attr->priority, pattern, actions, error);
 		break;
 	case ICE_FLOW_CLASSIFY_STAGE_PERMISSION:
 		*engine = ice_parse_engine(ad, flow, &pf->perm_parser_list,
-				pattern, actions, error);
+				attr->priority, pattern, actions, error);
 		break;
 	default:
 		return -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 1b2cdf7e4c..6bfe3f6dd6 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -555,6 +555,7 @@ typedef int (*parse_pattern_action_t)(struct ice_adapter *ad,
 		uint32_t array_len,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
+		uint32_t priority,
 		void **meta,
 		struct rte_flow_error *error);
 
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 2b0a479c7e..ae095eb3cf 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -75,6 +75,7 @@ ice_hash_parse_pattern_action(struct ice_adapter *ad,
 			uint32_t array_len,
 			const struct rte_flow_item pattern[],
 			const struct rte_flow_action actions[],
+			uint32_t priority,
 			void **meta,
 			struct rte_flow_error *error);
 
@@ -1237,6 +1238,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
 			uint32_t array_len,
 			const struct rte_flow_item pattern[],
 			const struct rte_flow_action actions[],
+			uint32_t priority __rte_unused,
 			void **meta,
 			struct rte_flow_error *error)
 {
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 455123ee06..48726dc967 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1512,6 +1512,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 static int
 ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad,
 			    const struct rte_flow_action *actions,
+			    uint32_t priority,
 			    struct rte_flow_error *error,
 			    struct ice_adv_rule_info *rule_info)
 {
@@ -1559,7 +1560,7 @@ ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad,
 	rule_info->sw_act.src = rule_info->sw_act.vsi_handle;
 	rule_info->sw_act.flag = ICE_FLTR_RX;
 	rule_info->rx = 1;
-	rule_info->priority = 5;
+	rule_info->priority = priority + 5;
 
 	return 0;
 }
@@ -1567,6 +1568,7 @@ ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad,
 static int
 ice_switch_parse_action(struct ice_pf *pf,
 		const struct rte_flow_action *actions,
+		uint32_t priority,
 		struct rte_flow_error *error,
 		struct ice_adv_rule_info *rule_info)
 {
@@ -1637,7 +1639,7 @@ ice_switch_parse_action(struct ice_pf *pf,
 	rule_info->sw_act.vsi_handle = vsi->idx;
 	rule_info->rx = 1;
 	rule_info->sw_act.src = vsi->idx;
-	rule_info->priority = 5;
+	rule_info->priority = priority + 5;
 
 	return 0;
 
@@ -1729,6 +1731,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 		uint32_t array_len,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
+		uint32_t priority,
 		void **meta,
 		struct rte_flow_error *error)
 {
@@ -1818,10 +1821,11 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 		goto error;
 
 	if (ad->hw.dcf_enabled)
-		ret = ice_switch_parse_dcf_action((void *)ad, actions, error,
-						  &rule_info);
+		ret = ice_switch_parse_dcf_action((void *)ad, actions, priority,
+						  error, &rule_info);
 	else
-		ret = ice_switch_parse_action(pf, actions, error, &rule_info);
+		ret = ice_switch_parse_action(pf, actions, priority, error,
+					      &rule_info);
 
 	if (ret)
 		goto error;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [dpdk-dev] [PATCH 22/22] net/ice/base: add priority check of matching recipe
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (20 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 21/22] net/ice: support flow priority for DCF switch filter Wenjun Wu
@ 2021-08-03  8:38 ` Wenjun Wu
  2021-08-04  1:20 ` [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Min Hu (Connor)
  22 siblings, 0 replies; 27+ messages in thread
From: Wenjun Wu @ 2021-08-03  8:38 UTC (permalink / raw)
  To: dev

From: Qi Zhang <qi.z.zhang@intel.com>

[ upstream commit 2e6228787d91967a775fb1d99cd887d3f11ad5c6 ]

This patch is not for the LTS upstream; it is just for customers to cherry-pick.

Check the priority when looking for a recipe that matches our request,
to enable flow priority for the switch filter.

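The fragment below is an illustrative, caller-side sketch only (the variable
names rid_lo/rid_hi and rinfo_lo/rinfo_hi are assumed, and the 0->5 / 1->6
priority mapping comes from the previous patch): it shows the effect of
passing the rule priority into the lookup.

	/* request built from rte_flow priority 0 -> internal priority 5 */
	rid_lo = ice_find_recp(hw, &lkup_exts, rinfo_lo->tun_type,
			       rinfo_lo->priority);

	/* request built from rte_flow priority 1 -> internal priority 6 */
	rid_hi = ice_find_recp(hw, &lkup_exts, rinfo_hi->tun_type,
			       rinfo_hi->priority);

	/* With the priority check, the second lookup no longer matches the
	 * recipe created for the first request, so a separate recipe is
	 * created for each priority.
	 */
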
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index d5576267b8..6c4dfec062 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -6172,7 +6172,7 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = {
  * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
  */
 static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
-			 enum ice_sw_tunnel_type tun_type)
+			 enum ice_sw_tunnel_type tun_type, u32 priority)
 {
 	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
@@ -6234,7 +6234,8 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
 			/* If for "i"th recipe the found was never set to false
 			 * then it means we found our match
 			 */
-			if (tun_type == recp[i].tun_type && found)
+			if (tun_type == recp[i].tun_type && found &&
+			    priority == recp[i].priority)
 				return i; /* Return the recipe ID */
 		}
 	}
@@ -7248,7 +7249,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	}
 
 	/* Look for a recipe which matches our requested fv / mask list */
-	*rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type);
+	*rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type, rinfo->priority);
 	if (*rid < ICE_MAX_NUM_RECIPES)
 		/* Success if found a recipe that match the existing criteria */
 		goto err_unroll;
@@ -8377,7 +8378,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	if (status)
 		return status;
 
-	rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type);
+	rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type, rinfo->priority);
 	/* If did not find a recipe that match the existing criteria */
 	if (rid == ICE_MAX_NUM_RECIPES)
 		return ICE_ERR_PARAM;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11
  2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
                   ` (21 preceding siblings ...)
  2021-08-03  8:38 ` [dpdk-dev] [PATCH 22/22] net/ice/base: add priority check of matching recipe Wenjun Wu
@ 2021-08-04  1:20 ` Min Hu (Connor)
  2021-08-04  7:54   ` Thomas Monjalon
  22 siblings, 1 reply; 27+ messages in thread
From: Min Hu (Connor) @ 2021-08-04  1:20 UTC (permalink / raw)
  To: Wenjun Wu, dev; +Cc: Thomas Monjalon

Hi, all,
     Features could be backported to an LTS version? It is surprising.


On 2021/8/3 16:37, Wenjun Wu wrote:
> enable QinQ filter for switch

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11
  2021-08-04  1:20 ` [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Min Hu (Connor)
@ 2021-08-04  7:54   ` Thomas Monjalon
  2021-08-04  8:48     ` Min Hu (Connor)
  0 siblings, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2021-08-04  7:54 UTC (permalink / raw)
  To: Min Hu (Connor); +Cc: Wenjun Wu, dev

04/08/2021 03:20, Min Hu (Connor):
> Hi, all,
>      Features could be backported to an LTS version? It is surprising.

No it cannot, but you skipped the interesting part of the message:
"
They are not for LTS upstream, just for customer to cherrypick.
"

> On 2021/8/3 16:37, Wenjun Wu wrote:
> > enable QinQ filter for switch



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11
  2021-08-04  7:54   ` Thomas Monjalon
@ 2021-08-04  8:48     ` Min Hu (Connor)
  2021-08-04  9:00       ` Thomas Monjalon
  0 siblings, 1 reply; 27+ messages in thread
From: Min Hu (Connor) @ 2021-08-04  8:48 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Wenjun Wu, dev



On 2021/8/4 15:54, Thomas Monjalon wrote:
> 04/08/2021 03:20, Min Hu (Connor):
>> Hi, all,
>>       Features could be backported to an LTS version? It is surprising.
> 
> No it cannot, but you skipped the interesting part of the message:
> "
> They are not for LTS upstream, just for customer to cherrypick.
> "
Hi, Thomas,
"just for customer to cherrypick."  -- what doet it mean?
It means that it tells users to backport the patches to 20.11 if they 
need the features?

I cannot understand why this set of patches was sent to dev@dpdk.org?


> 
>> On 2021/8/3 16:37, Wenjun Wu wrote:
>>> enable QinQ filter for switch
> 
> 
> .
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11
  2021-08-04  8:48     ` Min Hu (Connor)
@ 2021-08-04  9:00       ` Thomas Monjalon
  0 siblings, 0 replies; 27+ messages in thread
From: Thomas Monjalon @ 2021-08-04  9:00 UTC (permalink / raw)
  To: Min Hu (Connor); +Cc: Wenjun Wu, dev

04/08/2021 10:48, Min Hu (Connor):
> On 2021/8/4 15:54, Thomas Monjalon wrote:
> > 04/08/2021 03:20, Min Hu (Connor):
> >> Hi, all,
> >>       Features could be backported to an LTS version? It is surprising.
> > 
> > No it cannot, but you skipped the interesting part of the message:
> > "
> > They are not for LTS upstream, just for customer to cherrypick.
> > "
> Hi, Thomas,
> "just for customer to cherrypick."  -- what doet it mean?
> It means that it tells users to backport the patches to 20.11 if they 
> need the features?

Yes, anyone can use the patches from the mailing list on their own.

> I cannot understand why this set of patches was sent to dev@dpdk.org?

Here the mailing list is used as "patch storage".
In patchwork, the state is changed to "Not Applicable".



^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2021-08-04  9:00 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-03  8:37 [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Wenjun Wu
2021-08-03  8:37 ` [dpdk-dev] [PATCH 01/22] net/ice: support RSS hash for IP fragment Wenjun Wu
2021-08-03  8:37 ` [dpdk-dev] [PATCH 02/22] net/ice/base: align add VSI and update VSI AQ command buffer Wenjun Wu
2021-08-03  8:37 ` [dpdk-dev] [PATCH 03/22] net/ice/base: add interface to support configuring VLAN mode Wenjun Wu
2021-08-03  8:37 ` [dpdk-dev] [PATCH 04/22] net/ice/base: fix outer VLAN related macro Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 05/22] net/ice/base: add VLAN TPID for VLAN filters Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 06/22] net/ice/base: support checking double VLAN mode Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 07/22] net/ice/base: support configuring device in " Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 08/22] net/ice/base: do not set VLAN mode in DCF mode Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 09/22] net/ice/base: update boost TCAM for DVM Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 10/22] net/ice/base: change protocol ID for VLAN in DVM Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 11/22] net/ice/base: refactor post DDP download VLAN mode config Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 12/22] net/ice/base: log if DDP/FW do not support QinQ Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 13/22] net/ice/base: add ethertype offset for QinQ dummy packet Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 14/22] net/ice/base: add inner VLAN protocol type for QinQ filter Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 15/22] net/ice/base: fix QinQ PPPoE dummy packet selection Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 16/22] net/ice: fix VLAN strip for double VLAN Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 17/22] net/ice: fix VLAN 0 adding based on VLAN mode Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 18/22] net/ice: enable QinQ filter for switch Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 19/22] net/ice: update QinQ switch filter handling Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 20/22] net/ice/base: fix wrong ptype bitmap for IP fragment Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 21/22] net/ice: support flow priority for DCF switch filter Wenjun Wu
2021-08-03  8:38 ` [dpdk-dev] [PATCH 22/22] net/ice/base: add priority check of matching recipe Wenjun Wu
2021-08-04  1:20 ` [dpdk-dev] [PATCH 00/22] backport feature support to DPDK 20.11 Min Hu (Connor)
2021-08-04  7:54   ` Thomas Monjalon
2021-08-04  8:48     ` Min Hu (Connor)
2021-08-04  9:00       ` Thomas Monjalon
