* [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions
@ 2018-09-16 16:53 Ori Kam
  2018-09-16 16:53 ` [PATCH 1/3] ethdev: " Ori Kam
                   ` (3 more replies)
  0 siblings, 4 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-16 16:53 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika

This series implements the generic L2/L3 tunnel encapsulation actions
and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte_flow
items. This results in two extra translations, one from the
application to the action and one from the action to the NIC, which
has a negative impact on the insertion performance.
    
Looking forward, there is going to be a need to support many more
tunnel encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation will result in duplication of code:
for example, the code for handling NVGRE and VXLAN is exactly the
same, and each new tunnel would have the same exact structure.
    
This series introduces a generic encapsulation for L2 tunnel types and
a generic encapsulation for L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

[1] https://mails.dpdk.org/archives/dev/2018-August/109944.html

Ori Kam (3):
  ethdev: add generic L2/L3 tunnel encapsulation actions
  ethdev: convert testpmd encap commands to new API
  ethdev: remove vxlan and nvgre encapsulation commands

 app/test-pmd/cmdline_flow.c        | 294 +++++++++++++++++--------------------
 doc/guides/prog_guide/rte_flow.rst | 105 +++++--------
 lib/librte_ethdev/rte_flow.h       |  66 ++++++---
 3 files changed, 219 insertions(+), 246 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH 1/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-16 16:53 [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions Ori Kam
@ 2018-09-16 16:53 ` Ori Kam
  2018-09-16 16:53 ` [PATCH 2/3] ethdev: convert testpmd encap commands to new API Ori Kam
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-16 16:53 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte_flow
items. This results in two extra translations, one from the
application to the action and one from the action to the NIC, which
has a negative impact on the insertion performance.

Looking forward, there is going to be a need to support many more
tunnel encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation will result in duplication of code:
for example, the code for handling NVGRE and VXLAN is exactly the
same, and each new tunnel would have the same exact structure.

This patch introduces a generic encapsulation for L2 tunnel types and
a generic encapsulation for L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 70 ++++++++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.h       | 64 ++++++++++++++++++++++++++++++++++
 2 files changed, 134 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b305a72..0f29435 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,76 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``TUNNEL_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a tunnel encapsulation action by encapsulating the matched flow with
+a tunnel header as defined in ``rte_flow_action_tunnel_encap``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap:
+
+.. table:: TUNNEL_ENCAP
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP``
+action is specified must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel, then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+Action: ``TUNNEL_ENCAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replaces the packet's layer 2 header with the encapsulation tunnel header
+as defined in ``rte_flow_action_tunnel_encap_l3``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap_l3`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap_l3:
+
+.. table:: TUNNEL_ENCAP_L3
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replaces the tunnel network overlay of the matched flow with a
+layer 2 header as defined by ``rte_flow_action_tunnel_decap_l3``.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP_L3``
+action is specified must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel, then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index f8ba71c..d1f7ebf 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1505,6 +1505,40 @@ enum rte_flow_action_type {
 	 * error.
 	 */
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
+
+	/**
+	 * Encapsulate the packet with a tunnel header as defined in
+	 * the rte_flow_action_tunnel_encap action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP,
+
+	/**
+	 * Decapsulate outermost tunnel from matched flow.
+	 *
+	 * The flow pattern must have a valid tunnel header.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,
+
+	/**
+	 * Remove the packet L2 header and encapsulate the
+	 * packet with tunnel header as defined in
+	 * rte_flow_action_tunnel_encap_l3 action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap_l3.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3,
+
+	/**
+	 * Decapsulate outermost tunnel from matched flow,
+	 * and add L2 layer.
+	 *
+	 * The flow pattern must have a valid tunnel header.
+	 *
+	 * See struct rte_flow_action_tunnel_decap_l3.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3,
 };
 
 /**
@@ -1868,6 +1902,36 @@ struct rte_flow_action_nvgre_encap {
 	struct rte_flow_item *definition;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through a raw buffer.
+ */
+struct rte_flow_action_tunnel_encap {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through a raw buffer.
+ */
+struct rte_flow_action_tunnel_encap_l3 {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
 /*
  * Definition of a single action.
  *
-- 
1.8.3.1


* [PATCH 2/3] ethdev: convert testpmd encap commands to new API
  2018-09-16 16:53 [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions Ori Kam
  2018-09-16 16:53 ` [PATCH 1/3] ethdev: " Ori Kam
@ 2018-09-16 16:53 ` Ori Kam
  2018-09-16 16:53 ` [PATCH 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-16 16:53 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika

Currently there are two encapsulation commands in testpmd, one for
VXLAN and one for NVGRE; both of those commands use the old rte_flow
encap actions.

This commit updates the commands to work with the new tunnel encap
actions.

The reason we keep different encapsulation commands, one for VXLAN
and one for NVGRE, is ease of use in testpmd; both commands now use
the same rte_flow action for tunnel encap.
Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline_flow.c | 294 +++++++++++++++++++++-----------------------
 1 file changed, 137 insertions(+), 157 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f926060..349e822 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -262,37 +262,13 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
-/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
-#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
-
-/** Storage for struct rte_flow_action_vxlan_encap including external data. */
-struct action_vxlan_encap_data {
-	struct rte_flow_action_vxlan_encap conf;
-	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_udp item_udp;
-	struct rte_flow_item_vxlan item_vxlan;
-};
+/** Maximum buffer size for the encap data. */
+#define ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE 64
 
-/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
-#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
-
-/** Storage for struct rte_flow_action_nvgre_encap including external data. */
-struct action_nvgre_encap_data {
-	struct rte_flow_action_nvgre_encap conf;
-	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_nvgre item_nvgre;
+/** Storage for struct rte_flow_action_tunnel_encap including external data. */
+struct action_tunnel_encap_data {
+	struct rte_flow_action_tunnel_encap conf;
+	uint8_t buf[ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE];
 };
 
 /** Maximum number of subsequent tokens and arguments on the stack. */
@@ -2438,8 +2414,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "vxlan_encap",
 		.help = "VXLAN encapsulation, uses configuration set by \"set"
 			" vxlan\"",
-		.priv = PRIV_ACTION(VXLAN_ENCAP,
-				    sizeof(struct action_vxlan_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_vxlan_encap,
 	},
@@ -2448,7 +2424,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the VXLAN tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -2456,8 +2432,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "nvgre_encap",
 		.help = "NVGRE encapsulation, uses configuration set by \"set"
 			" nvgre\"",
-		.priv = PRIV_ACTION(NVGRE_ENCAP,
-				    sizeof(struct action_nvgre_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_nvgre_encap,
 	},
@@ -2466,7 +2442,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the NVGRE tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -3034,6 +3010,9 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return len;
 }
 
+/** IP next protocol UDP. */
+#define IP_PROTO_UDP 0x11
+
 /** Parse VXLAN encap action. */
 static int
 parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
@@ -3042,7 +3021,32 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	struct action_tunnel_encap_data *action_vxlan_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = vxlan_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_vxlan vxlan = { .flags = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3057,83 +3061,58 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_vxlan_encap_data = ctx->object;
-	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
-		.conf = (struct rte_flow_action_vxlan_encap){
-			.definition = action_vxlan_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_vxlan_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_vxlan_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_vxlan_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_UDP,
-				.spec = &action_vxlan_encap_data->item_udp,
-				.mask = &rte_flow_item_udp_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
-				.spec = &action_vxlan_encap_data->item_vxlan,
-				.mask = &rte_flow_item_vxlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-			.src_addr = vxlan_encap_conf.ipv4_src,
-			.dst_addr = vxlan_encap_conf.ipv4_dst,
+	*action_vxlan_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_vxlan_encap_data->buf,
 		},
-		.item_udp.hdr = {
-			.src_port = vxlan_encap_conf.udp_src,
-			.dst_port = vxlan_encap_conf.udp_dst,
-		},
-		.item_vxlan.flags = 0,
+		.buf = {},
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	header = action_vxlan_encap_data->buf;
+	if (vxlan_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (vxlan_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!vxlan_encap_conf.select_ipv4) {
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (vxlan_encap_conf.select_vlan) {
+		if (vxlan_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (vxlan_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &vxlan_encap_conf.ipv6_src,
 		       sizeof(vxlan_encap_conf.ipv6_src));
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &vxlan_encap_conf.ipv6_dst,
 		       sizeof(vxlan_encap_conf.ipv6_dst));
-		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_vxlan_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!vxlan_encap_conf.select_vlan)
-		action_vxlan_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
-	       RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(vxlan.vni, vxlan_encap_conf.vni, RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &vxlan, sizeof(vxlan));
+	header += sizeof(vxlan);
+	action_vxlan_encap_data->conf.size = header -
+		action_vxlan_encap_data->buf;
 	action->conf = &action_vxlan_encap_data->conf;
 	return ret;
 }
-
 /** Parse NVGRE encap action. */
 static int
 parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
@@ -3142,7 +3121,26 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	struct action_tunnel_encap_data *action_nvgre_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = nvgre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = nvgre_encap_conf.ipv4_src,
+			.dst_addr = nvgre_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_nvgre nvgre = { .flow_id = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3157,74 +3155,56 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_nvgre_encap_data = ctx->object;
-	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
-		.conf = (struct rte_flow_action_nvgre_encap){
-			.definition = action_nvgre_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_nvgre_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_nvgre_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_nvgre_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
-				.spec = &action_nvgre_encap_data->item_nvgre,
-				.mask = &rte_flow_item_nvgre_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-		       .src_addr = nvgre_encap_conf.ipv4_src,
-		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+	*action_nvgre_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_nvgre_encap_data->buf,
 		},
-		.item_nvgre.flow_id = 0,
+		.buf = {},
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	header = action_nvgre_encap_data->buf;
+	if (nvgre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (nvgre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!nvgre_encap_conf.select_ipv4) {
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (nvgre_encap_conf.select_vlan) {
+		if (nvgre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (nvgre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &nvgre_encap_conf.ipv6_src,
 		       sizeof(nvgre_encap_conf.ipv6_src));
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &nvgre_encap_conf.ipv6_dst,
 		       sizeof(nvgre_encap_conf.ipv6_dst));
-		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_nvgre_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!nvgre_encap_conf.select_vlan)
-		action_nvgre_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
-	       RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(nvgre.tni, nvgre_encap_conf.tni, RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(header, &nvgre, sizeof(nvgre));
+	header += sizeof(nvgre);
+	action_nvgre_encap_data->conf.size = header -
+		action_nvgre_encap_data->buf;
 	action->conf = &action_nvgre_encap_data->conf;
 	return ret;
 }
-
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
-- 
1.8.3.1


* [PATCH 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-09-16 16:53 [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions Ori Kam
  2018-09-16 16:53 ` [PATCH 1/3] ethdev: " Ori Kam
  2018-09-16 16:53 ` [PATCH 2/3] ethdev: convert testpmd encap commands to new API Ori Kam
@ 2018-09-16 16:53 ` Ori Kam
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-16 16:53 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika

This patch removes the VXLAN and NVGRE encapsulation commands.

Those commands are a subset of the TUNNEL_ENCAP command, so there is no
need to keep both versions.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 107 -------------------------------------
 lib/librte_ethdev/rte_flow.h       |  34 ------------
 2 files changed, 141 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0f29435..b600b2d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1969,113 +1969,6 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
    | ``ethertype`` | EtherType |
    +---------------+-----------+
 
-Action: ``VXLAN_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a VXLAN encapsulation action by encapsulating the matched flow in the
-VXLAN tunnel as defined in the``rte_flow_action_vxlan_encap`` flow items
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
-VLXAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
-Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
-over Layer 3 Networks). The pattern must be terminated with the
-RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_vxlan_encap:
-
-.. table:: VXLAN_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | Tunnel end-point overlay definition |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_vxlan_encap_example:
-
-.. table:: IPv4 VxLAN flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | UDP      |
-   +-------+----------+
-   | 3     | VXLAN    |
-   +-------+----------+
-   | 4     | END      |
-   +-------+----------+
-
-Action: ``VXLAN_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the VXLAN tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
-action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
-flow pattern does not specify a valid VXLAN tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
-Action: ``NVGRE_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a NVGRE encapsulation action by encapsulating the matched flow in the
-NVGRE tunnel as defined in the``rte_flow_action_tunnel_encap`` flow item
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must defined a valid
-NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
-Virtualization Using Generic Routing Encapsulation). The pattern must be
-terminated with the RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_nvgre_encap:
-
-.. table:: NVGRE_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | NVGRE end-point overlay definition  |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_nvgre_encap_example:
-
-.. table:: IPv4 NVGRE flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | NVGRE    |
-   +-------+----------+
-   | 3     | END      |
-   +-------+----------+
-
-Action: ``NVGRE_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the NVGRE tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
-action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
-flow pattern does not specify a valid NVGRE tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
 Action: ``TUNNEL_ENCAP``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index d1f7ebf..9505281 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1473,40 +1473,6 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
 
 	/**
-	 * Encapsulate flow in VXLAN tunnel as defined in
-	 * rte_flow_action_vxlan_encap action structure.
-	 *
-	 * See struct rte_flow_action_vxlan_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
-
-	/**
-	 * Decapsulate outer most VXLAN tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid VXLAN tunnel (as specified by
-	 * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
-
-	/**
-	 * Encapsulate flow in NVGRE tunnel defined in the
-	 * rte_flow_action_nvgre_encap action structure.
-	 *
-	 * See struct rte_flow_action_nvgre_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
-
-	/**
-	 * Decapsulate outer most NVGRE tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid NVGRE tunnel (as specified by
-	 * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
-
-	/**
 	 * Encapsulate the packet with tunnel header as defined in
 	 * rte_flow_action_tunnel_encap action structure.
 	 *
-- 
1.8.3.1


* [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-16 16:53 [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions Ori Kam
                   ` (2 preceding siblings ...)
  2018-09-16 16:53 ` [PATCH 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
@ 2018-09-26 21:00 ` Ori Kam
  2018-09-26 21:00   ` [PATCH v2 1/3] " Ori Kam
                     ` (5 more replies)
  3 siblings, 6 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-26 21:00 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This series implements the generic L2/L3 tunnel encapsulation actions
and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte_flow
items. This results in two extra translations, one from the
application to the action and one from the action to the NIC, which
has a negative impact on the insertion performance.
    
Looking forward there are going to be a need to support many more tunnel
encapsulations. For example MPLSoGRE, MPLSoUDP.
Adding the new encapsulation will result in duplication of code.
For example the code for handling NVGRE and VXLAN are exactly the same,
and each new tunnel will have the same exact structure.
    
This series introduce a generic encapsulation for L2 tunnel types, and
generic encapsulation for L3 tunnel types. In addtion the new
encapsulations commands are using raw buffer inorder to save the
converstion time, both for the application and the PMD.

[1] https://mails.dpdk.org/archives/dev/2018-August/109944.html
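To make the raw-buffer idea concrete, here is a minimal sketch of what an application would place in the buffer for an IPv4/UDP/VXLAN overlay. This is only an illustration: the struct layouts below are simplified stand-ins for the real rte_flow/rte_ether definitions, `build_vxlan_overlay` is a hypothetical helper that is not part of this series, and the byte-swapped constants assume a little-endian host.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified, padding-free stand-ins for the rte_flow/rte_ether headers. */
struct eth_hdr   { uint8_t dst[6]; uint8_t src[6]; uint16_t type; };
struct ipv4_hdr  { uint8_t version_ihl; uint8_t tos; uint16_t total_len;
		   uint16_t id; uint16_t frag_off; uint8_t ttl;
		   uint8_t proto; uint16_t cksum; uint32_t src; uint32_t dst; };
struct udp_hdr   { uint16_t src_port; uint16_t dst_port;
		   uint16_t len; uint16_t cksum; };
struct vxlan_hdr { uint8_t flags; uint8_t rsvd0[3];
		   uint8_t vni[3]; uint8_t rsvd1; };

/* Build an Ethernet/IPv4/UDP/VXLAN overlay directly into the raw buffer,
 * the way the proposed TUNNEL_ENCAP action consumes it, instead of a
 * NULL-terminated rte_flow_item list. Returns the number of bytes used. */
static uint16_t build_vxlan_overlay(uint8_t *buf)
{
	struct eth_hdr eth = { .type = 0x0008 };     /* 0x0800 on the wire (LE host) */
	struct ipv4_hdr ip = { .version_ihl = 0x45, .ttl = 64, .proto = 17 };
	struct udp_hdr udp = { .dst_port = 0xb512 }; /* 4789 on the wire (LE host) */
	struct vxlan_hdr vxlan = { .flags = 0x08 };  /* VNI-present flag */
	uint8_t *p = buf;

	memcpy(p, &eth, sizeof(eth));     p += sizeof(eth);
	memcpy(p, &ip, sizeof(ip));       p += sizeof(ip);
	memcpy(p, &udp, sizeof(udp));     p += sizeof(udp);
	memcpy(p, &vxlan, sizeof(vxlan)); p += sizeof(vxlan);
	return (uint16_t)(p - buf);
}
```

With this layout the PMD receives the final header bytes and needs no per-item translation; the buffer plus its size is the whole action configuration.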

Changes in v2:

* add missing decap_l3 structure.
* fix typo.



Ori Kam (3):
  ethdev: add generic L2/L3 tunnel encapsulation actions
  app/testpmd: convert testpmd encap commands to new API
  ethdev: remove vxlan and nvgre encapsulation commands

 app/test-pmd/cmdline_flow.c        | 294 +++++++++++++++++--------------------
 app/test-pmd/config.c              |   3 +
 doc/guides/prog_guide/rte_flow.rst | 115 ++++++---------
 lib/librte_ethdev/rte_flow.h       | 108 ++++++--------
 4 files changed, 227 insertions(+), 293 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v2 1/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
@ 2018-09-26 21:00   ` Ori Kam
  2018-09-26 21:00   ` [PATCH v2 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-26 21:00 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have an Ethernet header).
In addition, the parameter to the encap action is a list of rte_flow
items. This results in two extra translations, one from the application
to the action and one from the action to the NIC, which has a negative
impact on flow insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would duplicate code:
the code handling NVGRE and VXLAN is exactly the same,
and each new tunnel would repeat that structure.

This patch introduces a generic encapsulation for L2 tunnel types and a
generic encapsulation for L3 tunnel types. In addition, the new
encapsulation actions use a raw buffer in order to save the
conversion time, both for the application and the PMD.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 82 ++++++++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.h       | 79 ++++++++++++++++++++++++++++++++++++
 2 files changed, 161 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b305a72..3ba8018 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,88 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``TUNNEL_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a tunnel encapsulation action by encapsulating the matched flow with
+a tunnel header as defined in the ``rte_flow_action_tunnel_encap``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap:
+
+.. table:: TUNNEL_ENCAP
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP``
+action is specified, must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+Action: ``TUNNEL_ENCAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replace the packet's layer 2 header with the encapsulating tunnel header
+as defined in the ``rte_flow_action_tunnel_encap_l3``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap_l3`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap_l3:
+
+.. table:: TUNNEL_ENCAP_L3
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replace the tunnel network overlay of the matched flow with the
+layer 2 header defined by ``rte_flow_action_tunnel_decap_l3``.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP_L3``
+action is specified, must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+.. _table_rte_flow_action_tunnel_decap_l3:
+
+.. table:: TUNNEL_DECAP_L3
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Layer 2 definition                  |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index f8ba71c..e29c561 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1505,6 +1505,40 @@ enum rte_flow_action_type {
 	 * error.
 	 */
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
+
+	/**
+	 * Encapsulate the packet with tunnel header as defined in
+	 * rte_flow_action_tunnel_encap action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP,
+
+	/**
+	 * Decapsulate the outermost tunnel from the matched flow.
+	 *
+	 * The flow pattern must have a valid tunnel header.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,
+
+	/**
+	 * Remove the packet L2 header and encapsulate the
+	 * packet with tunnel header as defined in
+	 * rte_flow_action_tunnel_encap_l3 action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap_l3.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3,
+
+	/**
+	 * Decapsulate the outermost tunnel from the matched flow,
+	 * and add an L2 header.
+	 *
+	 * The flow pattern must have a valid tunnel header.
+	 *
+	 * See struct rte_flow_action_tunnel_decap_l3.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3,
 };
 
 /**
@@ -1868,6 +1902,51 @@ struct rte_flow_action_nvgre_encap {
 	struct rte_flow_item *definition;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through a raw buffer.
+ */
+struct rte_flow_action_tunnel_encap {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through a raw buffer.
+ */
+struct rte_flow_action_tunnel_encap_l3 {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3
+ *
+ * Layer 2 header definition to apply after decapsulating the tunnel.
+ *
+ * The layer 2 header is provided through a raw buffer.
+ */
+struct rte_flow_action_tunnel_decap_l3 {
+	uint8_t *buf; /**< L2 data. */
+	uint16_t size; /**< Buffer size. */
+};
+
 /*
  * Definition of a single action.
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread
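The L3 variants above differ only in what the raw buffer carries: for TUNNEL_DECAP_L3 it holds just the layer 2 header that replaces the stripped tunnel overlay. A hedged sketch follows; the structs are simplified stand-ins for the definitions in rte_flow.h/rte_ether.h and `fill_decap_l3` is a hypothetical helper, not part of the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for an Ethernet header (14 bytes, no padding). */
struct eth_hdr { uint8_t dst[6]; uint8_t src[6]; uint16_t type; };

/* Mirrors the proposed struct rte_flow_action_tunnel_decap_l3. */
struct tunnel_decap_l3 {
	uint8_t *buf;
	uint16_t size;
};

/* For TUNNEL_DECAP_L3 the raw buffer carries only the layer 2 header
 * that the device prepends after stripping the tunnel overlay. */
static void fill_decap_l3(struct tunnel_decap_l3 *conf, uint8_t *storage,
			  const uint8_t dst[6], const uint8_t src[6],
			  uint16_t ether_type_be)
{
	struct eth_hdr eth;

	memcpy(eth.dst, dst, 6);
	memcpy(eth.src, src, 6);
	eth.type = ether_type_be; /* caller supplies big-endian EtherType */
	memcpy(storage, &eth, sizeof(eth));
	conf->buf = storage;
	conf->size = sizeof(eth);
}
```

The symmetry with TUNNEL_ENCAP_L3 (which carries the full tunnel header minus the inner L2) is what lets both share the same buf/size structure shape.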

* [PATCH v2 2/3] app/testpmd: convert testpmd encap commands to new API
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
  2018-09-26 21:00   ` [PATCH v2 1/3] " Ori Kam
@ 2018-09-26 21:00   ` Ori Kam
  2018-09-26 21:00   ` [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-26 21:00 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently there are two encapsulation commands in testpmd, one for VXLAN
and one for NVGRE; both of them use the old rte encap actions.

This commit updates the commands to work with the new tunnel encap
actions.

Separate commands are kept, one for VXLAN and one for NVGRE, for ease
of use in testpmd; both commands now use the same rte flow action for
tunnel encap.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline_flow.c | 294 +++++++++++++++++++++-----------------------
 app/test-pmd/config.c       |   3 +
 2 files changed, 140 insertions(+), 157 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f926060..349e822 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -262,37 +262,13 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
-/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
-#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
-
-/** Storage for struct rte_flow_action_vxlan_encap including external data. */
-struct action_vxlan_encap_data {
-	struct rte_flow_action_vxlan_encap conf;
-	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_udp item_udp;
-	struct rte_flow_item_vxlan item_vxlan;
-};
+/** Maximum buffer size for the encap data. */
+#define ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE 64
 
-/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
-#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
-
-/** Storage for struct rte_flow_action_nvgre_encap including external data. */
-struct action_nvgre_encap_data {
-	struct rte_flow_action_nvgre_encap conf;
-	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_nvgre item_nvgre;
+/** Storage for struct rte_flow_action_tunnel_encap including external data. */
+struct action_tunnel_encap_data {
+	struct rte_flow_action_tunnel_encap conf;
+	uint8_t buf[ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE];
 };
 
 /** Maximum number of subsequent tokens and arguments on the stack. */
@@ -2438,8 +2414,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "vxlan_encap",
 		.help = "VXLAN encapsulation, uses configuration set by \"set"
 			" vxlan\"",
-		.priv = PRIV_ACTION(VXLAN_ENCAP,
-				    sizeof(struct action_vxlan_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_vxlan_encap,
 	},
@@ -2448,7 +2424,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the VXLAN tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -2456,8 +2432,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "nvgre_encap",
 		.help = "NVGRE encapsulation, uses configuration set by \"set"
 			" nvgre\"",
-		.priv = PRIV_ACTION(NVGRE_ENCAP,
-				    sizeof(struct action_nvgre_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_nvgre_encap,
 	},
@@ -2466,7 +2442,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the NVGRE tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -3034,6 +3010,9 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return len;
 }
 
+/** IP next protocol UDP. */
+#define IP_PROTO_UDP 0x11
+
 /** Parse VXLAN encap action. */
 static int
 parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
@@ -3042,7 +3021,32 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	struct action_tunnel_encap_data *action_vxlan_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = vxlan_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_vxlan vxlan = { .flags = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3057,83 +3061,58 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_vxlan_encap_data = ctx->object;
-	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
-		.conf = (struct rte_flow_action_vxlan_encap){
-			.definition = action_vxlan_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_vxlan_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_vxlan_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_vxlan_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_UDP,
-				.spec = &action_vxlan_encap_data->item_udp,
-				.mask = &rte_flow_item_udp_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
-				.spec = &action_vxlan_encap_data->item_vxlan,
-				.mask = &rte_flow_item_vxlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-			.src_addr = vxlan_encap_conf.ipv4_src,
-			.dst_addr = vxlan_encap_conf.ipv4_dst,
+	*action_vxlan_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_vxlan_encap_data->buf,
 		},
-		.item_udp.hdr = {
-			.src_port = vxlan_encap_conf.udp_src,
-			.dst_port = vxlan_encap_conf.udp_dst,
-		},
-		.item_vxlan.flags = 0,
+		.buf = {},
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	header = action_vxlan_encap_data->buf;
+	if (vxlan_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (vxlan_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!vxlan_encap_conf.select_ipv4) {
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (vxlan_encap_conf.select_vlan) {
+		if (vxlan_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (vxlan_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &vxlan_encap_conf.ipv6_src,
 		       sizeof(vxlan_encap_conf.ipv6_src));
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &vxlan_encap_conf.ipv6_dst,
 		       sizeof(vxlan_encap_conf.ipv6_dst));
-		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_vxlan_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!vxlan_encap_conf.select_vlan)
-		action_vxlan_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
-	       RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(vxlan.vni, vxlan_encap_conf.vni, RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &vxlan, sizeof(vxlan));
+	header += sizeof(vxlan);
+	action_vxlan_encap_data->conf.size = header -
+		action_vxlan_encap_data->buf;
 	action->conf = &action_vxlan_encap_data->conf;
 	return ret;
 }
-
 /** Parse NVGRE encap action. */
 static int
 parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token,
@@ -3142,7 +3121,26 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	struct action_tunnel_encap_data *action_nvgre_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = nvgre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = nvgre_encap_conf.ipv4_src,
+			.dst_addr = nvgre_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_nvgre nvgre = { .flow_id = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3157,74 +3155,56 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_nvgre_encap_data = ctx->object;
-	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
-		.conf = (struct rte_flow_action_nvgre_encap){
-			.definition = action_nvgre_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_nvgre_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_nvgre_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_nvgre_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
-				.spec = &action_nvgre_encap_data->item_nvgre,
-				.mask = &rte_flow_item_nvgre_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-		       .src_addr = nvgre_encap_conf.ipv4_src,
-		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+	*action_nvgre_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_nvgre_encap_data->buf,
 		},
-		.item_nvgre.flow_id = 0,
+		.buf = {},
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	header = action_nvgre_encap_data->buf;
+	if (nvgre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (nvgre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!nvgre_encap_conf.select_ipv4) {
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (nvgre_encap_conf.select_vlan) {
+		if (nvgre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (nvgre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &nvgre_encap_conf.ipv6_src,
 		       sizeof(nvgre_encap_conf.ipv6_src));
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &nvgre_encap_conf.ipv6_dst,
 		       sizeof(nvgre_encap_conf.ipv6_dst));
-		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_nvgre_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!nvgre_encap_conf.select_vlan)
-		action_nvgre_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
-	       RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(nvgre.tni, nvgre_encap_conf.tni, RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(header, &nvgre, sizeof(nvgre));
+	header += sizeof(nvgre);
+	action_nvgre_encap_data->conf.size = header -
+		action_nvgre_encap_data->buf;
 	action->conf = &action_nvgre_encap_data->conf;
 	return ret;
 }
-
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 14ccd68..99a82de 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1153,6 +1153,9 @@ enum item_spec_type {
 		       sizeof(struct rte_flow_action_of_pop_mpls)),
 	MK_FLOW_ACTION(OF_PUSH_MPLS,
 		       sizeof(struct rte_flow_action_of_push_mpls)),
+	MK_FLOW_ACTION(TUNNEL_ENCAP,
+		       sizeof(struct rte_flow_action_tunnel_encap)),
+	MK_FLOW_ACTION(TUNNEL_DECAP, 0),
 };
 
 /** Compute storage space needed by action configuration and copy it. */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread
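The header-stacking logic in parse_vc_action_vxlan_encap above reduces to summing the packed header sizes that are memcpy'd into the buffer. A hedged sketch (`encap_size` is a hypothetical helper; the byte counts assume the packed rte_flow header sizes of 14 for Ethernet, 4 for VLAN, 20/40 for IPv4/IPv6, 8 for UDP and 8 for VXLAN):

```c
#include <assert.h>
#include <stdint.h>

/* Total bytes the VXLAN encap parser writes into the raw buffer,
 * mirroring the optional-VLAN / IPv4-vs-IPv6 branches above. */
static uint16_t encap_size(int select_vlan, int select_ipv4)
{
	uint16_t size = 14;            /* Ethernet header, always present */

	if (select_vlan)
		size += 4;             /* optional VLAN tag */
	size += select_ipv4 ? 20 : 40; /* IPv4 or IPv6 header */
	size += 8 + 8;                 /* UDP + VXLAN headers */
	return size;
}
```

Worth noting: the IPv6 variants sum to 70 and 74 bytes, which would not fit in the 64-byte ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE defined in this patch, so that constant may need to grow to cover the IPv6 cases.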

* [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
  2018-09-26 21:00   ` [PATCH v2 1/3] " Ori Kam
  2018-09-26 21:00   ` [PATCH v2 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
@ 2018-09-26 21:00   ` Ori Kam
  2018-10-05 12:59     ` Ferruh Yigit
  2018-10-05 13:27     ` Mohammad Abdul Awal
  2018-10-03 20:38   ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Thomas Monjalon
                     ` (2 subsequent siblings)
  5 siblings, 2 replies; 53+ messages in thread
From: Ori Kam @ 2018-09-26 21:00 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This patch removes the VXLAN and NVGRE encapsulation commands.

Those commands are a subset of the TUNNEL_ENCAP command, so there is no
need to keep both versions.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 107 -------------------------------------
 lib/librte_ethdev/rte_flow.h       | 103 -----------------------------------
 2 files changed, 210 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3ba8018..9e739f3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1969,113 +1969,6 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
    | ``ethertype`` | EtherType |
    +---------------+-----------+
 
-Action: ``VXLAN_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a VXLAN encapsulation action by encapsulating the matched flow in the
-VXLAN tunnel as defined in the``rte_flow_action_vxlan_encap`` flow items
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
-VLXAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
-Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
-over Layer 3 Networks). The pattern must be terminated with the
-RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_vxlan_encap:
-
-.. table:: VXLAN_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | Tunnel end-point overlay definition |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_vxlan_encap_example:
-
-.. table:: IPv4 VxLAN flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | UDP      |
-   +-------+----------+
-   | 3     | VXLAN    |
-   +-------+----------+
-   | 4     | END      |
-   +-------+----------+
-
-Action: ``VXLAN_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the VXLAN tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
-action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
-flow pattern does not specify a valid VXLAN tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
-Action: ``NVGRE_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a NVGRE encapsulation action by encapsulating the matched flow in the
-NVGRE tunnel as defined in the``rte_flow_action_tunnel_encap`` flow item
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must defined a valid
-NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
-Virtualization Using Generic Routing Encapsulation). The pattern must be
-terminated with the RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_nvgre_encap:
-
-.. table:: NVGRE_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | NVGRE end-point overlay definition  |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_nvgre_encap_example:
-
-.. table:: IPv4 NVGRE flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | NVGRE    |
-   +-------+----------+
-   | 3     | END      |
-   +-------+----------+
-
-Action: ``NVGRE_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the NVGRE tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
-action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
-flow pattern does not specify a valid NVGRE tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
 Action: ``TUNNEL_ENCAP``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index e29c561..55521e3 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1473,40 +1473,6 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
 
 	/**
-	 * Encapsulate flow in VXLAN tunnel as defined in
-	 * rte_flow_action_vxlan_encap action structure.
-	 *
-	 * See struct rte_flow_action_vxlan_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
-
-	/**
-	 * Decapsulate outer most VXLAN tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid VXLAN tunnel (as specified by
-	 * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
-
-	/**
-	 * Encapsulate flow in NVGRE tunnel defined in the
-	 * rte_flow_action_nvgre_encap action structure.
-	 *
-	 * See struct rte_flow_action_nvgre_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
-
-	/**
-	 * Decapsulate outer most NVGRE tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid NVGRE tunnel (as specified by
-	 * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
-
-	/**
 	 * Encapsulate the packet with tunnel header as defined in
 	 * rte_flow_action_tunnel_encap action structure.
 	 *
@@ -1837,75 +1803,6 @@ struct rte_flow_action_of_push_mpls {
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
  *
- * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
- *
- * VXLAN tunnel end-point encapsulation data definition
- *
- * The tunnel definition is provided through the flow item pattern, the
- * provided pattern must conform to RFC7348 for the tunnel specified. The flow
- * definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
- * definition up the end item which is specified by RTE_FLOW_ITEM_TYPE_END.
- *
- * The mask field allows user to specify which fields in the flow item
- * definitions can be ignored and which have valid data and can be used
- * verbatim.
- *
- * Note: the last field is not used in the definition of a tunnel and can be
- * ignored.
- *
- * Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
- *
- * - ETH / IPV4 / UDP / VXLAN / END
- * - ETH / IPV6 / UDP / VXLAN / END
- * - ETH / VLAN / IPV4 / UDP / VXLAN / END
- *
- */
-struct rte_flow_action_vxlan_encap {
-	/**
-	 * Encapsulating vxlan tunnel definition
-	 * (terminated by the END pattern item).
-	 */
-	struct rte_flow_item *definition;
-};
-
-/**
- * @warning
- * @b EXPERIMENTAL: this structure may change without prior notice
- *
- * RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
- *
- * NVGRE tunnel end-point encapsulation data definition
- *
- * The tunnel definition is provided through the flow item pattern  the
- * provided pattern must conform with RFC7637. The flow definition must be
- * provided in order from the RTE_FLOW_ITEM_TYPE_ETH definition up the end item
- * which is specified by RTE_FLOW_ITEM_TYPE_END.
- *
- * The mask field allows user to specify which fields in the flow item
- * definitions can be ignored and which have valid data and can be used
- * verbatim.
- *
- * Note: the last field is not used in the definition of a tunnel and can be
- * ignored.
- *
- * Valid flow definition for RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP include:
- *
- * - ETH / IPV4 / NVGRE / END
- * - ETH / VLAN / IPV6 / NVGRE / END
- *
- */
-struct rte_flow_action_nvgre_encap {
-	/**
-	 * Encapsulating vxlan tunnel definition
-	 * (terminated by the END pattern item).
-	 */
-	struct rte_flow_item *definition;
-};
-
-/**
- * @warning
- * @b EXPERIMENTAL: this structure may change without prior notice
- *
  * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
  *
  * Tunnel end-point encapsulation data definition
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
                     ` (2 preceding siblings ...)
  2018-09-26 21:00   ` [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
@ 2018-10-03 20:38   ` Thomas Monjalon
  2018-10-05 12:57   ` Ferruh Yigit
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
  5 siblings, 0 replies; 53+ messages in thread
From: Thomas Monjalon @ 2018-10-03 20:38 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: dev, Ori Kam, arybchenko, stephen, adrien.mazarguil, dekelp,
	nelio.laranjeiro, yskoh, shahafs, bernard.iremonger

26/09/2018 23:00, Ori Kam:
> Ori Kam (3):
>   ethdev: add generic L2/L3 tunnel encapsulation actions
>   app/testpmd: convert testpmd encap commands to new API
>   ethdev: remove vxlan and nvgre encapsulation commands

If there are no more comments, I think we should accept this series.


* Re: [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
                     ` (3 preceding siblings ...)
  2018-10-03 20:38   ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Thomas Monjalon
@ 2018-10-05 12:57   ` Ferruh Yigit
  2018-10-05 14:00     ` Ori Kam
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
  5 siblings, 1 reply; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-05 12:57 UTC (permalink / raw)
  To: Ori Kam, arybchenko, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 9/26/2018 10:00 PM, Ori Kam wrote:
> This series implements the generic L2/L3 tunnel encapsulation actions
> and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".
> 
> Currently the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have an Ethernet header).
> In addition, the parameter to the encap action is a list of rte items;
> this results in two extra translations, one between the application and
> the action and one from the action to the NIC. This has a negative
> impact on insertion performance.
> 
> Looking forward, there is going to be a need to support many more tunnel
> encapsulations, for example MPLSoGRE and MPLSoUDP.
> Adding each new encapsulation would result in duplicated code:
> for example, the code for handling NVGRE and VXLAN is exactly the same,
> and each new tunnel would have the exact same structure.
> 
> This series introduces a generic encapsulation for L2 tunnel types and a
> generic encapsulation for L3 tunnel types. In addition, the new
> encapsulation commands use a raw buffer in order to save the
> conversion time, both for the application and the PMD.
> 
> [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
> 
> Changes in v2:
> 
> * add missing decap_l3 structure.
> * fix typo.
> 
> 
> 
> Ori Kam (3):
>   ethdev: add generic L2/L3 tunnel encapsulation actions
>   app/testpmd: convert testpmd encap commands to new API
>   ethdev: remove vxlan and nvgre encapsulation commands


Hi Ori,

Can you please rebase the patch on top of latest next-net?

Thanks,
ferruh


* Re: [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-09-26 21:00   ` [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
@ 2018-10-05 12:59     ` Ferruh Yigit
  2018-10-05 13:26       ` Awal, Mohammad Abdul
  2018-10-05 13:27     ` Mohammad Abdul Awal
  1 sibling, 1 reply; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-05 12:59 UTC (permalink / raw)
  To: Declan Doherty, Awal, Mohammad Abdul
  Cc: Ori Kam, arybchenko, stephen, adrien.mazarguil, dev, dekelp,
	thomas, nelio.laranjeiro, yskoh, shahafs

On 9/26/2018 10:00 PM, Ori Kam wrote:
> This patch removes the VXLAN and NVGRE encapsulation commands.
> 
> Those commands are subset of the TUNNEL_ENCAP command so there is no
> need to keep both versions.
> 
> Signed-off-by: Ori Kam <orika@mellanox.com>

Hi Declan, Awal,

I guess these were added by you, can you please check the patch and let us know
if there is an objection?

Thanks,
ferruh


* Re: [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-10-05 12:59     ` Ferruh Yigit
@ 2018-10-05 13:26       ` Awal, Mohammad Abdul
  0 siblings, 0 replies; 53+ messages in thread
From: Awal, Mohammad Abdul @ 2018-10-05 13:26 UTC (permalink / raw)
  To: Yigit, Ferruh, Doherty, Declan
  Cc: Ori Kam, arybchenko, stephen, adrien.mazarguil, dev, dekelp,
	thomas, nelio.laranjeiro, yskoh, shahafs

Hi Ferruh,

The patch looks ok to me. No objection.

Regards,
Awal.

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, October 5, 2018 1:59 PM
> To: Doherty, Declan <declan.doherty@intel.com>; Awal, Mohammad Abdul
> <mohammad.abdul.awal@intel.com>
> Cc: Ori Kam <orika@mellanox.com>; arybchenko@solarflare.com;
> stephen@networkplumber.org; adrien.mazarguil@6wind.com;
> dev@dpdk.org; dekelp@mellanox.com; thomas@monjalon.net;
> nelio.laranjeiro@6wind.com; yskoh@mellanox.com; shahafs@mellanox.com
> Subject: Re: [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation
> commands
> 
> On 9/26/2018 10:00 PM, Ori Kam wrote:
> > This patch removes the VXLAN and NVGRE encapsulation commands.
> >
> > Those commands are subset of the TUNNEL_ENCAP command so there is
> no
> > need to keep both versions.
> >
> > Signed-off-by: Ori Kam <orika@mellanox.com>
> 
> Hi Declan, Awal,
> 
> I guess these were added by you, can you please check the patch and let us
> know
> if there is an objection?
> 
> Thanks,
> ferruh


* Re: [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-09-26 21:00   ` [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
  2018-10-05 12:59     ` Ferruh Yigit
@ 2018-10-05 13:27     ` Mohammad Abdul Awal
  1 sibling, 0 replies; 53+ messages in thread
From: Mohammad Abdul Awal @ 2018-10-05 13:27 UTC (permalink / raw)
  To: Ori Kam, arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs



On 26/09/2018 22:00, Ori Kam wrote:
> This patch removes the VXLAN and NVGRE encapsulation commands.
>
> Those commands are subset of the TUNNEL_ENCAP command so there is no
> need to keep both versions.
>
> Signed-off-by: Ori Kam <orika@mellanox.com>
> ---
>   doc/guides/prog_guide/rte_flow.rst | 107 -------------------------------------
>   lib/librte_ethdev/rte_flow.h       | 103 -----------------------------------
>   2 files changed, 210 deletions(-)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 3ba8018..9e739f3 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1969,113 +1969,6 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
>      | ``ethertype`` | EtherType |
>      +---------------+-----------+
>   
> -Action: ``VXLAN_ENCAP``
> -^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Performs a VXLAN encapsulation action by encapsulating the matched flow in the
> -VXLAN tunnel as defined in the``rte_flow_action_vxlan_encap`` flow items
> -definition.
> -
> -This action modifies the payload of matched flows. The flow definition specified
> -in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
> -VLXAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
> -Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
> -over Layer 3 Networks). The pattern must be terminated with the
> -RTE_FLOW_ITEM_TYPE_END item type.
> -
> -.. _table_rte_flow_action_vxlan_encap:
> -
> -.. table:: VXLAN_ENCAP
> -
> -   +----------------+-------------------------------------+
> -   | Field          | Value                               |
> -   +================+=====================================+
> -   | ``definition`` | Tunnel end-point overlay definition |
> -   +----------------+-------------------------------------+
> -
> -.. _table_rte_flow_action_vxlan_encap_example:
> -
> -.. table:: IPv4 VxLAN flow pattern example.
> -
> -   +-------+----------+
> -   | Index | Item     |
> -   +=======+==========+
> -   | 0     | Ethernet |
> -   +-------+----------+
> -   | 1     | IPv4     |
> -   +-------+----------+
> -   | 2     | UDP      |
> -   +-------+----------+
> -   | 3     | VXLAN    |
> -   +-------+----------+
> -   | 4     | END      |
> -   +-------+----------+
> -
> -Action: ``VXLAN_DECAP``
> -^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Performs a decapsulation action by stripping all headers of the VXLAN tunnel
> -network overlay from the matched flow.
> -
> -The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
> -action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
> -flow pattern does not specify a valid VXLAN tunnel then a
> -RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
> -
> -This action modifies the payload of matched flows.
> -
> -Action: ``NVGRE_ENCAP``
> -^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Performs a NVGRE encapsulation action by encapsulating the matched flow in the
> -NVGRE tunnel as defined in the``rte_flow_action_tunnel_encap`` flow item
> -definition.
> -
> -This action modifies the payload of matched flows. The flow definition specified
> -in the ``rte_flow_action_tunnel_encap`` action structure must defined a valid
> -NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
> -Virtualization Using Generic Routing Encapsulation). The pattern must be
> -terminated with the RTE_FLOW_ITEM_TYPE_END item type.
> -
> -.. _table_rte_flow_action_nvgre_encap:
> -
> -.. table:: NVGRE_ENCAP
> -
> -   +----------------+-------------------------------------+
> -   | Field          | Value                               |
> -   +================+=====================================+
> -   | ``definition`` | NVGRE end-point overlay definition  |
> -   +----------------+-------------------------------------+
> -
> -.. _table_rte_flow_action_nvgre_encap_example:
> -
> -.. table:: IPv4 NVGRE flow pattern example.
> -
> -   +-------+----------+
> -   | Index | Item     |
> -   +=======+==========+
> -   | 0     | Ethernet |
> -   +-------+----------+
> -   | 1     | IPv4     |
> -   +-------+----------+
> -   | 2     | NVGRE    |
> -   +-------+----------+
> -   | 3     | END      |
> -   +-------+----------+
> -
> -Action: ``NVGRE_DECAP``
> -^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Performs a decapsulation action by stripping all headers of the NVGRE tunnel
> -network overlay from the matched flow.
> -
> -The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
> -action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
> -flow pattern does not specify a valid NVGRE tunnel then a
> -RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
> -
> -This action modifies the payload of matched flows.
> -
>   Action: ``TUNNEL_ENCAP``
>   ^^^^^^^^^^^^^^^^^^^^^^^^
>   
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index e29c561..55521e3 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -1473,40 +1473,6 @@ enum rte_flow_action_type {
>   	RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
>   
>   	/**
> -	 * Encapsulate flow in VXLAN tunnel as defined in
> -	 * rte_flow_action_vxlan_encap action structure.
> -	 *
> -	 * See struct rte_flow_action_vxlan_encap.
> -	 */
> -	RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
> -
> -	/**
> -	 * Decapsulate outer most VXLAN tunnel from matched flow.
> -	 *
> -	 * If flow pattern does not define a valid VXLAN tunnel (as specified by
> -	 * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
> -	 * error.
> -	 */
> -	RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
> -
> -	/**
> -	 * Encapsulate flow in NVGRE tunnel defined in the
> -	 * rte_flow_action_nvgre_encap action structure.
> -	 *
> -	 * See struct rte_flow_action_nvgre_encap.
> -	 */
> -	RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
> -
> -	/**
> -	 * Decapsulate outer most NVGRE tunnel from matched flow.
> -	 *
> -	 * If flow pattern does not define a valid NVGRE tunnel (as specified by
> -	 * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
> -	 * error.
> -	 */
> -	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
> -
> -	/**
>   	 * Encapsulate the packet with tunnel header as defined in
>   	 * rte_flow_action_tunnel_encap action structure.
>   	 *
> @@ -1837,75 +1803,6 @@ struct rte_flow_action_of_push_mpls {
>    * @warning
>    * @b EXPERIMENTAL: this structure may change without prior notice
>    *
> - * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
> - *
> - * VXLAN tunnel end-point encapsulation data definition
> - *
> - * The tunnel definition is provided through the flow item pattern, the
> - * provided pattern must conform to RFC7348 for the tunnel specified. The flow
> - * definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
> - * definition up the end item which is specified by RTE_FLOW_ITEM_TYPE_END.
> - *
> - * The mask field allows user to specify which fields in the flow item
> - * definitions can be ignored and which have valid data and can be used
> - * verbatim.
> - *
> - * Note: the last field is not used in the definition of a tunnel and can be
> - * ignored.
> - *
> - * Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
> - *
> - * - ETH / IPV4 / UDP / VXLAN / END
> - * - ETH / IPV6 / UDP / VXLAN / END
> - * - ETH / VLAN / IPV4 / UDP / VXLAN / END
> - *
> - */
> -struct rte_flow_action_vxlan_encap {
> -	/**
> -	 * Encapsulating vxlan tunnel definition
> -	 * (terminated by the END pattern item).
> -	 */
> -	struct rte_flow_item *definition;
> -};
> -
> -/**
> - * @warning
> - * @b EXPERIMENTAL: this structure may change without prior notice
> - *
> - * RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
> - *
> - * NVGRE tunnel end-point encapsulation data definition
> - *
> - * The tunnel definition is provided through the flow item pattern  the
> - * provided pattern must conform with RFC7637. The flow definition must be
> - * provided in order from the RTE_FLOW_ITEM_TYPE_ETH definition up the end item
> - * which is specified by RTE_FLOW_ITEM_TYPE_END.
> - *
> - * The mask field allows user to specify which fields in the flow item
> - * definitions can be ignored and which have valid data and can be used
> - * verbatim.
> - *
> - * Note: the last field is not used in the definition of a tunnel and can be
> - * ignored.
> - *
> - * Valid flow definition for RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP include:
> - *
> - * - ETH / IPV4 / NVGRE / END
> - * - ETH / VLAN / IPV6 / NVGRE / END
> - *
> - */
> -struct rte_flow_action_nvgre_encap {
> -	/**
> -	 * Encapsulating vxlan tunnel definition
> -	 * (terminated by the END pattern item).
> -	 */
> -	struct rte_flow_item *definition;
> -};
> -
> -/**
> - * @warning
> - * @b EXPERIMENTAL: this structure may change without prior notice
> - *
>    * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
>    *
>    * Tunnel end-point encapsulation data definition
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>


* Re: [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-05 12:57   ` Ferruh Yigit
@ 2018-10-05 14:00     ` Ori Kam
  0 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-05 14:00 UTC (permalink / raw)
  To: Ferruh Yigit, arybchenko, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, October 5, 2018 3:57 PM
> To: Ori Kam <orika@mellanox.com>; arybchenko@solarflare.com;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> On 9/26/2018 10:00 PM, Ori Kam wrote:
> > This series implement the generic L2/L3 tunnel encapsulation actions
> > and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> >
> > Currently the encap/decap actions only support encapsulation
> > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > the inner packet has a valid Ethernet header, while L3 encapsulation
> > is where the inner packet doesn't have an Ethernet header).
> > In addition, the parameter to the encap action is a list of rte items;
> > this results in two extra translations, one between the application and
> > the action and one from the action to the NIC. This has a negative
> > impact on insertion performance.
> >
> > Looking forward, there is going to be a need to support many more tunnel
> > encapsulations, for example MPLSoGRE and MPLSoUDP.
> > Adding each new encapsulation would result in duplicated code:
> > for example, the code for handling NVGRE and VXLAN is exactly the same,
> > and each new tunnel would have the exact same structure.
> >
> > This series introduces a generic encapsulation for L2 tunnel types, and a
> > generic encapsulation for L3 tunnel types. In addition, the new
> > encapsulation commands use a raw buffer in order to save the
> > conversion time, both for the application and the PMD.
> >
> >
> >
> > [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
> >
> > Changes in v2:
> >
> > * add missing decap_l3 structure.
> > * fix typo.
> >
> >
> >
> > Ori Kam (3):
> >   ethdev: add generic L2/L3 tunnel encapsulation actions
> >   app/testpmd: convert testpmd encap commands to new API
> >   ethdev: remove vxlan and nvgre encapsulation commands
> 
> 
> Hi Ori,
> 
> Can you please rebase the patch on top of latest next-net?
> 

Sure, I will do it on Sunday.

> Thanks,
> Ferruh

Best,
Ori


* [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
                     ` (4 preceding siblings ...)
  2018-10-05 12:57   ` Ferruh Yigit
@ 2018-10-07 12:57   ` Ori Kam
  2018-10-07 12:57     ` [PATCH v3 1/3] " Ori Kam
                       ` (4 more replies)
  5 siblings, 5 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-07 12:57 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This series implements the generic L2/L3 tunnel encapsulation actions
and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have an Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in two extra translations, one between the application and
the action and one from the action to the NIC. This has a negative
impact on insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in duplicated code:
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel would have the exact same structure.

This series introduces a generic encapsulation for L2 tunnel types and a
generic encapsulation for L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

[1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
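The raw-buffer approach above can be sketched in plain C. The struct below is a stand-in mirroring the `rte_flow_action_tunnel_encap` definition added by patch 1; the fixed header sizes and the `build_vxlan_encap()` helper are illustrative assumptions, not part of the series:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the action structure this series adds to rte_flow.h. */
struct rte_flow_action_tunnel_encap {
	uint8_t *buf;   /**< Encapsulation data. */
	uint16_t size;  /**< Buffer size. */
};

/* Outer headers that make up an L2 VXLAN encapsulation:
 * Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8) = 50 bytes. */
#define ETH_LEN   14
#define IPV4_LEN  20
#define UDP_LEN    8
#define VXLAN_LEN  8

/* Serialize pre-built outer headers into one contiguous buffer, which
 * is what the PMD now receives directly: no rte_flow_item list and no
 * per-field translation at rule-insertion time. Returns the size. */
static uint16_t
build_vxlan_encap(uint8_t *dst,
		  const uint8_t *eth, const uint8_t *ipv4,
		  const uint8_t *udp, const uint8_t *vxlan)
{
	uint16_t off = 0;

	memcpy(dst + off, eth, ETH_LEN);     off += ETH_LEN;
	memcpy(dst + off, ipv4, IPV4_LEN);   off += IPV4_LEN;
	memcpy(dst + off, udp, UDP_LEN);     off += UDP_LEN;
	memcpy(dst + off, vxlan, VXLAN_LEN); off += VXLAN_LEN;
	return off;
}
```

The application fills `buf`/`size` once and the same code path serves VXLAN, NVGRE, or any future tunnel, which is the duplication the series removes.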

v3:
 * rebase on tip.

v2:
 * add missing decap_l3 structure.
 * fix typo.


Ori Kam (3):
  ethdev: add generic L2/L3 tunnel encapsulation actions
  app/testpmd: convert testpmd encap commands to new API
  ethdev: remove vxlan and nvgre encapsulation commands

 app/test-pmd/cmdline_flow.c        | 292 +++++++++++++++++--------------------
 app/test-pmd/config.c              |   2 -
 doc/guides/prog_guide/rte_flow.rst | 115 ++++++---------
 lib/librte_ethdev/rte_flow.c       |  44 +-----
 lib/librte_ethdev/rte_flow.h       | 108 ++++++--------
 5 files changed, 231 insertions(+), 330 deletions(-)

-- 
1.8.3.1


* [PATCH v3 1/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
@ 2018-10-07 12:57     ` Ori Kam
  2018-10-07 12:57     ` [PATCH v3 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-07 12:57 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have an Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in two extra translations, one between the application and
the action and one from the action to the NIC. This has a negative
impact on insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in duplicated code:
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel would have the exact same structure.

This patch introduces a generic encapsulation for L2 tunnel types and a
generic encapsulation for L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 82 ++++++++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.c       |  7 ++++
 lib/librte_ethdev/rte_flow.h       | 79 ++++++++++++++++++++++++++++++++++++
 3 files changed, 168 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 1b17f6e..497afc2 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,88 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``TUNNEL_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a tunnel encapsulation action by encapsulating the matched flow with
+a tunnel header as defined in the ``rte_flow_action_tunnel_encap``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap:
+
+.. table:: TUNNEL_ENCAP
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP``
+action is specified, must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+Action: ``TUNNEL_ENCAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replace the packet layer 2 header with the encapsulation tunnel header
+as defined in the ``rte_flow_action_tunnel_encap_l3``.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_tunnel_encap_l3`` action structure must define a valid
+tunnel packet overlay.
+
+.. _table_rte_flow_action_tunnel_encap_l3:
+
+.. table:: TUNNEL_ENCAP_L3
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Tunnel end-point overlay definition |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
+Action: ``TUNNEL_DECAP_L3``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Replace the packet tunnel network overlay from the matched flow with
+layer 2 header as defined by ``rte_flow_action_tunnel_decap_l3``.
+
+The flow items pattern defined for the flow rule with which a ``TUNNEL_DECAP_L3``
+action is specified, must define a valid tunnel. If the
+flow pattern does not specify a valid tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+.. _table_rte_flow_action_tunnel_decap_l3:
+
+.. table:: TUNNEL_DECAP_L3
+
+   +----------------+-------------------------------------+
+   | Field          | Value                               |
+   +================+=====================================+
+   | ``buf``        | Layer 2 definition                  |
+   +----------------+-------------------------------------+
+   | ``size``       | The size of the buffer in bytes     |
+   +----------------+-------------------------------------+
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 9c56a97..4b548b8 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -123,6 +123,13 @@ struct rte_flow_desc_data {
 	MK_FLOW_ACTION(VXLAN_DECAP, 0),
 	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
 	MK_FLOW_ACTION(NVGRE_DECAP, 0),
+	MK_FLOW_ACTION(TUNNEL_ENCAP,
+		       sizeof(struct rte_flow_action_tunnel_encap)),
+	MK_FLOW_ACTION(TUNNEL_DECAP, 0),
+	MK_FLOW_ACTION(TUNNEL_ENCAP_L3,
+		       sizeof(struct rte_flow_action_tunnel_encap_l3)),
+	MK_FLOW_ACTION(TUNNEL_DECAP_L3,
+		       sizeof(struct rte_flow_action_tunnel_decap_l3)),
 };
 
 static int
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index f062ffe..76b4759 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1506,6 +1506,40 @@ enum rte_flow_action_type {
 	 * error.
 	 */
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
+
+	/**
+	 * Encapsulate the packet with tunnel header as defined in
+	 * rte_flow_action_tunnel_encap action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP,
+
+	/**
+	 * Decapsulate outer most tunnel from matched flow.
+	 *
+	 * The flow pattern must have a valid tunnel header
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,
+
+	/**
+	 * Remove the packet L2 header and encapsulate the
+	 * packet with tunnel header as defined in
+	 * rte_flow_action_tunnel_encap_l3 action structure.
+	 *
+	 * See struct rte_flow_action_tunnel_encap_l3.
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3,
+
+	/**
+	 * Decapsulate outer most tunnel from matched flow,
+	 * and add L2 layer.
+	 *
+	 * The flow pattern must have a valid tunnel header.
+	 *
+	 * See struct rte_flow_action_tunnel_decap_l3
+	 */
+	RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3,
 };
 
 /**
@@ -1869,6 +1903,51 @@ struct rte_flow_action_nvgre_encap {
 	struct rte_flow_item *definition;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through raw buffer.
+ */
+struct rte_flow_action_tunnel_encap {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP_L3
+ *
+ * Tunnel end-point encapsulation data definition
+ *
+ * The encapsulation header is provided through raw buffer.
+ */
+struct rte_flow_action_tunnel_encap_l3 {
+	uint8_t *buf; /**< Encapsulation data. */
+	uint16_t size; /**< Buffer size. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP_L3
+ *
+ * Layer 2 definition to encapsulate the packet after decapsulating the packet.
+ *
+ * The layer 2 definition header is provided through raw buffer.
+ */
+struct rte_flow_action_tunnel_decap_l3 {
+	uint8_t *buf; /**< L2 data. */
+	uint16_t size; /**< Buffer size. */
+};
+
 /*
  * Definition of a single action.
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread
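For readers who want to see how an application would drive the new action, here is a short, self-contained sketch of filling the raw encapsulation buffer for RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP. The packed header structs and the `tunnel_encap` mirror struct are simplified stand-ins for the real rte_ headers and for `struct rte_flow_action_tunnel_encap`; this is an illustration of the buffer layout, not DPDK code:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h> /* htons/htonl */

/* Mirrors struct rte_flow_action_tunnel_encap from the patch. */
struct tunnel_encap {
	uint8_t *buf;  /* Encapsulation data. */
	uint16_t size; /* Buffer size. */
};

/* Simplified, packed stand-ins for the real header structs. */
struct eth_hdr   { uint8_t dst[6], src[6]; uint16_t type; } __attribute__((packed));
struct ipv4_hdr  { uint8_t vihl, tos; uint16_t len, id, frag;
		   uint8_t ttl, proto; uint16_t csum;
		   uint32_t src, dst; } __attribute__((packed));
struct udp_hdr   { uint16_t src, dst, len, csum; } __attribute__((packed));
struct vxlan_hdr { uint32_t flags_rsvd, vni_rsvd; } __attribute__((packed));

/* Build ETH / IPV4 / UDP / VXLAN encapsulation data into a flat buffer
 * and record the total size, as the new action expects. */
static void build_vxlan_encap(struct tunnel_encap *action, uint8_t *buf)
{
	uint8_t *p = buf;
	struct eth_hdr eth = { .type = htons(0x0800) };      /* IPv4 */
	struct ipv4_hdr ip = { .vihl = 0x45, .proto = 17 };  /* UDP */
	struct udp_hdr udp = { .dst = htons(4789) };         /* VXLAN port */
	struct vxlan_hdr vx = { .flags_rsvd = htonl(0x08000000) };

	memcpy(p, &eth, sizeof(eth)); p += sizeof(eth);
	memcpy(p, &ip, sizeof(ip));   p += sizeof(ip);
	memcpy(p, &udp, sizeof(udp)); p += sizeof(udp);
	memcpy(p, &vx, sizeof(vx));   p += sizeof(vx);
	action->buf = buf;
	action->size = (uint16_t)(p - buf); /* 14 + 20 + 8 + 8 = 50 bytes */
}
```

In real code the filled `buf`/`size` pair would be passed as the `conf` of an `RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP` action; no item-array translation is needed on either side, which is the performance point of the series.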

* [PATCH v3 2/3] app/testpmd: convert testpmd encap commands to new API
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
  2018-10-07 12:57     ` [PATCH v3 1/3] " Ori Kam
@ 2018-10-07 12:57     ` Ori Kam
  2018-10-07 12:57     ` [PATCH v3 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-07 12:57 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently there are two encapsulation commands in testpmd, one for VXLAN
and one for NVGRE; both of those commands use the old rte_flow encap
actions.

This commit updates the commands to work with the new tunnel encap
actions.

The reason we keep separate commands, one for VXLAN and one for NVGRE,
is ease of use in testpmd; both commands now use the same rte_flow
action for tunnel encapsulation.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline_flow.c | 292 +++++++++++++++++++++-----------------------
 app/test-pmd/config.c       |   2 -
 2 files changed, 137 insertions(+), 157 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f926060..c9dba79 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -262,37 +262,13 @@ struct action_rss_data {
 	uint16_t queue[ACTION_RSS_QUEUE_NUM];
 };
 
-/** Maximum number of items in struct rte_flow_action_vxlan_encap. */
-#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6
-
-/** Storage for struct rte_flow_action_vxlan_encap including external data. */
-struct action_vxlan_encap_data {
-	struct rte_flow_action_vxlan_encap conf;
-	struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_udp item_udp;
-	struct rte_flow_item_vxlan item_vxlan;
-};
+/** Maximum buffer size for the encap data. */
+#define ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE 64
 
-/** Maximum number of items in struct rte_flow_action_nvgre_encap. */
-#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5
-
-/** Storage for struct rte_flow_action_nvgre_encap including external data. */
-struct action_nvgre_encap_data {
-	struct rte_flow_action_nvgre_encap conf;
-	struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM];
-	struct rte_flow_item_eth item_eth;
-	struct rte_flow_item_vlan item_vlan;
-	union {
-		struct rte_flow_item_ipv4 item_ipv4;
-		struct rte_flow_item_ipv6 item_ipv6;
-	};
-	struct rte_flow_item_nvgre item_nvgre;
+/** Storage for struct rte_flow_action_tunnel_encap including external data. */
+struct action_tunnel_encap_data {
+	struct rte_flow_action_tunnel_encap conf;
+	uint8_t buf[ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE];
 };
 
 /** Maximum number of subsequent tokens and arguments on the stack. */
@@ -2438,8 +2414,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "vxlan_encap",
 		.help = "VXLAN encapsulation, uses configuration set by \"set"
 			" vxlan\"",
-		.priv = PRIV_ACTION(VXLAN_ENCAP,
-				    sizeof(struct action_vxlan_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_vxlan_encap,
 	},
@@ -2448,7 +2424,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the VXLAN tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(VXLAN_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -2456,8 +2432,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.name = "nvgre_encap",
 		.help = "NVGRE encapsulation, uses configuration set by \"set"
 			" nvgre\"",
-		.priv = PRIV_ACTION(NVGRE_ENCAP,
-				    sizeof(struct action_nvgre_encap_data)),
+		.priv = PRIV_ACTION(TUNNEL_ENCAP,
+				    sizeof(struct action_tunnel_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_nvgre_encap,
 	},
@@ -2466,7 +2442,7 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.help = "Performs a decapsulation action by stripping all"
 			" headers of the NVGRE tunnel network overlay from the"
 			" matched flow.",
-		.priv = PRIV_ACTION(NVGRE_DECAP, 0),
+		.priv = PRIV_ACTION(TUNNEL_DECAP, 0),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
@@ -3034,6 +3010,9 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return len;
 }
 
+/** IP next protocol UDP. */
+#define IP_PROTO_UDP 0x11
+
 /** Parse VXLAN encap action. */
 static int
 parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token,
@@ -3042,7 +3021,32 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_vxlan_encap_data *action_vxlan_encap_data;
+	struct action_tunnel_encap_data *action_vxlan_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = vxlan_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = vxlan_encap_conf.ipv4_src,
+			.dst_addr = vxlan_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = vxlan_encap_conf.udp_src,
+			.dst_port = vxlan_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_vxlan vxlan = { .flags = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3057,79 +3061,55 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_vxlan_encap_data = ctx->object;
-	*action_vxlan_encap_data = (struct action_vxlan_encap_data){
-		.conf = (struct rte_flow_action_vxlan_encap){
-			.definition = action_vxlan_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_vxlan_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_vxlan_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_vxlan_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_UDP,
-				.spec = &action_vxlan_encap_data->item_udp,
-				.mask = &rte_flow_item_udp_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VXLAN,
-				.spec = &action_vxlan_encap_data->item_vxlan,
-				.mask = &rte_flow_item_vxlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = vxlan_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-			.src_addr = vxlan_encap_conf.ipv4_src,
-			.dst_addr = vxlan_encap_conf.ipv4_dst,
-		},
-		.item_udp.hdr = {
-			.src_port = vxlan_encap_conf.udp_src,
-			.dst_port = vxlan_encap_conf.udp_dst,
+	*action_vxlan_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_vxlan_encap_data->buf,
 		},
-		.item_vxlan.flags = 0,
+		.buf = {},
 	};
-	memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes,
+	header = action_vxlan_encap_data->buf;
+	if (vxlan_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (vxlan_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       vxlan_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!vxlan_encap_conf.select_ipv4) {
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (vxlan_encap_conf.select_vlan) {
+		if (vxlan_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (vxlan_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &vxlan_encap_conf.ipv6_src,
 		       sizeof(vxlan_encap_conf.ipv6_src));
-		memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &vxlan_encap_conf.ipv6_dst,
 		       sizeof(vxlan_encap_conf.ipv6_dst));
-		action_vxlan_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_vxlan_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!vxlan_encap_conf.select_vlan)
-		action_vxlan_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni,
-	       RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(vxlan.vni, vxlan_encap_conf.vni, RTE_DIM(vxlan_encap_conf.vni));
+	memcpy(header, &vxlan, sizeof(vxlan));
+	header += sizeof(vxlan);
+	action_vxlan_encap_data->conf.size = header -
+		action_vxlan_encap_data->buf;
 	action->conf = &action_vxlan_encap_data->conf;
 	return ret;
 }
@@ -3142,7 +3122,26 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 {
 	struct buffer *out = buf;
 	struct rte_flow_action *action;
-	struct action_nvgre_encap_data *action_nvgre_encap_data;
+	struct action_tunnel_encap_data *action_nvgre_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = nvgre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = nvgre_encap_conf.ipv4_src,
+			.dst_addr = nvgre_encap_conf.ipv4_dst,
+			.next_proto_id = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IP_PROTO_UDP,
+		},
+	};
+	struct rte_flow_item_nvgre nvgre = { .flow_id = 0, };
+	uint8_t *header;
 	int ret;
 
 	ret = parse_vc(ctx, token, str, len, buf, size);
@@ -3157,70 +3156,53 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	/* Point to selected object. */
 	ctx->object = out->args.vc.data;
 	ctx->objmask = NULL;
-	/* Set up default configuration. */
+	/* Copy the headers to the buffer. */
 	action_nvgre_encap_data = ctx->object;
-	*action_nvgre_encap_data = (struct action_nvgre_encap_data){
-		.conf = (struct rte_flow_action_nvgre_encap){
-			.definition = action_nvgre_encap_data->items,
-		},
-		.items = {
-			{
-				.type = RTE_FLOW_ITEM_TYPE_ETH,
-				.spec = &action_nvgre_encap_data->item_eth,
-				.mask = &rte_flow_item_eth_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_VLAN,
-				.spec = &action_nvgre_encap_data->item_vlan,
-				.mask = &rte_flow_item_vlan_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_IPV4,
-				.spec = &action_nvgre_encap_data->item_ipv4,
-				.mask = &rte_flow_item_ipv4_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_NVGRE,
-				.spec = &action_nvgre_encap_data->item_nvgre,
-				.mask = &rte_flow_item_nvgre_mask,
-			},
-			{
-				.type = RTE_FLOW_ITEM_TYPE_END,
-			},
-		},
-		.item_eth.type = 0,
-		.item_vlan = {
-			.tci = nvgre_encap_conf.vlan_tci,
-			.inner_type = 0,
-		},
-		.item_ipv4.hdr = {
-		       .src_addr = nvgre_encap_conf.ipv4_src,
-		       .dst_addr = nvgre_encap_conf.ipv4_dst,
+	*action_nvgre_encap_data = (struct action_tunnel_encap_data) {
+		.conf = (struct rte_flow_action_tunnel_encap){
+			.buf = action_nvgre_encap_data->buf,
 		},
-		.item_nvgre.flow_id = 0,
+		.buf = {},
 	};
-	memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes,
+	header = action_nvgre_encap_data->buf;
+	if (nvgre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (nvgre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
 	       nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN);
-	memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes,
+	memcpy(eth.src.addr_bytes,
 	       nvgre_encap_conf.eth_src, ETHER_ADDR_LEN);
-	if (!nvgre_encap_conf.select_ipv4) {
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr,
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (nvgre_encap_conf.select_vlan) {
+		if (nvgre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (nvgre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
 		       &nvgre_encap_conf.ipv6_src,
 		       sizeof(nvgre_encap_conf.ipv6_src));
-		memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr,
+		memcpy(&ipv6.hdr.dst_addr,
 		       &nvgre_encap_conf.ipv6_dst,
 		       sizeof(nvgre_encap_conf.ipv6_dst));
-		action_nvgre_encap_data->items[2] = (struct rte_flow_item){
-			.type = RTE_FLOW_ITEM_TYPE_IPV6,
-			.spec = &action_nvgre_encap_data->item_ipv6,
-			.mask = &rte_flow_item_ipv6_mask,
-		};
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
 	}
-	if (!nvgre_encap_conf.select_vlan)
-		action_nvgre_encap_data->items[1].type =
-			RTE_FLOW_ITEM_TYPE_VOID;
-	memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni,
-	       RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(nvgre.tni, nvgre_encap_conf.tni, RTE_DIM(nvgre_encap_conf.tni));
+	memcpy(header, &nvgre, sizeof(nvgre));
+	header += sizeof(nvgre);
+	action_nvgre_encap_data->conf.size = header -
+		action_nvgre_encap_data->buf;
 	action->conf = &action_nvgre_encap_data->conf;
 	return ret;
 }
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 009c92c..ccd9a18 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1009,8 +1009,6 @@ void print_valid_ports(void)
 	printf("Set MTU failed. diag=%d\n", diag);
 }
 
-/* Generic flow management functions. */
-
 /** Generate a port_flow entry from attributes/pattern/actions. */
 static struct port_flow *
 port_flow_new(const struct rte_flow_attr *attr,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread
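The conversion above replaces the per-item arrays with a single pattern: copy each header into a flat buffer with memcpy, advance a cursor, and record the final length in `conf.size`. A minimal, hypothetical sketch of that pattern with bounds checking added (the names are illustrative, not part of the patch):

```c
#include <stdint.h>
#include <string.h>

/* Mirrors ACTION_TUNNEL_ENCAP_MAX_BUFFER_SIZE from the patch. */
#define ENCAP_MAX_BUF 64

struct encap_buf {
	uint8_t buf[ENCAP_MAX_BUF];
	uint16_t size; /* bytes used so far */
};

/* Append one header to the encap buffer.
 * Returns 0 on success, -1 if the header would overflow the buffer. */
static int encap_append(struct encap_buf *e, const void *hdr, size_t len)
{
	if ((size_t)e->size + len > ENCAP_MAX_BUF)
		return -1;
	memcpy(e->buf + e->size, hdr, len);
	e->size += (uint16_t)len;
	return 0;
}
```

The testpmd parser does the same accumulation with a bare `header` pointer and no overflow check, relying on the headers always fitting in the 64-byte buffer; a helper like this makes the invariant explicit.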

* [PATCH v3 3/3] ethdev: remove vxlan and nvgre encapsulation commands
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
  2018-10-07 12:57     ` [PATCH v3 1/3] " Ori Kam
  2018-10-07 12:57     ` [PATCH v3 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
@ 2018-10-07 12:57     ` Ori Kam
  2018-10-09 16:48     ` [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ferruh Yigit
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
  4 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-07 12:57 UTC (permalink / raw)
  To: arybchenko, ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This patch removes the VXLAN and NVGRE encapsulation commands.

Those commands are a subset of the TUNNEL_ENCAP command, so there is no
need to keep both versions.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 doc/guides/prog_guide/rte_flow.rst | 107 -------------------------------------
 lib/librte_ethdev/rte_flow.c       |  37 -------------
 lib/librte_ethdev/rte_flow.h       | 103 -----------------------------------
 3 files changed, 247 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 497afc2..126e5d3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1969,113 +1969,6 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
    | ``ethertype`` | EtherType |
    +---------------+-----------+
 
-Action: ``VXLAN_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a VXLAN encapsulation action by encapsulating the matched flow in the
-VXLAN tunnel as defined in the``rte_flow_action_vxlan_encap`` flow items
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must define a valid
-VLXAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
-Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
-over Layer 3 Networks). The pattern must be terminated with the
-RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_vxlan_encap:
-
-.. table:: VXLAN_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | Tunnel end-point overlay definition |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_vxlan_encap_example:
-
-.. table:: IPv4 VxLAN flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | UDP      |
-   +-------+----------+
-   | 3     | VXLAN    |
-   +-------+----------+
-   | 4     | END      |
-   +-------+----------+
-
-Action: ``VXLAN_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the VXLAN tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
-action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
-flow pattern does not specify a valid VXLAN tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
-Action: ``NVGRE_ENCAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a NVGRE encapsulation action by encapsulating the matched flow in the
-NVGRE tunnel as defined in the``rte_flow_action_tunnel_encap`` flow item
-definition.
-
-This action modifies the payload of matched flows. The flow definition specified
-in the ``rte_flow_action_tunnel_encap`` action structure must defined a valid
-NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
-Virtualization Using Generic Routing Encapsulation). The pattern must be
-terminated with the RTE_FLOW_ITEM_TYPE_END item type.
-
-.. _table_rte_flow_action_nvgre_encap:
-
-.. table:: NVGRE_ENCAP
-
-   +----------------+-------------------------------------+
-   | Field          | Value                               |
-   +================+=====================================+
-   | ``definition`` | NVGRE end-point overlay definition  |
-   +----------------+-------------------------------------+
-
-.. _table_rte_flow_action_nvgre_encap_example:
-
-.. table:: IPv4 NVGRE flow pattern example.
-
-   +-------+----------+
-   | Index | Item     |
-   +=======+==========+
-   | 0     | Ethernet |
-   +-------+----------+
-   | 1     | IPv4     |
-   +-------+----------+
-   | 2     | NVGRE    |
-   +-------+----------+
-   | 3     | END      |
-   +-------+----------+
-
-Action: ``NVGRE_DECAP``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Performs a decapsulation action by stripping all headers of the NVGRE tunnel
-network overlay from the matched flow.
-
-The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
-action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
-flow pattern does not specify a valid NVGRE tunnel then a
-RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
-
-This action modifies the payload of matched flows.
-
 Action: ``TUNNEL_ENCAP``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 4b548b8..8a2e074 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -119,10 +119,6 @@ struct rte_flow_desc_data {
 		       sizeof(struct rte_flow_action_of_pop_mpls)),
 	MK_FLOW_ACTION(OF_PUSH_MPLS,
 		       sizeof(struct rte_flow_action_of_push_mpls)),
-	MK_FLOW_ACTION(VXLAN_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
-	MK_FLOW_ACTION(VXLAN_DECAP, 0),
-	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
-	MK_FLOW_ACTION(NVGRE_DECAP, 0),
 	MK_FLOW_ACTION(TUNNEL_ENCAP,
 		       sizeof(struct rte_flow_action_tunnel_encap)),
 	MK_FLOW_ACTION(TUNNEL_DECAP, 0),
@@ -427,16 +423,11 @@ enum rte_flow_conv_item_spec_type {
 	switch (action->type) {
 		union {
 			const struct rte_flow_action_rss *rss;
-			const struct rte_flow_action_vxlan_encap *vxlan_encap;
-			const struct rte_flow_action_nvgre_encap *nvgre_encap;
 		} src;
 		union {
 			struct rte_flow_action_rss *rss;
-			struct rte_flow_action_vxlan_encap *vxlan_encap;
-			struct rte_flow_action_nvgre_encap *nvgre_encap;
 		} dst;
 		size_t tmp;
-		int ret;
 
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		src.rss = action->conf;
@@ -470,34 +461,6 @@ enum rte_flow_conv_item_spec_type {
 			off += tmp;
 		}
 		break;
-	case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
-	case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
-		src.vxlan_encap = action->conf;
-		dst.vxlan_encap = buf;
-		RTE_BUILD_BUG_ON(sizeof(*src.vxlan_encap) !=
-				 sizeof(*src.nvgre_encap) ||
-				 offsetof(struct rte_flow_action_vxlan_encap,
-					  definition) !=
-				 offsetof(struct rte_flow_action_nvgre_encap,
-					  definition));
-		off = sizeof(*dst.vxlan_encap);
-		if (src.vxlan_encap->definition) {
-			off = RTE_ALIGN_CEIL
-				(off, sizeof(*dst.vxlan_encap->definition));
-			ret = rte_flow_conv
-				(RTE_FLOW_CONV_OP_PATTERN,
-				 (void *)((uintptr_t)dst.vxlan_encap + off),
-				 size > off ? size - off : 0,
-				 src.vxlan_encap->definition, NULL);
-			if (ret < 0)
-				return 0;
-			if (size >= off + ret)
-				dst.vxlan_encap->definition =
-					(void *)((uintptr_t)dst.vxlan_encap +
-						 off);
-			off += ret;
-		}
-		break;
 	default:
 		off = rte_flow_desc_action[action->type].size;
 		rte_memcpy(buf, action->conf, (size > off ? off : size));
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 76b4759..0e7e0a2 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1474,40 +1474,6 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
 
 	/**
-	 * Encapsulate flow in VXLAN tunnel as defined in
-	 * rte_flow_action_vxlan_encap action structure.
-	 *
-	 * See struct rte_flow_action_vxlan_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
-
-	/**
-	 * Decapsulate outer most VXLAN tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid VXLAN tunnel (as specified by
-	 * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
-
-	/**
-	 * Encapsulate flow in NVGRE tunnel defined in the
-	 * rte_flow_action_nvgre_encap action structure.
-	 *
-	 * See struct rte_flow_action_nvgre_encap.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
-
-	/**
-	 * Decapsulate outer most NVGRE tunnel from matched flow.
-	 *
-	 * If flow pattern does not define a valid NVGRE tunnel (as specified by
-	 * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
-	 * error.
-	 */
-	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
-
-	/**
 	 * Encapsulate the packet with tunnel header as defined in
 	 * rte_flow_action_tunnel_encap action structure.
 	 *
@@ -1838,75 +1804,6 @@ struct rte_flow_action_of_push_mpls {
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
  *
- * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
- *
- * VXLAN tunnel end-point encapsulation data definition
- *
- * The tunnel definition is provided through the flow item pattern, the
- * provided pattern must conform to RFC7348 for the tunnel specified. The flow
- * definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
- * definition up the end item which is specified by RTE_FLOW_ITEM_TYPE_END.
- *
- * The mask field allows user to specify which fields in the flow item
- * definitions can be ignored and which have valid data and can be used
- * verbatim.
- *
- * Note: the last field is not used in the definition of a tunnel and can be
- * ignored.
- *
- * Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
- *
- * - ETH / IPV4 / UDP / VXLAN / END
- * - ETH / IPV6 / UDP / VXLAN / END
- * - ETH / VLAN / IPV4 / UDP / VXLAN / END
- *
- */
-struct rte_flow_action_vxlan_encap {
-	/**
-	 * Encapsulating vxlan tunnel definition
-	 * (terminated by the END pattern item).
-	 */
-	struct rte_flow_item *definition;
-};
-
-/**
- * @warning
- * @b EXPERIMENTAL: this structure may change without prior notice
- *
- * RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
- *
- * NVGRE tunnel end-point encapsulation data definition
- *
- * The tunnel definition is provided through the flow item pattern  the
- * provided pattern must conform with RFC7637. The flow definition must be
- * provided in order from the RTE_FLOW_ITEM_TYPE_ETH definition up the end item
- * which is specified by RTE_FLOW_ITEM_TYPE_END.
- *
- * The mask field allows user to specify which fields in the flow item
- * definitions can be ignored and which have valid data and can be used
- * verbatim.
- *
- * Note: the last field is not used in the definition of a tunnel and can be
- * ignored.
- *
- * Valid flow definition for RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP include:
- *
- * - ETH / IPV4 / NVGRE / END
- * - ETH / VLAN / IPV6 / NVGRE / END
- *
- */
-struct rte_flow_action_nvgre_encap {
-	/**
-	 * Encapsulating vxlan tunnel definition
-	 * (terminated by the END pattern item).
-	 */
-	struct rte_flow_item *definition;
-};
-
-/**
- * @warning
- * @b EXPERIMENTAL: this structure may change without prior notice
- *
  * RTE_FLOW_ACTION_TYPE_TUNNEL_ENCAP
  *
  * Tunnel end-point encapsulation data definition
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
                       ` (2 preceding siblings ...)
  2018-10-07 12:57     ` [PATCH v3 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
@ 2018-10-09 16:48     ` Ferruh Yigit
  2018-10-10  6:45       ` Andrew Rybchenko
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
  4 siblings, 1 reply; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-09 16:48 UTC (permalink / raw)
  To: Ori Kam, arybchenko, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/7/2018 1:57 PM, Ori Kam wrote:
> This series implements the generic L2/L3 tunnel encapsulation actions
> and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".
> 
> Currently the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have an Ethernet header).
> In addition, the parameter to the encap action is a list of rte items;
> this results in two extra translations, from the application to the action
> and from the action to the NIC. This has a negative impact on
> insertion performance.
> 
> Looking forward, there is going to be a need to support many more tunnel
> encapsulations, for example MPLSoGRE and MPLSoUDP.
> Adding each new encapsulation will result in duplication of code.
> For example, the code for handling NVGRE and VXLAN is exactly the same,
> and each new tunnel will have the same exact structure.
> 
> This series introduces a generic encapsulation for L2 tunnel types and
> a generic encapsulation for L3 tunnel types. In addition, the new
> encapsulation commands use a raw buffer in order to save the
> conversion time, both for the application and the PMD.
> 
> [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
> 
> v3:
>  * rebase on tip.
> 
> v2:
>  * add missing decap_l3 structure.
>  * fix typo.
> 
> 
> Ori Kam (3):
>   ethdev: add generic L2/L3 tunnel encapsulation actions
>   app/testpmd: convert testpmd encap commands to new API
>   ethdev: remove vxlan and nvgre encapsulation commands

A reminder for this patchset; any reviews are welcome.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-09 16:48     ` [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ferruh Yigit
@ 2018-10-10  6:45       ` Andrew Rybchenko
  2018-10-10  9:00         ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-10  6:45 UTC (permalink / raw)
  To: Ferruh Yigit, Ori Kam, stephen, adrien.mazarguil, Declan Doherty
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/9/18 7:48 PM, Ferruh Yigit wrote:
> On 10/7/2018 1:57 PM, Ori Kam wrote:
>> This series implements the generic L2/L3 tunnel encapsulation actions
>> and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".
>>
>> Currently the encap/decap actions only support encapsulation
>> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
>> the inner packet has a valid Ethernet header, while L3 encapsulation
>> is where the inner packet doesn't have an Ethernet header).
>> In addition, the parameter to the encap action is a list of rte items;
>> this results in two extra translations, from the application to the action
>> and from the action to the NIC. This has a negative impact on
>> insertion performance.
>>
>> Looking forward, there is going to be a need to support many more tunnel
>> encapsulations, for example MPLSoGRE and MPLSoUDP.
>> Adding each new encapsulation will result in duplication of code.
>> For example, the code for handling NVGRE and VXLAN is exactly the same,
>> and each new tunnel will have the same exact structure.
>>
>> This series introduces a generic encapsulation for L2 tunnel types and
>> a generic encapsulation for L3 tunnel types. In addition, the new
>> encapsulation commands use a raw buffer in order to save the
>> conversion time, both for the application and the PMD.
>>
>> [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
>>
>> v3:
>>   * rebase on tip.
>>
>> v2:
>>   * add missing decap_l3 structure.
>>   * fix typo.
>>
>>
>> Ori Kam (3):
>>    ethdev: add generic L2/L3 tunnel encapsulation actions
>>    app/testpmd: convert testpmd encap commands to new API
>>    ethdev: remove vxlan and nvgre encapsulation commands
> Reminder of this patchset, any reviews welcome.

I've added the author of previous actions in recipients.

I like the idea to generalize encap/decap actions. It makes a bit harder
for reader to find which encap/decap actions are supported in fact,
but it changes nothing for automated usage in the code - just try it
(as a generic way used in rte_flow).

Arguments about the way encap/decap headers are specified (flow items
vs raw) sound sensible, but I'm not sure about them.
It would be simpler if the tunnel header were appended or removed
as is, but as I understand it, that is not true. For example, the IPv4 ID will be
different in incoming packets to be decapsulated, and different values
should be used on encapsulation. Checksums will be different as well (but
offloaded in any case).

The current way allows specifying which fields do not matter and which ones
must match. It allows saying that, for example, a VNI match is sufficient
to decapsulate.

Also, the arguments assume that the action input is accepted as is by the HW.
That could be true, but could just as well be false, and the HW interface may
require parsed input (i.e. the driver must parse the input buffer and extract
the required fields of the packet headers).

So, I'd say no. It should be better motivated if we change existing
approach (even advertised as experimental).

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10  6:45       ` Andrew Rybchenko
@ 2018-10-10  9:00         ` Ori Kam
  2018-10-10  9:30           ` Andrew Rybchenko
  2018-10-10 12:02           ` Adrien Mazarguil
  0 siblings, 2 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-10  9:00 UTC (permalink / raw)
  To: Andrew Rybchenko, Ferruh Yigit, stephen, Adrien Mazarguil,
	Declan Doherty
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler



> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Wednesday, October 10, 2018 9:45 AM
> To: Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam <orika@mellanox.com>;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>; Declan Doherty <declan.doherty@intel.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> On 10/9/18 7:48 PM, Ferruh Yigit wrote:
> On 10/7/2018 1:57 PM, Ori Kam wrote:
> This series implement the generic L2/L3 tunnel encapsulation actions
> and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> 
> Currently the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have the Ethernet header).
> In addition the parameter to the encap action is a list of rte items,
> this results in 2 extra translations, between the application and the action
> and from the action to the NIC. This results in a negative impact on the
> insertion performance.
> 
> Looking forward, there is going to be a need to support many more tunnel
> encapsulations, for example MPLSoGRE and MPLSoUDP.
> Adding each new encapsulation will result in duplication of code;
> for example, the code for handling NVGRE and VXLAN is exactly the same,
> and each new tunnel will have the same exact structure.
> 
> This series introduces a generic encapsulation for L2 tunnel types and a
> generic encapsulation for L3 tunnel types. In addition, the new
> encapsulation commands use a raw buffer in order to save the
> conversion time, both for the application and the PMD.
> 
> [1] https://mails.dpdk.org/archives/dev/2018-August/109944.html
> 
> v3:
>  * rebase on tip.
> 
> v2:
>  * add missing decap_l3 structure.
>  * fix typo.
> 
> 
> Ori Kam (3):
>   ethdev: add generic L2/L3 tunnel encapsulation actions
>   app/testpmd: convert testpmd encap commands to new API
>   ethdev: remove vxlan and nvgre encapsulation commands
> 
> Reminder of this patchset, any reviews welcome.
> 
> I've added the author of previous actions in recipients.
> 
> I like the idea to generalize encap/decap actions. It makes a bit harder
> for reader to find which encap/decap actions are supported in fact,
> but it changes nothing for automated usage in the code - just try it
> (as a generic way used in rte_flow).
> 

Even now the user doesn't know which encapsulations are supported, since
that is PMD and sometimes kernel dependent. On the other hand, it simplifies
adding encapsulations for specific customers, sometimes with just a FW update.

> Arguments about a way of encap/decap headers specification (flow items
> vs raw) sound sensible, but I'm not sure about it.
> It would be simpler if the tunnel header were appended or removed
> as is, but as I understand it is not true. For example, IPv4 ID will be
> different in incoming packets to be decapsulated and different values
> should be used on encapsulation. Checksums will be different (but
> offloaded in any case).
> 

I'm not sure I understand your comment.
Decapsulation is independent of encapsulation; for example, if we decap an
L2 tunnel type then there are no parameters at all, the NIC just removes
the outer layers.

> Current way allows to specify which fields do not matter and which one
> must match. It allows to say that, for example, VNI match is sufficient
> to decapsulate.
> 

Encapsulation, by definition, is a list of headers that should
encapsulate the packet, so I don't understand your comment about matching
fields. Matching is based on the flow, and the encapsulation is just data
that should be added on top of the packet.

> Also arguments assume that action input is accepted as is by the HW.
> It could be true, but could be obviously false and HW interface may
> require parsed input (i.e. driver must parse the input buffer and extract
> required fields of packet headers).
> 

You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed input.
No driver knows the rte_flow structures internally, so in any case there has to be
some translation between the encapsulation data and the NIC data.
I agree that writing the translation code can be harder with this approach,
but that code is only written once, and the insertion speed is much higher this way.
Also, like I said, some virtual switches already store this data in a raw buffer
(they update only specific fields), so this also saves time for the application when
creating a rule.

> So, I'd say no. It should be better motivated if we change existing
> approach (even advertised as experimental).

I think the reasons I gave are very good motivation to change the approach.
Please also consider that there is no implementation yet that supports the old approach,
while we do have code that uses the new approach.

Best,
Ori




* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10  9:00         ` Ori Kam
@ 2018-10-10  9:30           ` Andrew Rybchenko
  2018-10-10  9:38             ` Thomas Monjalon
  2018-10-10 12:02           ` Adrien Mazarguil
  1 sibling, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-10  9:30 UTC (permalink / raw)
  To: Ori Kam, Ferruh Yigit, stephen, Adrien Mazarguil, Declan Doherty
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

On 10/10/18 12:00 PM, Ori Kam wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>> Sent: Wednesday, October 10, 2018 9:45 AM
>> To: Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam <orika@mellanox.com>;
>> stephen@networkplumber.org; Adrien Mazarguil
>> <adrien.mazarguil@6wind.com>; Declan Doherty <declan.doherty@intel.com>
>> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
>> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
>> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
>> <shahafs@mellanox.com>
>> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
>> actions
>>
>> On 10/9/18 7:48 PM, Ferruh Yigit wrote:
>> On 10/7/2018 1:57 PM, Ori Kam wrote:
>> This series implement the generic L2/L3 tunnel encapsulation actions
>> and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
>>
>> Currently the encap/decap actions only support encapsulation
>> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
>> the inner packet has a valid Ethernet header, while L3 encapsulation
>> is where the inner packet doesn't have the Ethernet header).
>> In addition the parameter to the encap action is a list of rte items,
>> this results in 2 extra translations, between the application and the action
>> and from the action to the NIC. This results in a negative impact on the
>> insertion performance.
>>
>> Looking forward, there is going to be a need to support many more tunnel
>> encapsulations, for example MPLSoGRE and MPLSoUDP.
>> Adding each new encapsulation will result in duplication of code;
>> for example, the code for handling NVGRE and VXLAN is exactly the same,
>> and each new tunnel will have the same exact structure.
>>
>> This series introduces a generic encapsulation for L2 tunnel types and a
>> generic encapsulation for L3 tunnel types. In addition, the new
>> encapsulation commands use a raw buffer in order to save the
>> conversion time, both for the application and the PMD.
>>
>> [1] https://mails.dpdk.org/archives/dev/2018-August/109944.html
>>
>> v3:
>>   * rebase on tip.
>>
>> v2:
>>   * add missing decap_l3 structure.
>>   * fix typo.
>>
>>
>> Ori Kam (3):
>>    ethdev: add generic L2/L3 tunnel encapsulation actions
>>    app/testpmd: convert testpmd encap commands to new API
>>    ethdev: remove vxlan and nvgre encapsulation commands
>>
>> Reminder of this patchset, any reviews welcome.
>>
>> I've added the author of previous actions in recipients.
>>
>> I like the idea to generalize encap/decap actions. It makes a bit harder
>> for reader to find which encap/decap actions are supported in fact,
>> but it changes nothing for automated usage in the code - just try it
>> (as a generic way used in rte_flow).
>>
> Even now the user doesn't know which encapsulations are supported, since
> that is PMD and sometimes kernel dependent. On the other hand, it simplifies
> adding encapsulations for specific customers, sometimes with just a FW update.

I was just trying to say that with the previous way it is a bit easier to see
from the sources which VXLAN or NVGRE encap/decap a PMD claims to support.
In any case it is not that important, so OK to have the new way.

>> Arguments about a way of encap/decap headers specification (flow items
>> vs raw) sound sensible, but I'm not sure about it.
>> It would be simpler if the tunnel header were appended or removed
>> as is, but as I understand it is not true. For example, IPv4 ID will be
>> different in incoming packets to be decapsulated and different values
>> should be used on encapsulation. Checksums will be different (but
>> offloaded in any case).
>>
> I'm not sure I understand your comment.
> Decapsulation is independent of encapsulation; for example, if we decap an
> L2 tunnel type then there are no parameters at all, the NIC just removes
> the outer layers.

OK, I've just mixed up filtering and action parameters for decaps. My bad.
The argument for encapsulation still makes sense, since the header is not
appended just as is: IP IDs change, lengths change, checksums change.
However, I agree that this is natural and expected behaviour.

>> Current way allows to specify which fields do not matter and which one
>> must match. It allows to say that, for example, VNI match is sufficient
>> to decapsulate.
>>
> Encapsulation, by definition, is a list of headers that should
> encapsulate the packet, so I don't understand your comment about matching
> fields. Matching is based on the flow, and the encapsulation is just data
> that should be added on top of the packet.

Yes, my bad as I've described above.

>> Also arguments assume that action input is accepted as is by the HW.
>> It could be true, but could be obviously false and HW interface may
>> require parsed input (i.e. driver must parse the input buffer and extract
>> required fields of packet headers).
>>
> You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed input.
> No driver knows the rte_flow structures internally, so in any case there has to be
> some translation between the encapsulation data and the NIC data.
> I agree that writing the translation code can be harder with this approach,
> but that code is only written once, and the insertion speed is much higher this way.
> Also, like I said, some virtual switches already store this data in a raw buffer
> (they update only specific fields), so this also saves time for the application when
> creating a rule.

Yes, makes sense.

>> So, I'd say no. It should be better motivated if we change existing
>> approach (even advertised as experimental).
> I think the reasons I gave are very good motivation to change the approach.
> Please also consider that there is no implementation yet that supports the old approach,
> while we do have code that uses the new approach.

It is really bad practice that features are accepted without at least 
one implementation/usage.

Thanks for the reply. I'll provide my comments on patches.


* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10  9:30           ` Andrew Rybchenko
@ 2018-10-10  9:38             ` Thomas Monjalon
  0 siblings, 0 replies; 53+ messages in thread
From: Thomas Monjalon @ 2018-10-10  9:38 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Ori Kam, Ferruh Yigit, stephen, Adrien Mazarguil, Declan Doherty,
	dev, Dekel Peled, Nélio Laranjeiro, Yongseok Koh,
	Shahaf Shuler

10/10/2018 11:30, Andrew Rybchenko:
> It is really bad practice that features are accepted without at least 
> one implementation/usage.

Yes.
In future, we should take care of not accepting new API without at least
one implementation.


* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10  9:00         ` Ori Kam
  2018-10-10  9:30           ` Andrew Rybchenko
@ 2018-10-10 12:02           ` Adrien Mazarguil
  2018-10-10 13:17             ` Ori Kam
  1 sibling, 1 reply; 53+ messages in thread
From: Adrien Mazarguil @ 2018-10-10 12:02 UTC (permalink / raw)
  To: Ori Kam
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

Sorry if I'm a bit late to the discussion, please see below.

On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
<snip>
> > On 10/7/2018 1:57 PM, Ori Kam wrote:
> > This series implement the generic L2/L3 tunnel encapsulation actions
> > and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> > 
> > Currently the encap/decap actions only support encapsulation
> > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > the inner packet has a valid Ethernet header, while L3 encapsulation
> > is where the inner packet doesn't have the Ethernet header).
> > In addition the parameter to the encap action is a list of rte items,
> > this results in 2 extra translations, between the application and the action
> > and from the action to the NIC. This results in a negative impact on the
> > insertion performance.

Not sure it's a valid concern since in this proposal, PMD is still expected
to interpret the opaque buffer contents regardless for validation and to
convert it to its internal format.

Worse, it will require a packet parser to iterate over enclosed headers
instead of a list of convenient rte_flow_whatever objects. It won't be
faster without the convenience of pointers to properly aligned structures
that only contain relevant data fields.

> > Looking forward, there is going to be a need to support many more tunnel
> > encapsulations, for example MPLSoGRE and MPLSoUDP.
> > Adding each new encapsulation will result in duplication of code;
> > for example, the code for handling NVGRE and VXLAN is exactly the same,
> > and each new tunnel will have the same exact structure.
> > 
> > This series introduces a generic encapsulation for L2 tunnel types and a
> > generic encapsulation for L3 tunnel types. In addition, the new
> > encapsulation commands use a raw buffer in order to save the
> > conversion time, both for the application and the PMD.

From a usability standpoint I'm not a fan of the current interface to
perform NVGRE/VXLAN encap, however this proposal adds another layer of
opaqueness in the name of making things more generic than rte_flow already
is.

Assuming they are not to be interpreted by PMDs, maybe there's a case for
prepending arbitrary buffers to outgoing traffic and removing them from
incoming traffic. However this feature should not be named "generic tunnel
encap/decap" as it's misleading.

Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be more
appropriate. I think on the "pop" side, only the size would matter.

Another problem is that you must not require actions to rely on specific
pattern content:

 [...]
 *
 * Decapsulate outer most tunnel from matched flow.
 *
 * The flow pattern must have a valid tunnel header
 */
 RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,

For maximum flexibility, all actions should be usable on their own on empty
pattern. On the other hand, you can document undefined behavior when
performing some action on traffic that doesn't contain something.

Reason is that invalid traffic may have already been removed by other flow
rules (or whatever happens) before such a rule is reached; it's a user's
responsibility to provide such guarantee.

When parsing an action, a PMD is not supposed to look at the pattern. Action
list should contain all the needed info, otherwise it means the API is badly
defined.

I'm aware the above makes it tough to implement something like
RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP as defined in this series, but that's
unfortunately why I think it must not be defined like that.

My opinion is that the best generic approach to perform encap/decap with
rte_flow would use one dedicated action per protocol header to
add/remove/modify. This is the suggestion I originally made for
VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
matters [3].

Remember that whatever is provided, be it an opaque buffer (like you did), a
separate list of items (like VXLAN/NVGRE) or the rte_flow action list itself
(what I'm suggesting to do), PMDs have to process it. There will be a CPU
cost. Keep in mind odd use cases that involve QinQinQinQinQ.

> > I like the idea to generalize encap/decap actions. It makes a bit harder
> > for reader to find which encap/decap actions are supported in fact,
> > but it changes nothing for automated usage in the code - just try it
> > (as a generic way used in rte_flow).
> > 
> 
> Even now the user doesn't know which encapsulations are supported, since
> that is PMD and sometimes kernel dependent. On the other hand, it simplifies
> adding encapsulations for specific customers, sometimes with just a FW update.

Except for raw push/pop of uninterpreted headers, tunnel encapsulations not
explicitly supported by rte_flow shouldn't be possible. Who will expect
something that isn't defined by the API to work and rely on it in their
application? I don't see it happening.

Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
that the only alternative is a generic API to work around me :)

> > Arguments about a way of encap/decap headers specification (flow items
> > vs raw) sound sensible, but I'm not sure about it.
> > It would be simpler if the tunnel header were appended or removed
> > as is, but as I understand it is not true. For example, IPv4 ID will be
> > different in incoming packets to be decapsulated and different values
> > should be used on encapsulation. Checksums will be different (but
> > offloaded in any case).
> > 
> 
> I'm not sure I understand your comment.
> Decapsulation is independent of encapsulation; for example, if we decap an
> L2 tunnel type then there are no parameters at all, the NIC just removes
> the outer layers.

According to the pattern? As described above, you can't rely on that.
Pattern does not necessarily match the full stack of outer layers.

Decap action must be able to determine what to do on its own, possibly in
conjunction with other actions in the list but that's all.

> > Current way allows to specify which fields do not matter and which one
> > must match. It allows to say that, for example, VNI match is sufficient
> > to decapsulate.
> > 
> 
> Encapsulation, by definition, is a list of headers that should
> encapsulate the packet, so I don't understand your comment about matching
> fields. Matching is based on the flow, and the encapsulation is just data
> that should be added on top of the packet.
> 
> > Also arguments assume that action input is accepted as is by the HW.
> > It could be true, but could be obviously false and HW interface may
> > require parsed input (i.e. driver must parse the input buffer and extract
> > required fields of packet headers).
> > 
> 
> You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed input.
> No driver knows the rte_flow structures internally, so in any case there has to be
> some translation between the encapsulation data and the NIC data.
> I agree that writing the translation code can be harder with this approach,
> but that code is only written once, and the insertion speed is much higher this way.

Avoiding code duplication is enough of a reason to do something. Yes, NVGRE and
VXLAN encap/decap should be redefined because of that. But IMO, they should
prepend a single VXLAN or NVGRE header and be followed by other actions that
in turn prepend a UDP header, an IPv4/IPv6 one, any number of VLAN headers
and finally an Ethernet header.

> Also, like I said, some virtual switches already store this data in a raw buffer
> (they update only specific fields), so this also saves time for the application when
> creating a rule.
> 
> > So, I'd say no. It should be better motivated if we change existing
> > approach (even advertised as experimental).
> 
> I think the reasons I gave are very good motivation to change the approach.
> Please also consider that there is no implementation yet that supports the
> old approach.

Well, although the existing API made this painful, I did submit one [4] and
there's an updated version from Slava [5] for mlx5.

> while we do have code that uses the new approach.

If you need the ability to prepend a raw buffer, please consider a different
name for the related actions, redefine them without reliance on specific
pattern items and leave NVGRE/VXLAN encap/decap as is for the time
being. They can deprecated anytime without ABI impact.

On the other hand if that raw buffer is to be interpreted by the PMD for
more intelligent tunnel encap/decap handling, I do not agree with the
proposed approach for usability reasons.

[2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
    https://mails.dpdk.org/archives/dev/2018-April/096418.html

[3] ethdev: alter behavior of flow API actions
    https://git.dpdk.org/dpdk/commit/?id=cc17feb90413

[4] net/mlx5: add VXLAN encap support to switch flow rules
    https://mails.dpdk.org/archives/dev/2018-August/110598.html

[5] net/mlx5: e-switch VXLAN flow validation routine
    https://mails.dpdk.org/archives/dev/2018-October/113782.html

-- 
Adrien Mazarguil
6WIND


* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10 12:02           ` Adrien Mazarguil
@ 2018-10-10 13:17             ` Ori Kam
  2018-10-10 16:10               ` Adrien Mazarguil
  0 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-10 13:17 UTC (permalink / raw)
  To: Adrien Mazarguil
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

Hi
PSB.

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Sent: Wednesday, October 10, 2018 3:02 PM
> To: Ori Kam <orika@mellanox.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; stephen@networkplumber.org; Declan Doherty
> <declan.doherty@intel.com>; dev@dpdk.org; Dekel Peled
> <dekelp@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; Nélio
> Laranjeiro <nelio.laranjeiro@6wind.com>; Yongseok Koh
> <yskoh@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> Sorry if I'm a bit late to the discussion, please see below.
> 
> On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
> <snip>
> > > On 10/7/2018 1:57 PM, Ori Kam wrote:
> > > This series implement the generic L2/L3 tunnel encapsulation actions
> > > and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> > >
> > > Currently the encap/decap actions only support encapsulation
> > > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > > the inner packet has a valid Ethernet header, while L3 encapsulation
> > > is where the inner packet doesn't have the Ethernet header).
> > > In addition the parameter to the encap action is a list of rte items,
> > > this results in 2 extra translations, between the application and the action
> > > and from the action to the NIC. This results in a negative impact on the
> > > insertion performance.
> 
> Not sure it's a valid concern since in this proposal, PMD is still expected
> to interpret the opaque buffer contents regardless for validation and to
> convert it to its internal format.
> 
This is the action to take; we should assume
that the pattern is valid and not parse it at all.
Another issue: we get a lot of complaints about the time we take
for validation. I know that currently we must validate the rule when creating it,
but this can change. Why should a rule be revalidated when it was already
validated and the only change is the destination IP of the encap data?
Virtual switches, after creating the first flow, just modify it, so why force
them into revalidating it? (But this issue is a different topic.)

> Worse, it will require a packet parser to iterate over enclosed headers
> instead of a list of convenient rte_flow_whatever objects. It won't be
> faster without the convenience of pointers to properly aligned structures
> that only contain relevant data fields.
>
Also, the rte_flow items are not aligned either, so there is no difference in
performance between the two approaches. The items actually contain unused
pointers, which are just a waste.
We also need to consider how applications use it: they already have the headers
in a raw buffer, so this saves the conversion time for the application.
 
> > > Looking forward, there is going to be a need to support many more tunnel
> > > encapsulations, for example MPLSoGRE and MPLSoUDP.
> > > Adding each new encapsulation will result in duplication of code;
> > > for example, the code for handling NVGRE and VXLAN is exactly the same,
> > > and each new tunnel will have the same exact structure.
> > >
> > > This series introduces a generic encapsulation for L2 tunnel types and a
> > > generic encapsulation for L3 tunnel types. In addition, the new
> > > encapsulation commands use a raw buffer in order to save the
> > > conversion time, both for the application and the PMD.
> 
> From a usability standpoint I'm not a fan of the current interface to
> perform NVGRE/VXLAN encap, however this proposal adds another layer of
> opaqueness in the name of making things more generic than rte_flow already
> is.
> 
I'm sorry, but I don't understand why it is more opaque; as I see it, it is very simple:
just give the encapsulation data and that's it. For example, on systems that support a number of
encapsulations, applications don't need to call a different function; they just change the buffer.

> Assuming they are not to be interpreted by PMDs, maybe there's a case for
> prepending arbitrary buffers to outgoing traffic and removing them from
> incoming traffic. However this feature should not be named "generic tunnel
> encap/decap" as it's misleading.
> 
> Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be
> more
> appropriate. I think on the "pop" side, only the size would matter.
> 
Maybe the name can be changed, but again, the application is doing encapsulation, so this
name will be more intuitive for it.

> Another problem is that you must not require actions to rely on specific
> pattern content:
> 
I don't think this can be true anymore. For example, what do you expect
to happen when you apply a modify-IP action to a packet with no IP header?
This may raise issues in the NIC.
The same goes for decap: once the flow is in the NIC, it must assume that it can remove
the headers, otherwise really unexpected behavior can occur.

>  [...]
>  *
>  * Decapsulate outer most tunnel from matched flow.
>  *
>  * The flow pattern must have a valid tunnel header
>  */
>  RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP,
> 
> For maximum flexibility, all actions should be usable on their own on empty
> pattern. On the other hand, you can document undefined behavior when
> performing some action on traffic that doesn't contain something.
> 

Like I said, and like it is already defined for VXLAN encap, we must know
the pattern; otherwise the rule can be declined by the kernel, or crash when trying to decap a
packet without an outer tunnel.

> Reason is that invalid traffic may have already been removed by other flow
> rules (or whatever happens) before such a rule is reached; it's a user's
> responsibility to provide such guarantee.
> 
> When parsing an action, a PMD is not supposed to look at the pattern. Action
> list should contain all the needed info, otherwise it means the API is badly
> defined.
> 
> I'm aware the above makes it tough to implement something like
> RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP as defined in this series, but that's
> unfortunately why I think it must not be defined like that.
> 
> My opinion is that the best generic approach to perform encap/decap with
> rte_flow would use one dedicated action per protocol header to
> add/remove/modify. This is the suggestion I originally made for
> VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
> matters [3].
I agree that your approach makes a lot of sense, but there are a number of issues with it:
* it is harder and takes more time from the application's point of view.
* it is slower when compared to the raw buffer.

> 
> Remember that whatever is provided, be it an opaque buffer (like you did), a
> separate list of items (like VXLAN/NVGRE) or the rte_flow action list itself
> (what I'm suggesting to do), PMDs have to process it. There will be a CPU
> cost. Keep in mind odd use cases that involve QinQinQinQinQ.
> 
> > > I like the idea to generalize encap/decap actions. It makes a bit harder
> > > for reader to find which encap/decap actions are supported in fact,
> > > but it changes nothing for automated usage in the code - just try it
> > > (as a generic way used in rte_flow).
> > >
> >
> > Even now the user doesn't know which encapsulation is supported since
> > it is PMD and sometime kernel related. On the other end it simplify adding
> > encapsulation to specific costumers with some time just FW update.
> 
> Except for raw push/pop of uninterpreted headers, tunnel encapsulations not
> explicitly supported by rte_flow shouldn't be possible. Who will expect
> something that isn't defined by the API to work and rely on it in their
> application? I don't see it happening.
> 
Some of our customers are working with a private tunnel type, which they can configure using the kernel
or just a new FW; this is a real use case.

> Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
> that the only alternative is a generic API to work around me :)
> 

Yes, but like I said, when a customer asks for an encap and I can give it to them, why wait for the next DPDK release?

> > > Arguments about a way of encap/decap headers specification (flow items
> > > vs raw) sound sensible, but I'm not sure about it.
> > > It would be simpler if the tunnel header is added appended or removed
> > > as is, but as I understand it is not true. For example, IPv4 ID will be
> > > different in incoming packets to be decapsulated and different values
> > > should be used on encapsulation. Checksums will be different (but
> > > offloaded in any case).
> > >
> >
> > I'm not sure I understand your comment.
> > Decapsulation is independent of encapsulation; for example, if we decap
> > an L2 tunnel type then there is no parameter at all, the NIC just removes
> > the outer layers.
> 
> According to the pattern? As described above, you can't rely on that.
> Pattern does not necessarily match the full stack of outer layers.
> 
> Decap action must be able to determine what to do on its own, possibly in
> conjunction with other actions in the list but that's all.
> 
Decap removes the outer headers.
Some tunnels don't have an inner L2 header, so one must be added after the decap;
this is what L3 decap means, and the user must supply a valid L2 header.

> > > Current way allows to specify which fields do not matter and which one
> > > must match. It allows to say that, for example, VNI match is sufficient
> > > to decapsulate.
> > >
> >
> > The encapsulation, according to the definition, is a list of headers that should
> > encapsulate the packet. So I don't understand your comment about matching
> > fields. The matching is based on the flow, and the encapsulation is just data
> > that should be added on top of the packet.
> >
> > > Also arguments assume that action input is accepted as is by the HW.
> > > It could be true, but could be obviously false and HW interface may
> > > require parsed input (i.e. driver must parse the input buffer and extract
> > > required fields of packet headers).
> > >
> >
> > You are correct, some PMDs, even Mellanox (for the E-Switch), require parsed input.
> > There is no driver that knows the rte_flow structures, so in any case there must be
> > some translation between the encapsulation data and the NIC data.
> > I agree that writing the code for the translation can be harder in this approach,
> > but the code is only written once, and the insertion speed is much higher this way.
> 
> Avoiding code duplication is enough of a reason to do something. Yes NVGRE and
> VXLAN encap/decap should be redefined because of that. But IMO, they should
> prepend a single VXLAN or NVGRE header and be followed by other actions that
> in turn prepend a UDP header, an IPv4/IPv6 one, any number of VLAN headers
> and finally an Ethernet header.
> 
> > Also, like I said, some virtual switches already store this data in a raw buffer
> > (they update only specific fields), so this will also save time for the application
> > when creating a rule.
> >
> > > So, I'd say no. It should be better motivated if we change existing
> > > approach (even advertised as experimental).
> >
> > I think the reasons I gave are very good motivation to change the approach
> > please also consider that there is no implementation yet that supports the
> > old approach.
> 
> Well, although the existing API made this painful, I did submit one [4] and
> there's an updated version from Slava [5] for mlx5.
> 
> > while we do have code that uses the new approach.
> 
> If you need the ability to prepend a raw buffer, please consider a different
> name for the related actions, redefine them without reliance on specific
> pattern items and leave NVGRE/VXLAN encap/decap as is for the time
> being. They can be deprecated anytime without ABI impact.
> 
> On the other hand if that raw buffer is to be interpreted by the PMD for
> more intelligent tunnel encap/decap handling, I do not agree with the
> proposed approach for usability reasons.
> 
> [2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
> 
> https://mails.dpdk.org/archives/dev/2018-April/096418.html
> 
> [3] ethdev: alter behavior of flow API actions
> 
> https://git.dpdk.org/dpdk/commit/?id=cc17feb90413
> 
> [4] net/mlx5: add VXLAN encap support to switch flow rules
> 
> https://mails.dpdk.org/archives/dev/2018-August/110598.html
> 
> [5] net/mlx5: e-switch VXLAN flow validation routine
> 
> https://mails.dpdk.org/archives/dev/2018-October/113782.html
> 
> --
> Adrien Mazarguil
> 6WIND

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10 13:17             ` Ori Kam
@ 2018-10-10 16:10               ` Adrien Mazarguil
  2018-10-11  8:48                 ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Adrien Mazarguil @ 2018-10-10 16:10 UTC (permalink / raw)
  To: Ori Kam
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

On Wed, Oct 10, 2018 at 01:17:01PM +0000, Ori Kam wrote:
<snip>
> > -----Original Message-----
> > From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
<snip>
> > On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
> > <snip>
> > > > On 10/7/2018 1:57 PM, Ori Kam wrote:
<snip>
> > > > In addition, the parameter to the encap action is a list of rte_flow items;
> > > > this results in 2 extra translations, between the application and the action
> > > > and from the action to the NIC. This results in a negative impact on the
> > > > insertion performance.
> > 
> > Not sure it's a valid concern since in this proposal, PMD is still expected
> > to interpret the opaque buffer contents regardless for validation and to
> > convert it to its internal format.
> > 
This is the action to take; we should assume
that the pattern is valid and not parse it at all.
Another issue: we have a lot of complaints about the time we take
for validation. I know that currently we must validate the rule when creating it,
but this can change; why revalidate a rule that was already validated when the only
change is the destination IP of the encap data?
Virtual switches, after creating the first flow, just modify it, so why force
them into revalidating it? (but this issue is a different topic)

Did you measure what proportion of time is spent on validation when creating
a flow rule?

Based on past experience with mlx4/mlx5, creation used to involve a number
of expensive system calls while validation was basically a single logic loop
checking individual items/actions while performing conversion to HW
format (mandatory for creation). Context switches related to kernel
involvement are the true performance killers.

I'm not sure this is a valid argument in favor of this approach since flow
rule validation still needs to happen regardless.

By the way, applications are not supposed to call rte_flow_validate() before
rte_flow_create(). The former can be helpful in some cases (e.g. to get a
rough idea of PMD capabilities during initialization) but they should in
practice only rely on rte_flow_create(), then fall back to software
processing if that fails.

> > Worse, it will require a packet parser to iterate over enclosed headers
> > instead of a list of convenient rte_flow_whatever objects. It won't be
> > faster without the convenience of pointers to properly aligned structures
> > that only contain relevant data fields.
> >
Also, the rte_flow items are not aligned either, so there is no difference in performance
between the two approaches. In the rte_flow items we actually have unused pointers
which are just a waste.

Regarding unused pointers: right, VXLAN/NVGRE encap actions shouldn't have
relied on _pattern item_ structures, the room for their "last" pointer is
arguably wasted. On the other hand, the "mask" pointer allows masking
relevant fields that matter to the application (e.g. source/destination
addresses as opposed to IPv4 length, version and other irrelevant fields for
encap).

Not sure why you think it's not aligned. We're comparing an array of
rte_flow_item objects with raw packet data. The latter requires
interpretation of each protocol header to jump to the next offset. This is
more complex on both sides: to build such a buffer for the application, then
to have it processed by the PMD.

We also need to consider how applications are using it. They already have it in a raw buffer,
so this saves the conversion time for the application.

I don't think so. Applications typically know where some traffic is supposed
to go and what VNI it should use. They don't have a prefabricated packet
handy to prepend to outgoing traffic. If that was the case they'd most
likely do so themselves through an extra packet segment and not bother with
PMD offloads.

<snip>
> > From a usability standpoint I'm not a fan of the current interface to
> > perform NVGRE/VXLAN encap, however this proposal adds another layer of
> > opaqueness in the name of making things more generic than rte_flow already
> > is.
> > 
I'm sorry, but I don't understand why it is more opaque; as I see it, it is very simple:
just give the encapsulation data and that's it. For example, on systems that support a number of
encapsulations, applications don't need to call a different function, they just change the buffer.

I'm saying it's opaque from an API standpoint if you expect the PMD to
interpret that buffer's contents in order to prepend it in a smart way.

Since this generic encap does not support masks, there is no way for an
application to at least tell a PMD what data matters and what doesn't in the
provided buffer. This means invalid checksums, lengths and so on must be
sent as is to the wire. What's the use case for such a behavior?

> > Assuming they are not to be interpreted by PMDs, maybe there's a case for
> > prepending arbitrary buffers to outgoing traffic and removing them from
> > incoming traffic. However this feature should not be named "generic tunnel
> > encap/decap" as it's misleading.
> > 
> > Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be
> > more
> > appropriate. I think on the "pop" side, only the size would matter.
> > 
Maybe the name can be changed, but again, the application does encapsulation, so this will
be more intuitive for it.
> 
> > Another problem is that you must not require actions to rely on specific
> > pattern content:
> > 
> I don't think this can be true anymore; for example, what do you expect
> to happen when you apply a modify-IP action to a packet with no IP header?
>
> This may raise issues in the NIC.
> The same goes for decap: after the flow is in the NIC, it must assume that it can
> remove the headers, otherwise really unexpected behavior can occur.

Right, that's why it must be documented as undefined behavior. The API is
not supposed to enforce the relationship. A PMD may require the presence of
some pattern item in order to perform some action, but this is a PMD
limitation, not a limitation of the API itself.

<snip>
> > For maximum flexibility, all actions should be usable on their own on an empty
> > pattern. On the other hand, you can document undefined behavior when
> > performing some action on traffic that doesn't contain something.
> > 
> 
Like I said, and like it is already defined for VXLAN encap, we must know
the pattern, otherwise the rule can be declined in the kernel / crash when trying
to decap a packet without an outer tunnel.

Right, PMD limitation then. You are free to document it in the PMD.

<snip>
> > My opinion is that the best generic approach to perform encap/decap with
> > rte_flow would use one dedicated action per protocol header to
> > add/remove/modify. This is the suggestion I originally made for
> > VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
> > matters [3].
>
> I agree that your approach makes a lot of sense, but there are a number of issues with it:
> * it is harder and takes more time from the application's point of view.
> * it is slower when compared to the raw buffer.

I'm convinced of the opposite :) We could try to implement your raw buffer
approach as well as mine in testpmd (one action per layer, not the current
VXLAN/NVGRE encap mess mind you) in order to determine which is the most
convenient on the application side.

<snip>
> > Except for raw push/pop of uninterpreted headers, tunnel encapsulations not
> > explicitly supported by rte_flow shouldn't be possible. Who will expect
> > something that isn't defined by the API to work and rely on it in their
> > application? I don't see it happening.
> > 
> Some of our customers are working with a private tunnel type, which they can configure using the kernel
> or just a new FW; this is a real use case.

You can already use negative types to quickly address HW and
customer-specific needs by the way. Could this [6] perhaps address the
issue?

PMDs can expose public APIs. You could devise one that spits new negative
item/action types based on some data, to be subsequently used by flow
rules with that PMD only.

> > Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
> > that the only alternative is a generic API to work around me :)
> > 
> 
> Yes, but like I said, when a customer asks for an encap and I can give it to them, why wait for the next DPDK release?

I don't know, is rte_flow held to a special standard compared to other DPDK
features in this regard? Engineering patches can always be provided,
backported and whatnot.

Customer applications will have to be modified and recompiled to benefit
from any new FW capabilities regardless, it's extremely unlikely to be just
a matter of installing a new FW image.

<snip>
> > Pattern does not necessarily match the full stack of outer layers.
> > 
> > Decap action must be able to determine what to do on its own, possibly in
> > conjunction with other actions in the list but that's all.
> > 
> Decap removes the outer headers.
> Some tunnels don't have an inner L2 header, so one must be added after the decap;
> this is what L3 decap means, and the user must supply a valid L2 header.

My point is that any data required to perform decap must be provided by the
decap action itself, not through a pattern item, whose only purpose is to
filter traffic and may not be present. Precisely what you did for L3 decap.

<snip>
> > > I think the reasons I gave are very good motivation to change the approach
> > > please also consider that there is no implementation yet that supports the
> > > old approach.
> > 
> > Well, although the existing API made this painful, I did submit one [4] and
> > there's an updated version from Slava [5] for mlx5.
> > 
> > > while we do have code that uses the new approach.
> > 
> > If you need the ability to prepend a raw buffer, please consider a different
> > name for the related actions, redefine them without reliance on specific
> > pattern items and leave NVGRE/VXLAN encap/decap as is for the time
> > being. They can be deprecated anytime without ABI impact.
> > 
> > On the other hand if that raw buffer is to be interpreted by the PMD for
> > more intelligent tunnel encap/decap handling, I do not agree with the
> > proposed approach for usability reasons.

I'm still not convinced by your approach. If these new actions *must* be
included unmodified right now to prevent some customer cataclysm, then fine
as an experiment but please leave VXLAN/NVGRE encaps alone for the time
being.

> > [2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
> > 
> > https://mails.dpdk.org/archives/dev/2018-April/096418.html
> > 
> > [3] ethdev: alter behavior of flow API actions
> > 
> > https://git.dpdk.org/dpdk/commit/?id=cc17feb90413
> > 
> > [4] net/mlx5: add VXLAN encap support to switch flow rules
> > 
> > https://mails.dpdk.org/archives/dev/2018-August/110598.html
> > 
> > [5] net/mlx5: e-switch VXLAN flow validation routine
> > 
> > https://mails.dpdk.org/archives/dev/2018-October/113782.html

[6] "9.2.9. Negative types"
    http://doc.dpdk.org/guides-18.08/prog_guide/rte_flow.html#negative-types

On an unrelated note, is there a way to prevent Outlook from mangling URLs
on your side? (those emea01.safelinks things)

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-10 16:10               ` Adrien Mazarguil
@ 2018-10-11  8:48                 ` Ori Kam
  2018-10-11 13:12                   ` Adrien Mazarguil
  0 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-11  8:48 UTC (permalink / raw)
  To: Adrien Mazarguil
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler, Ori Kam

Hi Adrien,

Thanks for your comments please see my answer below and inline.

Due to a very short time limit and the fact that we have more than
4 patches that are based on this, we need to close this fast.

As I can see there are a number of options:
* the old approach that neither of us likes, and which means that for
   every tunnel we create a new command.
* my proposed suggestion as is, which is easier for at least a number of applications
   to implement and faster in most cases.
* my suggestion with a different name, but then we also need to find a name
   for the decap and a name for decap_l3. This approach is also problematic
   since we would have 2 APIs doing the same thing. For example, for testpmd VXLAN
   encap, which API shall we use?
* combine my suggestion and the current one by replacing the raw
   buffer with a list of items. Less code duplication, easier on the validation (though I
   don't think we need to validate the encap data), but we lose insertion rate.
* your suggestion of a list of actions where each action is one item. The main problems
   are speed, complexity on the application side, and time to implement.

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Sent: Wednesday, October 10, 2018 7:10 PM
> To: Ori Kam <orika@mellanox.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; stephen@networkplumber.org; Declan Doherty
> <declan.doherty@intel.com>; dev@dpdk.org; Dekel Peled
> <dekelp@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; Nélio
> Laranjeiro <nelio.laranjeiro@6wind.com>; Yongseok Koh
> <yskoh@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> On Wed, Oct 10, 2018 at 01:17:01PM +0000, Ori Kam wrote:
> <snip>
> > > -----Original Message-----
> > > From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> <snip>
> > > On Wed, Oct 10, 2018 at 09:00:52AM +0000, Ori Kam wrote:
> > > <snip>
> > > > > On 10/7/2018 1:57 PM, Ori Kam wrote:
> <snip>
> > > > > In addition, the parameter to the encap action is a list of rte_flow items;
> > > > > this results in 2 extra translations, between the application and the action
> > > > > and from the action to the NIC. This results in a negative impact on the
> > > > > insertion performance.
> > >
> > > Not sure it's a valid concern since in this proposal, PMD is still expected
> > > to interpret the opaque buffer contents regardless for validation and to
> > > convert it to its internal format.
> > >
> > This is the action to take; we should assume
> > that the pattern is valid and not parse it at all.
> > Another issue: we have a lot of complaints about the time we take
> > for validation. I know that currently we must validate the rule when creating it,
> > but this can change; why revalidate a rule that was already validated when the only change
> > is the destination IP of the encap data?
> > Virtual switches, after creating the first flow, just modify it, so why force
> > them into revalidating it? (but this issue is a different topic)
> 
> Did you measure what proportion of time is spent on validation when creating
> a flow rule?
> 
> Based on past experience with mlx4/mlx5, creation used to involve a number
> of expensive system calls while validation was basically a single logic loop
> checking individual items/actions while performing conversion to HW
> format (mandatory for creation). Context switches related to kernel
> involvement are the true performance killers.
> 

I'm sorry to say I don't have the numbers, but I can tell you
that in the new API in most cases there will be just one system call.
In addition, any extra time is wasted time; again, this is a request we got from a number
of customers.

> I'm not sure this is a valid argument in favor of this approach since flow
> rule validation still needs to happen regardless.
> 
> By the way, applications are not supposed to call rte_flow_validate() before
> rte_flow_create(). The former can be helpful in some cases (e.g. to get a
> rough idea of PMD capabilities during initialization) but they should in
> practice only rely on rte_flow_create(), then fall back to software
> processing if that fails.
> 
First, I don't think we need to validate the encapsulation data; if the data is wrong
then the packet will be dropped. Just like you are saying with the restriction
of the flow items, it is the responsibility of the application.
Also, as I said, there is demand from customers and there is no reason not to do it,
but in any case this is not relevant for the current patch.

> > > Worse, it will require a packet parser to iterate over enclosed headers
> > > instead of a list of convenient rte_flow_whatever objects. It won't be
> > > faster without the convenience of pointers to properly aligned structures
> > > that only contain relevant data fields.
> > >
> > Also, the rte_flow items are not aligned either, so there is no difference in performance
> > between the two approaches. In the rte_flow items we actually have unused pointers
> > which are just a waste.
> 
> Regarding unused pointers: right, VXLAN/NVGRE encap actions shouldn't have
> relied on _pattern item_ structures, the room for their "last" pointer is
> arguably wasted. On the other hand, the "mask" pointer allows masking
> relevant fields that matter to the application (e.g. source/destination
> addresses as opposed to IPv4 length, version and other irrelevant fields for
> encap).
> 
At least according to my testing, the NIC can't use masks, and it works based
on the offloads configured for any packet (like checksum).

> Not sure why you think it's not aligned. We're comparing an array of
> rte_flow_item objects with raw packet data. The latter requires
> interpretation of each protocol header to jump to the next offset. This is
> more complex on both sides: to build such a buffer for the application, then
> to have it processed by the PMD.
> 
Maybe I'm missing something, but in the buffer approach all the data will likely be in the
cache and, if allocated properly, will also be aligned. On the other hand, the rte_flow items
are also not guaranteed to be in the same cache line; each access to an item may result
in a cache miss. Also, accessing individual members is just like accessing them in a
raw buffer.

> > We also need to consider how applications are using it. They already have it in a raw buffer,
> > so this saves the conversion time for the application.
> 
> I don't think so. Applications typically know where some traffic is supposed
> to go and what VNI it should use. They don't have a prefabricated packet
> handy to prepend to outgoing traffic. If that was the case they'd most
> likely do so themselves through a extra packet segment and not bother with
> PMD offloads.
> 
Contrail vRouter has such a buffer and just changes the specific fields.
This is one of the things we want to offload; from my last check, OVS also uses
such a buffer.

> <snip>
> > > From a usability standpoint I'm not a fan of the current interface to
> > > perform NVGRE/VXLAN encap, however this proposal adds another layer of
> > > opaqueness in the name of making things more generic than rte_flow
> already
> > > is.
> > >
> > I'm sorry, but I don't understand why it is more opaque; as I see it, it is very simple:
> > just give the encapsulation data and that's it. For example, on systems that
> > support a number of encapsulations, applications don't need to call a different
> > function, they just change the buffer.
> 
> I'm saying it's opaque from an API standpoint if you expect the PMD to
> interpret that buffer's contents in order to prepend it in a smart way.
> 
> Since this generic encap does not support masks, there is no way for an
> application to at least tell a PMD what data matters and what doesn't in the
> provided buffer. This means invalid checksums, lengths and so on must be
> sent as is to the wire. What's the use case for such a behavior?
> 
The NIC treats the packet as a normal packet that goes through all the normal offloads.


> > > Assuming they are not to be interpreted by PMDs, maybe there's a case for
> > > prepending arbitrary buffers to outgoing traffic and removing them from
> > > incoming traffic. However this feature should not be named "generic tunnel
> > > encap/decap" as it's misleading.
> > >
> > > Something like RTE_FLOW_ACTION_TYPE_HEADER_(PUSH|POP) would be
> > > more
> > > appropriate. I think on the "pop" side, only the size would matter.
> > >
> > Maybe the name can be changed, but again, the application does encapsulation,
> > so this will be more intuitive for it.
> >
> > > Another problem is that you must not require actions to rely on specific
> > > pattern content:
> > >
> > I don't think this can be true anymore; for example, what do you expect
> > to happen when you apply a modify-IP action to a packet with no IP header?
> >
> > This may raise issues in the NIC.
> > The same goes for decap: after the flow is in the NIC, it must assume that it can
> > remove the headers, otherwise really unexpected behavior can occur.
> 
> Right, that's why it must be documented as undefined behavior. The API is
> not supposed to enforce the relationship. A PMD may require the presence of
> some pattern item in order to perform some action, but this is a PMD
> limitation, not a limitation of the API itself.
> 

Agree

> <snip>
> > > For maximum flexibility, all actions should be usable on their own on an empty
> > > pattern. On the other hand, you can document undefined behavior when
> > > performing some action on traffic that doesn't contain something.
> > >
> >
> > Like I said, and like it is already defined for VXLAN encap, we must know
> > the pattern, otherwise the rule can be declined in the kernel / crash when trying
> > to decap a packet without an outer tunnel.
> 
> Right, PMD limitation then. You are free to document it in the PMD.
> 

Agree

> <snip>
> > > My opinion is that the best generic approach to perform encap/decap with
> > > rte_flow would use one dedicated action per protocol header to
> > > add/remove/modify. This is the suggestion I originally made for
> > > VXLAN/NVGRE [2] and this is one of the reasons the order of actions now
> > > matters [3].
> >
> > I agree that your approach makes a lot of sense, but there are a number of issues with it:
> > * it is harder and takes more time from the application's point of view.
> > * it is slower when compared to the raw buffer.
> 
> I'm convinced of the opposite :) We could try to implement your raw buffer
> approach as well as mine in testpmd (one action per layer, not the current
> VXLAN/NVGRE encap mess mind you) in order to determine which is the most
> convenient on the application side.
> 

There are 2 different implementations: one for testpmd and one for a normal application.
Writing the code in testpmd with a raw buffer is simpler but less flexible;
writing the code in a real application is, I think, simpler with the buffer approach,
since they already have a buffer.

> <snip>
> > > Except for raw push/pop of uninterpreted headers, tunnel encapsulations
> not
> > > explicitly supported by rte_flow shouldn't be possible. Who will expect
> > > something that isn't defined by the API to work and rely on it in their
> > > application? I don't see it happening.
> > >
> > Some of our customers are working with a private tunnel type, which they can
> > configure using the kernel or just a new FW; this is a real use case.
> 
> You can already use negative types to quickly address HW and
> customer-specific needs by the way. Could this [6] perhaps address the
> issue?
> 
> PMDs can expose public APIs. You could devise one that spits new negative
> item/action types based on some data, to be subsequently used by flow
> rules with that PMD only.
> 
> > > Come on, adding new encap/decap actions to DPDK shouldn't be such a pain
> > > that the only alternative is a generic API to work around me :)
> > >
> >
> > Yes, but like I said, when a customer asks for an encap and I can give it to them,
> > why wait for the next DPDK release?
> 
> I don't know, is rte_flow held to a special standard compared to other DPDK
> features in this regard? Engineering patches can always be provided,
> backported and whatnot.
> 
> Customer applications will have to be modified and recompiled to benefit
> from any new FW capabilities regardless, it's extremely unlikely to be just
> a matter of installing a new FW image.
> 

In some cases this is what happens 😊

> <snip>
> > > Pattern does not necessarily match the full stack of outer layers.
> > >
> > > Decap action must be able to determine what to do on its own, possibly in
> > > conjunction with other actions in the list but that's all.
> > >
> > Decap removes the outer headers.
> > Some tunnels don't have inner L2 and it must be added after the decap
> > this is what L3 decap means, and the user must supply the valid L2 header.
> 
> My point is that any data required to perform decap must be provided by the
> decap action itself, not through a pattern item, whose only purpose is to
> filter traffic and may not be present. Precisely what you did for L3 decap.
> 
Agreed, we will remove the limitation and just say that unpredictable results may occur.

> <snip>
> > > > I think the reasons I gave are very good motivation to change the
> approach
> > > > please also consider that there is no implementation yet that supports the
> > > > old approach.
> > >
> > > Well, although the existing API made this painful, I did submit one [4] and
> > > there's an updated version from Slava [5] for mlx5.
> > >
> > > > while we do have code that uses the new approach.
> > >
> > > If you need the ability to prepend a raw buffer, please consider a different
> > > name for the related actions, redefine them without reliance on specific
> > > pattern items and leave NVGRE/VXLAN encap/decap as is for the time
> > > being. They can deprecated anytime without ABI impact.
> > >
> > > On the other hand if that raw buffer is to be interpreted by the PMD for
> > > more intelligent tunnel encap/decap handling, I do not agree with the
> > > proposed approach for usability reasons.
> 
> I'm still not convinced by your approach. If these new actions *must* be
> included unmodified right now to prevent some customer cataclysm, then fine
> as an experiment but please leave VXLAN/NVGRE encaps alone for the time
> being.
> 
> > > [2] [PATCH v3 2/4] ethdev: Add tunnel encap/decap actions
> > >
> > > https://mails.dpdk.org/archives/dev/2018-April/096418.html
> > >
> > > [3] ethdev: alter behavior of flow API actions
> > >
> > > https://git.dpdk.org/dpdk/commit/?id=cc17feb90413
> > >
> > > [4] net/mlx5: add VXLAN encap support to switch flow rules
> > >
> > > https://mails.dpdk.org/archives/dev/2018-August/110598.html
> > >
> > > [5] net/mlx5: e-switch VXLAN flow validation routine
> > >
> > > https://mails.dpdk.org/archives/dev/2018-October/113782.html
> 
> [6] "9.2.9. Negative types"
> 
> http://doc.dpdk.org/guides-18.08/prog_guide/rte_flow.html#negative-types
> 
> On an unrelated note, is there a way to prevent Outlook from mangling URLs
> on your side? (those emea01.safelinks things)
> 
I will try to find a solution; I haven't found one so far.

> --
> Adrien Mazarguil
> 6WIND

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-11  8:48                 ` Ori Kam
@ 2018-10-11 13:12                   ` Adrien Mazarguil
  2018-10-11 13:55                     ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Adrien Mazarguil @ 2018-10-11 13:12 UTC (permalink / raw)
  To: Ori Kam
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

Hey Ori,

(removing most of the discussion, I'll only reply to the summary)

On Thu, Oct 11, 2018 at 08:48:05AM +0000, Ori Kam wrote:
> Hi Adrian,
> 
> Thanks for your comments please see my answer below and inline.
> 
> Due to a very short time limit and the fact that we have more than
> 4 patches based on this, we need to close it fast.
> 
> As I can see there are a number of options:
> * the old approach that neither of us likes, which means that for
>    every tunnel we create a new command.

Just to be sure, you mean that for each new tunnel *type* a new rte_flow
action *type* must be added to DPDK right? Because the above reads like with
your proposal, a single flow rule can manage any number of TEPs and flow
rule creation for subsequent tunnels can be somehow bypassed.

One flow *rule* is still needed per TEP or did I miss something?

> * My proposed suggestion as is, which is easier for at least a number of
>    applications to implement and faster in most cases.
> * My suggestion with a different name, but then we also need to find a name
>    for the decap and one for decap_l3. This approach is also problematic
>    since we have 2 APIs that do the same thing. For example, for testpmd
>    encap vxlan, which API shall we use?

Since you're doing this for MPLSoUDP and MPLSoGRE, you could leave
VXLAN/NVGRE encap as is, especially since (AFAIK) there are series still
relying on their API floating on the ML.

> * Combine my suggestion and the current one by replacing the raw
>    buffer with a list of items. Less code duplication and easier validation
>    (though I don't think we need to validate the encap data), but we lose
>    insertion rate.

Already suggested in the past [1], this led to VXLAN and NVGRE encap as we
know them.

> * your suggestion of a list of actions where each action is one item. The main
>    problems are speed, complexity on the application side, and time to implement.

Speed matters a lot to me also (go figure) but I still doubt this approach
is measurably faster. On the usability side, compared to one action per
protocol layer which better fits the rte_flow model, I'm also not
convinced.

If we put aside usability and performance on which we'll never agree, there
is still one outstanding issue: the lack of mask. Users cannot tell which
fields are relevant and to be kept as is, and which are not.

How do applications know what blanks are filled in by HW? How do PMDs know
what applications expect? There's a risk of sending incomplete or malformed
packets depending on the implementation.

One may expect PMDs and HW to just "do the sensible thing" but some
applications won't know that some fields are not offloaded and will be
emitted with an unexpected value, while others will attempt to force a
normally offloaded field to some specific value and expect it to be left
unmodified. This cannot be predicted by the PMD; something is needed.

Assuming you add a mask pointer to address this, generic encap should be
functionally complete but not all that different from what we currently have
for VXLAN/NVGRE and from Declan's earlier proposal for generic encap [1];
PMD must parse the buffer (using a proper packet parser with your approach),
collect relevant fields, see if anything's unsupported while doing so before
proceeding with the flow rule.

Anyway, if you add that mask and rename these actions (since they should work
with pretty much anything, not necessarily tunnels, i.e. lazy applications
could ask HW to prepend missing Ethernet headers to pure IP traffic), they
can make sense. How about labeling this "raw" encap/decap?

 RTE_FLOW_ACTION_TYPE_RAW_(ENCAP|DECAP)

 struct rte_flow_action_raw_encap {
     uint8_t *data; /**< Encapsulation data. */
     uint8_t *preserve; /**< Bit-mask of @p data to preserve on output. */
     size_t size; /**< Size of @p data and @p preserve. */
 };

I guess decap could use the same object. Since there is no way to define a
sensible default behavior that works across multiple vendors when "preserve"
is not provided, I think this field cannot be NULL.

As for "L3 decap", well, can't one just provide a separate encap action?
I mean a raw decap action, followed by another action doing raw encap of the
intended L2? A separate set of actions seems unnecessary for that.

[1] "[PATCH v3 2/4] ethdev: Add tunnel encap/decap actions"
    https://mails.dpdk.org/archives/dev/2018-April/095733.html

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions
  2018-10-11 13:12                   ` Adrien Mazarguil
@ 2018-10-11 13:55                     ` Ori Kam
  0 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-11 13:55 UTC (permalink / raw)
  To: Adrien Mazarguil
  Cc: Andrew Rybchenko, Ferruh Yigit, stephen, Declan Doherty, dev,
	Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler


Hi Adrien,

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Sent: Thursday, October 11, 2018 4:12 PM
> To: Ori Kam <orika@mellanox.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; stephen@networkplumber.org; Declan Doherty
> <declan.doherty@intel.com>; dev@dpdk.org; Dekel Peled
> <dekelp@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; Nélio
> Laranjeiro <nelio.laranjeiro@6wind.com>; Yongseok Koh
> <yskoh@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation
> actions
> 
> Hey Ori,
> 
> (removing most of the discussion, I'll only reply to the summary)
> 
> On Thu, Oct 11, 2018 at 08:48:05AM +0000, Ori Kam wrote:
> > Hi Adrian,
> >
> > Thanks for your comments please see my answer below and inline.
> >
> > Due to a very short time limit and the fact that we have more than
> > 4 patches based on this, we need to close it fast.
> >
> > As I can see there are a number of options:
> > * the old approach that neither of us likes, which means that for
> >    every tunnel we create a new command.
> 
> Just to be sure, you mean that for each new tunnel *type* a new rte_flow
> action *type* must be added to DPDK right? Because the above reads like with
> your proposal, a single flow rule can manage any number of TEPs and flow
> rule creation for subsequent tunnels can be somehow bypassed.
> 
> One flow *rule* is still needed per TEP or did I miss something?
> 
Yes, you are correct: one rte_flow action per tunnel *type* must be added to DPDK.
Sorry, I'm not sure what TEP is.

> > * My proposed suggestion as is, which is easier for at least a number of
> >    applications to implement and faster in most cases.
> > * My suggestion with a different name, but then we also need to find a name
> >    for the decap and one for decap_l3. This approach is also problematic
> >    since we have 2 APIs that do the same thing. For example, for testpmd
> >    encap vxlan, which API shall we use?
> 
> Since you're doing this for MPLSoUDP and MPLSoGRE, you could leave
> VXLAN/NVGRE encap as is, especially since (AFAIK) there are series still
> relying on their API floating on the ML.
> 
I don't mind leaving the old approach in as well;
I even got an Ack for removing it 😊
The only issue is the duplicated API:
for example, which API will the testpmd encap vxlan command use?
If you agree that testpmd will be converted to the new one,
I don't mind keeping the old one.


> > * Combine my suggestion and the current one by replacing the raw
> >    buffer with a list of items. Less code duplication and easier validation
> >    (though I don't think we need to validate the encap data), but we lose
> >    insertion rate.
> 
> Already suggested in the past [1], this led to VXLAN and NVGRE encap as we
> know them.
> 
> > * your suggestion of a list of actions where each action is one item. The main
> >    problems are speed, complexity on the application side, and time to implement.
> 
> Speed matters a lot to me also (go figure) but I still doubt this approach
> is measurably faster. On the usability side, compared to one action per
> protocol layer which better fits the rte_flow model, I'm also not
> convinced.
> 
> If we put aside usability and performance on which we'll never agree, there
> is still one outstanding issue: the lack of mask. Users cannot tell which
> fields are relevant and to be kept as is, and which are not.
> 
😊

> How do applications know what blanks are filled in by HW? How do PMDs know
> what applications expect? There's a risk of sending incomplete or malformed
> packets depending on the implementation.
> 
> One may expect PMDs and HW to just "do the sensible thing" but some
> applications won't know that some fields are not offloaded and will be
> emitted with an unexpected value, while others will attempt to force a
> normally offloaded field to some specific value and expect it to leave
> unmodified. This cannot be predicted by the PMD, something is needed.
> 
> Assuming you add a mask pointer to address this, generic encap should be
> functionally complete but not all that different from what we currently have
> for VXLAN/NVGRE and from Declan's earlier proposal for generic encap [1];
> PMD must parse the buffer (using a proper packet parser with your approach),
> collect relevant fields, see if anything's unsupported while doing so before
> proceeding with the flow rule.
> 
There is no value in a mask, since the NIC treats the packet as a normal packet;
this means that the NIC operation depends on the offloads configured.
If, for example, the user configured VXLAN outer checksum, then the NIC will
modify this field. In any case all values must be given, since the PMD cannot
guess and insert them.
From the NIC's point of view, the packet after adding the buffer must be a
valid packet.

> Anyway, if you add that mask and rename these actions (since they should work
> with pretty much anything, not necessarily tunnels, i.e. lazy applications
> could ask HW to prepend missing Ethernet headers to pure IP traffic), they
> can make sense. How about labeling this "raw" encap/decap?
> 
I like it.

>  RTE_FLOW_ACTION_TYPE_RAW_(ENCAP|DECAP)
> 
>  struct rte_flow_action_raw_encap {
enum tunnel_type type; /**< VXLAN / MPLSoGRE... / UNKNOWN. */
Then NICs can parse the packet even faster and decide if it is supported.
uint8_t remove_l2; /**< Marks if the L2 should be removed before the encap. */
>      uint8_t *data; /**< Encapsulation data. */
>      uint8_t *preserve; /**< Bit-mask of @p data to preserve on output. */
Remove the preserve; like I said, there is no meaning to it.
>      size_t size; /**< Size of @p data and @p preserve. */
>  };
> 
> I guess decap could use the same object. Since there is no way to define a
> sensible default behavior that works across multiple vendors when "preserve"
> is not provided, I think this field cannot be NULL.
> 
Like I said, let's remove it.

> As for "L3 decap", well, can't one just provide a separate encap action?
> I mean a raw decap action, followed by another action doing raw encap of the
> intended L2? A separate set of actions seems unnecessary for that.

Agreed, we will have RTE_FLOW_ACTION_TYPE_RAW_DECAP,
which decapsulates the outer headers, and then we will give the encap command
with the L2 header.

> 
> [1] "[PATCH v3 2/4] ethdev: Add tunnel encap/decap actions"
> 
> https://mails.dpdk.org/archives/dev/2018-April/095733.html
> 
> --
> Adrien Mazarguil
> 6WIND

Best,
Ori

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v4 0/3] ethdev: add generic raw tunnel encapsulation actions
  2018-10-07 12:57   ` [PATCH v3 " Ori Kam
                       ` (3 preceding siblings ...)
  2018-10-09 16:48     ` [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ferruh Yigit
@ 2018-10-16 21:40     ` Ori Kam
  2018-10-16 21:41       ` [PATCH v4 1/3] ethdev: add raw encapsulation action Ori Kam
                         ` (3 more replies)
  4 siblings, 4 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-16 21:40 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Ori Kam, Shahaf Shuler

This series implements the raw tunnel encapsulation actions
and is based on RFC [1] "add generic L2/L3 tunnel encapsulation actions".

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in two extra translations, from the application to the action
and from the action to the NIC. This has a negative impact on the
insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation will result in duplication of code;
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel will have the same exact structure.

This series introduces a raw encapsulation that can support both L2 and L3
tunnel encapsulation.
In order to encap an L3 tunnel, for example MPLSoUDP:
ETH / IPV4 / UDP / MPLS / IPV4 / L4 .. L7
When creating the flow rule we add 2 actions: the first one is decap, in order
to remove the original L2 of the packet, and then the encap with the tunnel data.
Decapsulating such a tunnel is done in the reverse order: first decap the
outer tunnel and then encap the packet with the L2 header.
It is important to note that from the NIC and PMD point of view both actions
happen simultaneously, meaning that there is always a valid packet.

This series also introduces the following commands for testpmd:
* l2_encap
* l2_decap
* mplsogre_encap
* mplsogre_decap
* mplsoudp_encap
* mplsoudp_decap

along with helper functions to set the headers that will be used for the
actions, the same as with vxlan_encap.

[1]https://mails.dpdk.org/archives/dev/2018-August/109944.html

v4:
 * convert to raw encap/decap, according to Adrien suggestion.
 * keep the old vxlan and nvgre encapsulation commands.

v3:
 * rebase on tip.

v2:
 * add missing decap_l3 structure.
 * fix typo.

Ori Kam (3):
  ethdev: add raw encapsulation action
  app/testpmd: add MPLSoUDP encapsulation
  app/testpmd: add MPLSoGRE encapsulation

 app/test-pmd/cmdline.c                      | 637 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 595 ++++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  62 +++
 doc/guides/prog_guide/rte_flow.rst          |  51 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 278 ++++++++++++
 lib/librte_ethdev/rte_flow.c                |   2 +
 lib/librte_ethdev/rte_flow.h                |  59 +++
 7 files changed, 1684 insertions(+)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
@ 2018-10-16 21:41       ` Ori Kam
  2018-10-17  7:56         ` Andrew Rybchenko
  2018-10-16 21:41       ` [PATCH v4 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-16 21:41 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Ori Kam, Shahaf Shuler

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in two extra translations, from the application to the
action and from the action to the NIC. This has a negative impact
on the insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation will result in duplication of code;
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel will have the same exact structure.

This patch introduces a raw encapsulation that can support both L2
and L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

In order to encapsulate an L3 tunnel type, both actions must be used in
the same rule: the decap to remove the L2 of the original
packet, and then the encap command to encapsulate the packet with the
tunnel.
For decap of an L3 tunnel there is also a need to use both commands in the
same flow: first the decap command to remove the outer tunnel header, and
then encap to add the L2 header.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 doc/guides/prog_guide/rte_flow.rst | 51 ++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.c       |  2 ++
 lib/librte_ethdev/rte_flow.h       | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a5ec441..647e938 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,57 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``RAW_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^
+
+Adds outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_encap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of adding layer 2 after
+decap layer 3 tunnel (for example MPLSoGRE) or complete tunnel definition
+starting from layer 2 and moving to the tunnel item itself. When applied to
+the original packet the resulting packet must be a valid packet.
+
+.. _table_rte_flow_action_raw_encap:
+
+.. table:: RAW_ENCAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Encapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``preserve``   | Bit-mask of data to preserve on output |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data and preserve              |
+   +----------------+----------------------------------------+
+
+Action: ``RAW_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Remove outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_decap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of removing layer 2
+before encapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
+tunnel definition starting from layer 2 and moving to the tunnel item itself.
+When applied to the original packet the resulting packet must be a
+valid packet.
+
+.. _table_rte_flow_action_raw_decap:
+
+.. table:: RAW_DECAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Decapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data                           |
+   +----------------+----------------------------------------+
+
 Action: ``SET_IPV4_SRC``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index bc9e719..1e5cd73 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -123,6 +123,8 @@ struct rte_flow_desc_data {
 	MK_FLOW_ACTION(VXLAN_DECAP, 0),
 	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
 	MK_FLOW_ACTION(NVGRE_DECAP, 0),
+	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
+	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
 	MK_FLOW_ACTION(SET_IPV4_SRC,
 		       sizeof(struct rte_flow_action_set_ipv4)),
 	MK_FLOW_ACTION(SET_IPV4_DST,
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 68bbf57..3ae9de3 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1508,6 +1508,20 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
 
 	/**
+	 * Adds outer header whose template is provided in its data buffer
+	 *
+	 * See struct rte_flow_action_raw_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
+
+	/**
+	 * Remove outer header whose template is provided in its data buffer.
+	 *
+	 * See struct rte_flow_action_raw_decap
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_DECAP,
+
+	/**
 	 * Modify IPv4 source address in the outermost IPv4 header.
 	 *
 	 * If flow pattern does not define a valid RTE_FLOW_ITEM_TYPE_IPV4,
@@ -1946,6 +1960,51 @@ struct rte_flow_action_nvgre_encap {
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
  *
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ *
+ * Raw tunnel end-point encapsulation data definition.
+ *
+ * The data holds the headers definitions to be applied on the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right after RAW_DECAP (for decapsulating an L3 tunnel type, for
+ * example MPLSoGRE) the data will just hold the layer 2 header.
+ *
+ * The preserve parameter holds which bits in the packet the PMD is not allowed
+ * to change; this parameter can also be NULL, in which case the PMD is allowed
+ * to update any field.
+ *
+ * size holds the number of bytes in @p data and @p preserve.
+ */
+struct rte_flow_action_raw_encap {
+	uint8_t *data; /**< Encapsulation data. */
+	uint8_t *preserve; /**< Bit-mask of @p data to preserve on output. */
+	size_t size; /**< Size of @p data and @p preserve. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ *
+ * Raw tunnel end-point decapsulation data definition.
+ *
+ * The data holds the headers definitions to be removed from the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right before RAW_ENCAP (for encapsulating an L3 tunnel type, for
+ * example MPLSoGRE) the data will just hold the layer 2 header.
+ *
+ * size holds the number of bytes in @p data.
+ */
+struct rte_flow_action_raw_decap {
+	uint8_t *data; /**< Decapsulation data. */
+	size_t size; /**< Size of @p data. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_DST
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 2/3] app/testpmd: add MPLSoUDP encapsulation
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
  2018-10-16 21:41       ` [PATCH v4 1/3] ethdev: add raw encapsulation action Ori Kam
@ 2018-10-16 21:41       ` Ori Kam
  2018-10-16 21:41       ` [PATCH v4 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-16 21:41 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Ori Kam, Shahaf Shuler

MPLSoUDP is an example for L3 tunnel encapsulation.

L3 tunnel type is a tunnel that is missing the layer 2 header of the
inner packet.

Example for MPLSoUDP tunnel:
ETH / IPV4 / UDP / MPLS / IP / L4..L7

In order to encapsulate such a tunnel there is a need to remove the L2 of
the inner packet and encap the remaining tunnel; this is done by
applying 2 rte_flow commands, l2_decap followed by mplsoudp_encap.
Both commands must appear in the same flow, and from the packet's point of
view both actions are applied at the same time (there is no point
where a packet doesn't have an L2 header).

Decapsulating such a tunnel works the other way: first we need to decap
the outer tunnel header and then apply the new L2,
so the commands will be mplsoudp_decap / l2_encap.

Due to the complex encapsulation of the MPLSoUDP and L2 flow actions and
based on the fact that testpmd does not allocate memory, this patch adds new
commands in testpmd to initialize global structures containing the
necessary information to build the outer layer of the packet. These same
global structures will then be used by the flow commands in testpmd when
the actions mplsoudp_encap, mplsoudp_decap, l2_encap, and l2_decap are
parsed; at this point, the conversion into such actions becomes trivial.

The l2_encap and l2_decap actions can also be used for other L3 tunnel
types.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 410 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 379 +++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  40 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 175 ++++++++++++
 4 files changed, 1004 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3b469ac..e807dbb 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15282,6 +15282,408 @@ static void cmd_set_nvgre_parsed(void *parsed_result,
 	},
 };
 
+/** Set L2 encapsulation details */
+struct cmd_set_l2_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_encap;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap, "l2_encap");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap,
+				 "l2_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_l2_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_l2_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_l2_encap_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_src);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_dst);
+
+static void cmd_set_l2_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_encap_result *res = parsed_result;
+
+	if (strcmp(res->l2_encap, "l2_encap") == 0)
+		l2_encap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_encap, "l2_encap-with-vlan") == 0)
+		l2_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		l2_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		l2_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	if (l2_encap_conf.select_vlan)
+		l2_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(l2_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(l2_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_l2_encap = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap ip-version ipv4|ipv6"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_encap_with_vlan = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap-with-vlan ip-version ipv4|ipv6"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap_with_vlan,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_vlan,
+		(void *)&cmd_set_l2_encap_vlan_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set L2 decapsulation details */
+struct cmd_set_l2_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_decap;
+	cmdline_fixed_string_t pos_token;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap-with-vlan");
+
+static void cmd_set_l2_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_decap_result *res = parsed_result;
+
+	if (strcmp(res->l2_decap, "l2_decap") == 0)
+		l2_decap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_decap, "l2_decap-with-vlan") == 0)
+		l2_decap_conf.select_vlan = 1;
+}
+
+cmdline_parse_inst_t cmd_set_l2_decap = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_decap_with_vlan = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap-with-vlan",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap_with_vlan,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP encapsulation details */
+struct cmd_set_mplsoudp_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, mplsoudp,
+				 "mplsoudp_encap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 mplsoudp, "mplsoudp_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-src");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_src_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_src,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-dst");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_dst_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_dst,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsoudp_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsoudp_label;
+		uint8_t label[3];
+	} id = {
+		.mplsoudp_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsoudp, "mplsoudp_encap") == 0)
+		mplsoudp_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_encap-with-vlan") == 0)
+		mplsoudp_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsoudp_encap_conf.label, &id.label[1], 3);
+	mplsoudp_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	mplsoudp_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (mplsoudp_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsoudp_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsoudp_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsoudp_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsoudp_encap_conf.ipv6_dst);
+	}
+	if (mplsoudp_encap_conf.select_vlan)
+		mplsoudp_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsoudp_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsoudp_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap ip-version ipv4|ipv6 label <label>"
+		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src>"
+		" ip-dst <ip-dst> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap_with_vlan = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> udp-src <udp-src> udp-dst <udp-dst>"
+		" ip-src <ip-src> ip-dst <ip-dst> vlan-tci <vlan-tci>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_vlan,
+		(void *)&cmd_set_mplsoudp_encap_vlan_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP decapsulation details */
+struct cmd_set_mplsoudp_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, mplsoudp,
+				 "mplsoudp_decap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 mplsoudp, "mplsoudp_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsoudp_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsoudp, "mplsoudp_decap") == 0)
+		mplsoudp_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_decap-with-vlan") == 0)
+		mplsoudp_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17911,6 +18313,14 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4a27642..94b9bf7 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -243,6 +243,10 @@ enum index {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_SRC_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
@@ -308,6 +312,22 @@ struct action_nvgre_encap_data {
 	struct rte_flow_item_nvgre item_nvgre;
 };
 
+/** Maximum data size in struct rte_flow_action_raw_encap. */
+#define ACTION_RAW_ENCAP_MAX_DATA 128
+
+/** Storage for struct rte_flow_action_raw_encap including external data. */
+struct action_raw_encap_data {
+	struct rte_flow_action_raw_encap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+	uint8_t preserve[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
+/** Storage for struct rte_flow_action_raw_decap including external data. */
+struct action_raw_decap_data {
+	struct rte_flow_action_raw_decap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -829,6 +849,10 @@ struct parse_action_priv {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
 	ACTION_SET_IPV6_SRC,
@@ -1008,6 +1032,18 @@ static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_l2_encap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_l2_decap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_mplsoudp_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsoudp_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2526,6 +2562,42 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
+	[ACTION_L2_ENCAP] = {
+		.name = "l2_encap",
+		.help = "l2 encapsulation, uses configuration set by"
+			" \"set l2_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_encap,
+	},
+	[ACTION_L2_DECAP] = {
+		.name = "l2_decap",
+		.help = "l2 decapsulation, uses configuration set by"
+			" \"set l2_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_decap,
+	},
+	[ACTION_MPLSOUDP_DECAP] = {
+		.name = "mplsoudp_decap",
+		.help = "mplsoudp decapsulation, uses configuration set by"
+			" \"set mplsoudp_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_decap,
+	},
 	[ACTION_SET_IPV4_SRC] = {
 		.name = "set_ipv4_src",
 		.help = "Set a new IPv4 source address in the outermost"
@@ -3391,6 +3463,313 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+/** Parse l2 encap action. */
+static int
+parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = l2_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+	};
+	header = action_encap_data->data;
+	if (l2_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (l2_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       l2_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       l2_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_encap_conf.select_vlan) {
+		if (l2_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse l2 decap action. */
+static int
+parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = 0,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (l2_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_decap_conf.select_vlan) {
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP encap action. */
+static int
+parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsoudp_encap_conf.ipv4_src,
+			.dst_addr = mplsoudp_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = mplsoudp_encap_conf.udp_src,
+			.dst_port = mplsoudp_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_mpls mpls = { .ttl = 0, };
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsoudp_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_encap_conf.select_vlan) {
+		if (mplsoudp_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsoudp_encap_conf.ipv6_src,
+		       sizeof(mplsoudp_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsoudp_encap_conf.ipv6_dst,
+		       sizeof(mplsoudp_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(mpls.label_tc_s, mplsoudp_encap_conf.label,
+	       RTE_DIM(mplsoudp_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP decap action. */
+static int
+parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.dst_port = rte_cpu_to_be_16(6635),
+		},
+	};
+	struct rte_flow_item_mpls mpls = { .ttl = 0, };
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsoudp_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_decap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_decap_conf.select_vlan) {
+		if (mplsoudp_decap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_decap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 121b756..12daee5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -507,6 +507,46 @@ struct nvgre_encap_conf {
 };
 struct nvgre_encap_conf nvgre_encap_conf;
 
+/* L2 encap parameters. */
+struct l2_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct l2_encap_conf l2_encap_conf;
+
+/* L2 decap parameters. */
+struct l2_decap_conf {
+	uint32_t select_vlan:1;
+};
+struct l2_decap_conf l2_decap_conf;
+
+/* MPLSoUDP encap parameters. */
+struct mplsoudp_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsoudp_encap_conf mplsoudp_encap_conf;
+
+/* MPLSoUDP decap parameters. */
+struct mplsoudp_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsoudp_decap_conf mplsoudp_decap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index ca060e1..fb26315 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1580,6 +1580,63 @@ flow rule using the action nvgre_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config L2 Encap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be used when encapsulating a packet::
+
+ set l2_encap ip-version (ipv4|ipv6) eth-src (eth-src) eth-dst (eth-dst)
+ set l2_encap-with-vlan ip-version (ipv4|ipv6) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any
+following flow rule using the action l2_encap will then use the last
+configuration set. To use a different encapsulation header, one of those
+commands must be called before the flow rule creation.
+
+Config L2 Decap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be removed when decapsulating a packet::
+
+ set l2_decap ip-version (ipv4|ipv6)
+ set l2_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any
+following flow rule using the action l2_decap will then use the last
+configuration set. To use a different encapsulation header, one of those
+commands must be called before the flow rule creation.
+
+Config MPLSoUDP Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an MPLSoUDP tunnel::
+
+ set mplsoudp_encap ip-version (ipv4|ipv6) label (label) udp-src (udp-src) \
+        udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        eth-src (eth-src) eth-dst (eth-dst)
+ set mplsoudp_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoUDP Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to decapsulate an MPLSoUDP packet::
+
+ set mplsoudp_decap ip-version (ipv4|ipv6)
+ set mplsoudp_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_decap will use the last configuration set.
+To have a different decapsulation header, one of those commands must be called
+before the flow rule creation.
+
 Port Functions
 --------------
 
@@ -3771,6 +3828,18 @@ This section lists supported actions and their attributes, if any.
 - ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
   the NVGRE tunnel network overlay from the matched flow.
 
+- ``l2_encap``: Performs an L2 encapsulation; the L2 configuration
+  is done through `Config L2 Encap`_.
+
+- ``l2_decap``: Performs an L2 decapsulation; the L2 configuration
+  is done through `Config L2 Decap`_.
+
+- ``mplsoudp_encap``: Performs an MPLSoUDP encapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Encap outer layers`_.
+
+- ``mplsoudp_decap``: Performs an MPLSoUDP decapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Decap outer layers`_.
+
 - ``set_ipv4_src``: Set a new IPv4 source address in the outermost IPv4 header.
 
   - ``ipv4_addr``: New IPv4 source address.
@@ -4130,6 +4199,112 @@ IPv6 NVGRE outer header::
  testpmd> flow create 0 ingress pattern end actions nvgre_encap /
         queue index 0 / end
 
+Sample L2 encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 encapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_encap ip-version ipv4
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_encap-with-vlan ip-version ipv4 vlan-tci 34
+         eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+Sample L2 decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 decapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_decap
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+        queue index 0 / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_decap-with-vlan
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+         queue index 0 / end
+
+Sample MPLSoUDP encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The MPLSoUDP encapsulation outer layers have default values pre-configured
+in the testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv4 label 4 udp-src 5 udp-dst 10
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv4 label 4 udp-src 5
+        udp-dst 10 ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv6 label 4 udp-src 5 udp-dst 10
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv6 label 4 udp-src 5
+        udp-dst 10 ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+Sample MPLSoUDP decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The MPLSoUDP decapsulation outer layers have default values pre-configured
+in the testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
 BPF Functions
 --------------
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 3/3] app/testpmd: add MPLSoGRE encapsulation
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
  2018-10-16 21:41       ` [PATCH v4 1/3] ethdev: add raw encapsulation action Ori Kam
  2018-10-16 21:41       ` [PATCH v4 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
@ 2018-10-16 21:41       ` Ori Kam
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-16 21:41 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Ori Kam, Shahaf Shuler

Example for MPLSoGRE tunnel:
ETH / IPV4 / GRE / MPLS / IP / L4..L7

In order to encapsulate into such a tunnel, the L2 header of the inner
packet must be removed and the remaining packet encapsulated; this is
done by applying 2 rte_flow commands: l2_decap followed by
mplsogre_encap. Both commands must appear in the same flow, and from
the packet's point of view both actions are applied at the same time
(there is never a point where the packet has no L2 header).

Decapsulating such a tunnel works the other way around: first the
outer tunnel header is removed, and then the new L2 header is applied.
So the commands will be mplsogre_decap / l2_encap.
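
For example, mirroring the sample rules this patch adds to
testpmd_funcs.rst, the two directions can be expressed as:

    testpmd> flow create 0 egress pattern eth / end actions l2_decap /
           mplsogre_encap / end
    testpmd> flow create 0 ingress pattern eth / ipv4 / gre / mpls / end
           actions mplsogre_decap / l2_encap / end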

Due to the complex encapsulation of the MPLSoGRE flow action, and
because testpmd does not allocate memory, this patch adds new commands
in testpmd to initialise a global structure containing the information
necessary to build the outer layers of the packet. This same global
structure is then used by the testpmd flow commands when the
mplsogre_encap and mplsogre_decap actions are parsed; at that point
the conversion into such an action becomes trivial.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 227 ++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 238 ++++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                      |  22 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 103 ++++++++++++
 4 files changed, 579 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e807dbb..6e14345 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15436,6 +15436,229 @@ static void cmd_set_l2_decap_parsed(void *parsed_result,
 	},
 };
 
+/** Set MPLSoGRE encapsulation details */
+struct cmd_set_mplsogre_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, mplsogre,
+				 "mplsogre_encap");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 mplsogre, "mplsogre_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsogre_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsogre_label;
+		uint8_t label[3];
+	} id = {
+		.mplsogre_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsogre, "mplsogre_encap") == 0)
+		mplsogre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_encap-with-vlan") == 0)
+		mplsogre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsogre_encap_conf.label, &id.label[1], 3);
+	if (mplsogre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsogre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsogre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsogre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsogre_encap_conf.ipv6_dst);
+	}
+	if (mplsogre_encap_conf.select_vlan)
+		mplsogre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsogre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsogre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap ip-version ipv4|ipv6 label <label>"
+		" ip-src <ip-src> ip-dst <ip-dst> eth-src <eth-src>"
+		" eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap_with_vlan = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> ip-src <ip-src> ip-dst <ip-dst>"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap_with_vlan,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_vlan,
+		(void *)&cmd_set_mplsogre_encap_vlan_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoGRE decapsulation details */
+struct cmd_set_mplsogre_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, mplsogre,
+				 "mplsogre_decap");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 mplsogre, "mplsogre_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsogre_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsogre, "mplsogre_decap") == 0)
+		mplsogre_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_decap-with-vlan") == 0)
+		mplsogre_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap_with_vlan = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap_with_vlan,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /** Set MPLSoUDP encapsulation details */
 struct cmd_set_mplsoudp_encap_result {
 	cmdline_fixed_string_t set;
@@ -18317,6 +18540,10 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 94b9bf7..1c72ad9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -245,6 +245,8 @@ enum index {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -851,6 +853,8 @@ struct parse_action_priv {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -1038,6 +1042,12 @@ static int parse_vc_action_l2_encap(struct context *, const struct token *,
 static int parse_vc_action_l2_decap(struct context *, const struct token *,
 				    const char *, unsigned int, void *,
 				    unsigned int);
+static int parse_vc_action_mplsogre_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsogre_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_vc_action_mplsoudp_encap(struct context *,
 					  const struct token *, const char *,
 					  unsigned int, void *, unsigned int);
@@ -2562,19 +2572,10 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
-	[ACTION_MPLSOUDP_ENCAP] = {
-		.name = "mplsoudp_encap",
-		.help = "mplsoudp encapsulation, uses configuration set by"
-			" \"set mplsoudp_encap\"",
-		.priv = PRIV_ACTION(RAW_ENCAP,
-				    sizeof(struct action_raw_encap_data)),
-		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
-		.call = parse_vc_action_mplsoudp_encap,
-	},
 	[ACTION_L2_ENCAP] = {
 		.name = "l2_encap",
-		.help = "l2 encapsulation, uses configuration set by"
-			" \"set l2_encp\"",
+		.help = "l2 encap, uses configuration set by"
+			" \"set l2_encap\"",
 		.priv = PRIV_ACTION(RAW_ENCAP,
 				    sizeof(struct action_raw_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
@@ -2589,6 +2590,33 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_l2_decap,
 	},
+	[ACTION_MPLSOGRE_ENCAP] = {
+		.name = "mplsogre_encap",
+		.help = "mplsogre encapsulation, uses configuration set by"
+			" \"set mplsogre_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_encap,
+	},
+	[ACTION_MPLSOGRE_DECAP] = {
+		.name = "mplsogre_decap",
+		.help = "mplsogre decapsulation, uses configuration set by"
+			" \"set mplsogre_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_decap,
+	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
 	[ACTION_MPLSOUDP_DECAP] = {
 		.name = "mplsoudp_decap",
 		.help = "mplsoudp decapsulation, uses configuration set by"
@@ -3579,6 +3607,194 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+#define ETHER_TYPE_MPLS_UNICAST 0x8847
+
+/** Parse MPLSOGRE encap action. */
+static int
+parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsogre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsogre_encap_conf.ipv4_src,
+			.dst_addr = mplsogre_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsogre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsogre_encap_conf.ipv6_src,
+		       sizeof(mplsogre_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsogre_encap_conf.ipv6_dst,
+		       sizeof(mplsogre_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memcpy(mpls.label_tc_s, mplsogre_encap_conf.label,
+	       RTE_DIM(mplsogre_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOGRE decap action. */
+static int
+parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsogre_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse MPLSOUDP encap action. */
 static int
 parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 12daee5..0738105 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -523,6 +523,28 @@ struct l2_decap_conf {
 };
 struct l2_decap_conf l2_decap_conf;
 
+/* MPLSoGRE encap parameters. */
+struct mplsogre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsogre_encap_conf mplsogre_encap_conf;
+
+/* MPLSoGRE decap parameters. */
+struct mplsogre_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsogre_decap_conf mplsogre_decap_conf;
+
 /* MPLSoUDP encap parameters. */
 struct mplsoudp_encap_conf {
 	uint32_t select_ipv4:1;
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index fb26315..8d60bf0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1607,6 +1607,35 @@ flow rule using the action l2_decap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config MPLSoGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an MPLSoGRE tunnel::
+
+ set mplsogre_encap ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst (eth-dst)
+ set mplsogre_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsogre_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoGRE Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to decapsulate an MPLSoGRE packet::
+
+ set mplsogre_decap ip-version (ipv4|ipv6)
+ set mplsogre_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsogre_decap will use the last configuration set.
+To have a different decapsulation header, one of those commands must be called
+before the flow rule creation.
+
 Config MPLSoUDP Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3834,6 +3863,12 @@ This section lists supported actions and their attributes, if any.
 - ``l2_decap``: Performs a L2 decapsulation, L2 configuration
   is done through `Config L2 Decap`_.
 
+- ``mplsogre_encap``: Performs an MPLSoGRE encapsulation; the outer layer
+  configuration is done through `Config MPLSoGRE Encap outer layers`_.
+
+- ``mplsogre_decap``: Performs an MPLSoGRE decapsulation; the outer layer
+  configuration is done through `Config MPLSoGRE Decap outer layers`_.
+
 - ``mplsoudp_encap``: Performs a MPLSoUDP encapsulation, outer layer
   configuration is done through `Config MPLSoUDP Encap outer layers`_.
 
@@ -4237,6 +4272,74 @@ L2 with VXLAN header::
  testpmd> flow create 0 egress pattern eth / end actions l2_encap / mplsoudp_encap /
          queue index 0 / end
 
+Sample MPLSoGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The MPLSoGRE encapsulation outer layers have default values pre-configured
+in the testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+Sample MPLSoGRE decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoGRE decapsulation outer layer has default values pre-configured in testpmd
+source code; they can be changed by using the following commands
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / gre / mpls / end actions
+        mplsogre_decap / l2_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
 Sample MPLSoUDP encapsulation rule
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-16 21:41       ` [PATCH v4 1/3] ethdev: add raw encapsulation action Ori Kam
@ 2018-10-17  7:56         ` Andrew Rybchenko
  2018-10-17  8:43           ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-17  7:56 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

On 10/17/18 12:41 AM, Ori Kam wrote:
> Currenlty the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have the Ethernet header).
> In addtion the parameter to to the encap action is a list of rte items,
> this results in 2 extra translation, between the application to the
> actioni and from the action to the NIC. This results in negetive impact
> on the insertion performance.
>
> Looking forward there are going to be a need to support many more tunnel
> encapsulations. For example MPLSoGRE, MPLSoUDP.
> Adding the new encapsulation will result in duplication of code.
> For example the code for handling NVGRE and VXLAN are exactly the same,
> and each new tunnel will have the same exact structure.
>
> This patch introduce a raw encapsulation that can support L2 tunnel types
> and L3 tunnel types. In addtion the new
> encapsulations commands are using raw buffer inorder to save the
> converstion time, both for the application and the PMD.
>
> In order to encapsulate L3 tunnel type there is a need to use both
> actions in the same rule: The decap to remove the L2 of the original
> packet, and then encap command to encapsulate the packet with the
> tunnel.
> For decap L3 there is also a need to use both commands in the same flow
> first the decap command to remove the outer tunnel header and then encap
> to add the L2 header.
>
> Signed-off-by: Ori Kam <orika@mellanox.com>

[...]

> +Action: ``RAW_DECAP``
> +^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Remove outer header whose template is provided in it's data buffer,

Typo: 'it's' -> its

> +as defined in hte ``rte_flow_action_raw_decap``

Typo above 'hte' -> 'the'.

> +
> +This action modifies the payload of matched flows. The data supplied must
> +be a valid header, either holding layer 2 data in case of removing layer 2
> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
> +tunnel definition starting from layer 2 and moving to the tunel item itself.
> +When applied to the original packet the resulting packet must be a
> +valid packet.
> +
> +.. _table_rte_flow_action_raw_decap:
> +
> +.. table:: RAW_DECAP
> +
> +   +----------------+----------------------------------------+
> +   | Field          | Value                                  |
> +   +================+========================================+
> +   | ``data``       | Decapsulation data                     |

Sorry, I've missed the point why it is here. Is it used for matching?
Why is the size insufficient?

> +   +----------------+----------------------------------------+
> +   | ``size``       | Size of data                           |
> +   +----------------+----------------------------------------+
> +
>   Action: ``SET_IPV4_SRC``
>   ^^^^^^^^^^^^^^^^^^^^^^^^
>   
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index bc9e719..1e5cd73 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -123,6 +123,8 @@ struct rte_flow_desc_data {
>   	MK_FLOW_ACTION(VXLAN_DECAP, 0),
>   	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
>   	MK_FLOW_ACTION(NVGRE_DECAP, 0),
> +	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
> +	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
>   	MK_FLOW_ACTION(SET_IPV4_SRC,
>   		       sizeof(struct rte_flow_action_set_ipv4)),
>   	MK_FLOW_ACTION(SET_IPV4_DST,
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 68bbf57..3ae9de3 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -1508,6 +1508,20 @@ enum rte_flow_action_type {
>   	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
>   
>   	/**
> +	 * Adds outer header whose template is provided in it's data buffer

Adds -> Add, it's -> its

> +	 *
> +	 * See struct rte_flow_action_raw_encap.
> +	 */
> +	RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
> +
> +	/**
> +	 * Remove outer header whose tempalte is provided in it's data buffer.

it's -> its

> +	 *
> +	 * See struct rte_flow_action_raw_decap
> +	 */
> +	RTE_FLOW_ACTION_TYPE_RAW_DECAP,
> +
> +	/**
>   	 * Modify IPv4 source address in the outermost IPv4 header.
>   	 *
>   	 * If flow pattern does not define a valid RTE_FLOW_ITEM_TYPE_IPV4,
> @@ -1946,6 +1960,51 @@ struct rte_flow_action_nvgre_encap {
>    * @warning
>    * @b EXPERIMENTAL: this structure may change without prior notice
>    *
> + * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
> + *
> + * Raw tunnel end-point encapsulation data definition.
> + *
> + * The data holds the headers definitions to be applied on the packet.
> + * The data must start with ETH header up to the tunnel item header itself.
> + * When used right after RAW_DECAP (for decapsulating L3 tunnel type for
> + * example MPLSoGRE) the data will just hold layer 2 header.
> + *
> + * The preserve parameter holds which bits in the packet the PMD is not allowd

allowd -> allowed

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-17  7:56         ` Andrew Rybchenko
@ 2018-10-17  8:43           ` Ori Kam
  2018-10-22 13:06             ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-17  8:43 UTC (permalink / raw)
  To: Andrew Rybchenko, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler



> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Wednesday, October 17, 2018 10:56 AM
> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com; ferruh.yigit@intel.com;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
> 
> On 10/17/18 12:41 AM, Ori Kam wrote:
> Currenlty the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have the Ethernet header).
> In addtion the parameter to to the encap action is a list of rte items,
> this results in 2 extra translation, between the application to the
> actioni and from the action to the NIC. This results in negetive impact
> on the insertion performance.
> 
> Looking forward there are going to be a need to support many more tunnel
> encapsulations. For example MPLSoGRE, MPLSoUDP.
> Adding the new encapsulation will result in duplication of code.
> For example the code for handling NVGRE and VXLAN are exactly the same,
> and each new tunnel will have the same exact structure.
> 
> This patch introduce a raw encapsulation that can support L2 tunnel types
> and L3 tunnel types. In addtion the new
> encapsulations commands are using raw buffer inorder to save the
> converstion time, both for the application and the PMD.
> 
> In order to encapsulate L3 tunnel type there is a need to use both
> actions in the same rule: The decap to remove the L2 of the original
> packet, and then encap command to encapsulate the packet with the
> tunnel.
> For decap L3 there is also a need to use both commands in the same flow
> first the decap command to remove the outer tunnel header and then encap
> to add the L2 header.
> 
> Signed-off-by: Ori Kam <orika@mellanox.com>
> 
> [...]
> 
> 
> 
> +Action: ``RAW_DECAP``
> +^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Remove outer header whose template is provided in it's data buffer,
> 
> Typo: 'it's' -> its
> 
Thanks will fix.

> 
> 
> +as defined in hte ``rte_flow_action_raw_decap``
> 
> Typo above 'hte' -> 'the'.
>
Thanks will fix.
 
> 
> 
> +
> +This action modifies the payload of matched flows. The data supplied must
> +be a valid header, either holding layer 2 data in case of removing layer 2
> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
> +tunnel definition starting from layer 2 and moving to the tunel item itself.
> +When applied to the original packet the resulting packet must be a
> +valid packet.
> +
> +.. _table_rte_flow_action_raw_decap:
> +
> +.. table:: RAW_DECAP
> +
> +   +----------------+----------------------------------------+
> +   | Field          | Value                                  |
> +   +================+========================================+
> +   | ``data``       | Decapsulation data                     |
> 
> Sorry, I've missed the point why it is here. Is it used for matching?
> Why is the size insufficient?
> 
No, it is not used for matching; this is only for the encapsulation data.
The data is for PMDs that need to validate that they can decapsulate
the packet, and some PMDs might need to specify which headers
to remove and not just a number of bytes.

> 
> +   +----------------+----------------------------------------+
> +   | ``size``       | Size of data                           |
> +   +----------------+----------------------------------------+
> +
>  Action: ``SET_IPV4_SRC``
>  ^^^^^^^^^^^^^^^^^^^^^^^^
> 
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index bc9e719..1e5cd73 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -123,6 +123,8 @@ struct rte_flow_desc_data {
>  	MK_FLOW_ACTION(VXLAN_DECAP, 0),
>  	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct
> rte_flow_action_vxlan_encap)),
>  	MK_FLOW_ACTION(NVGRE_DECAP, 0),
> +	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct
> rte_flow_action_raw_encap)),
> +	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct
> rte_flow_action_raw_decap)),
>  	MK_FLOW_ACTION(SET_IPV4_SRC,
>  		       sizeof(struct rte_flow_action_set_ipv4)),
>  	MK_FLOW_ACTION(SET_IPV4_DST,
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 68bbf57..3ae9de3 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -1508,6 +1508,20 @@ enum rte_flow_action_type {
>  	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
> 
>  	/**
> +	 * Adds outer header whose template is provided in it's data buffer
> 
> Adds -> Add, it's -> its
>
Thanks will fix.
 
> 
> 
> +	 *
> +	 * See struct rte_flow_action_raw_encap.
> +	 */
> +	RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
> +
> +	/**
> +	 * Remove outer header whose tempalte is provided in it's data buffer.
> 
> it's -> its
> 
Thanks will fix.
> 
> 
> +	 *
> +	 * See struct rte_flow_action_raw_decap
> +	 */
> +	RTE_FLOW_ACTION_TYPE_RAW_DECAP,
> +
> +	/**
>  	 * Modify IPv4 source address in the outermost IPv4 header.
>  	 *
>  	 * If flow pattern does not define a valid RTE_FLOW_ITEM_TYPE_IPV4,
> @@ -1946,6 +1960,51 @@ struct rte_flow_action_nvgre_encap {
>   * @warning
>   * @b EXPERIMENTAL: this structure may change without prior notice
>   *
> + * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
> + *
> + * Raw tunnel end-point encapsulation data definition.
> + *
> + * The data holds the headers definitions to be applied on the packet.
> + * The data must start with ETH header up to the tunnel item header itself.
> + * When used right after RAW_DECAP (for decapsulating L3 tunnel type for
> + * example MPLSoGRE) the data will just hold layer 2 header.
> + *
> + * The preserve parameter holds which bits in the packet the PMD is not
> allowd
> 
> allowd -> allowed
Thanks will fix.


Best,
Ori


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions
  2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
                         ` (2 preceding siblings ...)
  2018-10-16 21:41       ` [PATCH v4 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
@ 2018-10-17 17:07       ` Ori Kam
  2018-10-17 17:07         ` [PATCH v5 1/3] ethdev: add raw encapsulation action Ori Kam
                           ` (4 more replies)
  3 siblings, 5 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-17 17:07 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This series implements the raw tunnel encapsulation actions
and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in 2 extra translations, between the application and the
action and from the action to the NIC. This results in a negative
impact on the insertion performance.
    
Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in duplication of code;
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel will have the same exact structure.
    
This series introduces a raw encapsulation that can support both L2 and
L3 tunnel encapsulation.
In order to encap an L3 tunnel, for example MPLSoUDP:
ETH / IPV4 / UDP / MPLS / IPV4 / L4 .. L7
When creating the flow rule we add 2 actions: the first one is decap, in
order to remove the original L2 of the packet, and then the encap with the
tunnel data.
Decapsulating such a tunnel is done in the reverse order: first decap the
outer tunnel and then encap the packet with the L2 header.
It is important to notice that from the NIC and PMD point of view both
actions happen simultaneously, meaning that we always have a valid packet.

This series also introduces the following commands for testpmd:
* l2_encap
* l2_decap
* mplsogre_encap
* mplsogre_decap
* mplsoudp_encap
* mplsoudp_decap

along with helper functions to set the headers that will be used for the
actions, the same as with vxlan_encap.

[1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
v5:
 * fix typos.

v4:
 * convert to raw encap/decap, according to Adrien suggestion.
 * keep the old vxlan and nvgre encapsulation commands.

v3:
 * rebase on tip.

v2:
 * add missing decap_l3 structure.
 * fix typo.


Ori Kam (3):
  ethdev: add raw encapsulation action
  app/testpmd: add MPLSoUDP encapsulation
  app/testpmd: add MPLSoGRE encapsulation

 app/test-pmd/cmdline.c                      | 637 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 595 ++++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  62 +++
 doc/guides/prog_guide/rte_flow.rst          |  51 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 278 ++++++++++++
 lib/librte_ethdev/rte_flow.c                |   2 +
 lib/librte_ethdev/rte_flow.h                |  59 +++
 7 files changed, 1684 insertions(+)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v5 1/3] ethdev: add raw encapsulation action
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
@ 2018-10-17 17:07         ` Ori Kam
  2018-10-22 14:15           ` Andrew Rybchenko
  2018-10-17 17:07         ` [PATCH v5 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
                           ` (3 subsequent siblings)
  4 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-17 17:07 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have the Ethernet header).
In addition, the parameter to the encap action is a list of rte items;
this results in 2 extra translations, between the application and the
action and from the action to the NIC. This results in a negative
impact on the insertion performance.

Looking forward, there is going to be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in duplication of code;
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel will have the same exact structure.

This patch introduces a raw encapsulation that can support L2 tunnel
types and L3 tunnel types. In addition, the new
encapsulation commands use a raw buffer in order to save the
conversion time, both for the application and the PMD.

In order to encapsulate an L3 tunnel type there is a need to use both
actions in the same rule: the decap to remove the L2 of the original
packet, and then the encap command to encapsulate the packet with the
tunnel.
For L3 decap there is also a need to use both commands in the same flow:
first the decap command to remove the outer tunnel header and then encap
to add the L2 header.

Signed-off-by: Ori Kam <orika@mellanox.com>

---
v5:
 * fix typos.

---
 doc/guides/prog_guide/rte_flow.rst | 51 ++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.c       |  2 ++
 lib/librte_ethdev/rte_flow.h       | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a5ec441..5212b18 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,57 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``RAW_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^
+
+Add outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_encap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of adding layer 2 after
+decap layer 3 tunnel (for example MPLSoGRE) or complete tunnel definition
+starting from layer 2 and moving to the tunnel item itself. When applied to
+the original packet the resulting packet must be a valid packet.
+
+.. _table_rte_flow_action_raw_encap:
+
+.. table:: RAW_ENCAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Encapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``preserve``   | Bit-mask of data to preserve on output |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data and preserve              |
+   +----------------+----------------------------------------+
+
+Action: ``RAW_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Remove outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_decap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of removing layer 2
+before encapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
+tunnel definition starting from layer 2 and moving to the tunnel item itself.
+When applied to the original packet the resulting packet must be a
+valid packet.
+
+.. _table_rte_flow_action_raw_decap:
+
+.. table:: RAW_DECAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Decapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data                           |
+   +----------------+----------------------------------------+
+
 Action: ``SET_IPV4_SRC``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index bc9e719..1e5cd73 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -123,6 +123,8 @@ struct rte_flow_desc_data {
 	MK_FLOW_ACTION(VXLAN_DECAP, 0),
 	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
 	MK_FLOW_ACTION(NVGRE_DECAP, 0),
+	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
+	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
 	MK_FLOW_ACTION(SET_IPV4_SRC,
 		       sizeof(struct rte_flow_action_set_ipv4)),
 	MK_FLOW_ACTION(SET_IPV4_DST,
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 68bbf57..4066483 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1508,6 +1508,20 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
 
 	/**
+	 * Add outer header whose template is provided in its data buffer
+	 *
+	 * See struct rte_flow_action_raw_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
+
+	/**
+	 * Remove outer header whose template is provided in its data buffer.
+	 *
+	 * See struct rte_flow_action_raw_decap
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_DECAP,
+
+	/**
 	 * Modify IPv4 source address in the outermost IPv4 header.
 	 *
 	 * If flow pattern does not define a valid RTE_FLOW_ITEM_TYPE_IPV4,
@@ -1946,6 +1960,51 @@ struct rte_flow_action_nvgre_encap {
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
  *
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ *
+ * Raw tunnel end-point encapsulation data definition.
+ *
+ * The data holds the headers definitions to be applied on the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right after RAW_DECAP (for decapsulating L3 tunnel type for
+ * example MPLSoGRE) the data will just hold layer 2 header.
+ *
+ * The preserve parameter holds which bits in the packet the PMD is not allowed
+ * to change, this parameter can also be NULL and then the PMD is allowed
+ * to update any field.
+ *
+ * size holds the number of bytes in @p data and @p preserve.
+ */
+struct rte_flow_action_raw_encap {
+	uint8_t *data; /**< Encapsulation data. */
+	uint8_t *preserve; /**< Bit-mask of @p data to preserve on output. */
+	size_t size; /**< Size of @p data and @p preserve. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ *
+ * Raw tunnel end-point decapsulation data definition.
+ *
+ * The data holds the headers definitions to be removed from the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right before RAW_ENCAP (for encapsulating L3 tunnel type for
+ * example MPLSoGRE) the data will just hold layer 2 header.
+ *
+ * size holds the number of bytes in @p data.
+ */
+struct rte_flow_action_raw_decap {
+	uint8_t *data; /**< Decapsulation data. */
+	size_t size; /**< Size of @p data. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_DST
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v5 2/3] app/testpmd: add MPLSoUDP encapsulation
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
  2018-10-17 17:07         ` [PATCH v5 1/3] ethdev: add raw encapsulation action Ori Kam
@ 2018-10-17 17:07         ` Ori Kam
  2018-10-17 17:07         ` [PATCH v5 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
                           ` (2 subsequent siblings)
  4 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-17 17:07 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

MPLSoUDP is an example for L3 tunnel encapsulation.

L3 tunnel type is a tunnel that is missing the layer 2 header of the
inner packet.

Example for MPLSoUDP tunnel:
ETH / IPV4 / UDP / MPLS / IP / L4..L7

In order to encapsulate such a tunnel there is a need to remove the L2
of the inner packet and encap the remaining packet with the tunnel;
this is done by applying 2 rte flow commands: l2_decap followed by
mplsoudp_encap.
Both commands must appear in the same flow, and from the point of view
of the packet both actions are applied at the same time (there is no
point where the packet doesn't have an L2 header).

Decapsulating such a tunnel works the other way: first we need to decap
the outer tunnel header and then apply the new L2.
So the commands will be mplsoudp_decap / l2_encap.
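Mirroring the MPLSoGRE examples documented earlier in this thread, the two directions would pair up roughly as follows in testpmd (a sketch; the matching set mplsoudp_encap / set mplsoudp_decap configuration commands must be issued first):

```
 testpmd> flow create 0 egress pattern eth / end actions l2_decap /
          mplsoudp_encap / end
 testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end
          actions mplsoudp_decap / l2_encap / end
```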

Due to the complex encapsulation of the MPLSoUDP and L2 flow actions,
and based on the fact that testpmd does not allocate memory, this patch
adds a new command in testpmd to initialise global structures containing
the necessary information to make the outer layer of the packet. These
same global structures will then be used by the flow commands in testpmd
when the actions mplsoudp_encap, mplsoudp_decap, l2_encap and l2_decap
are parsed; at this point, the conversion into such actions becomes
trivial.

The l2_encap and l2_decap actions can also be used for other L3 tunnel
types.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 410 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 379 +++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  40 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 175 ++++++++++++
 4 files changed, 1004 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3b469ac..e807dbb 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15282,6 +15282,408 @@ static void cmd_set_nvgre_parsed(void *parsed_result,
 	},
 };
 
+/** Set L2 encapsulation details */
+struct cmd_set_l2_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_encap;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap, "l2_encap");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap,
+				 "l2_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_l2_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_l2_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_l2_encap_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_src);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_dst);
+
+static void cmd_set_l2_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_encap_result *res = parsed_result;
+
+	if (strcmp(res->l2_encap, "l2_encap") == 0)
+		l2_encap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_encap, "l2_encap-with-vlan") == 0)
+		l2_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		l2_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		l2_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	if (l2_encap_conf.select_vlan)
+		l2_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(l2_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(l2_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_l2_encap = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap ip-version ipv4|ipv6"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_encap_with_vlan = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap-with-vlan ip-version ipv4|ipv6"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap_with_vlan,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_vlan,
+		(void *)&cmd_set_l2_encap_vlan_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set L2 decapsulation details */
+struct cmd_set_l2_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_decap;
+	cmdline_fixed_string_t pos_token;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap-with-vlan");
+
+static void cmd_set_l2_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_decap_result *res = parsed_result;
+
+	if (strcmp(res->l2_decap, "l2_decap") == 0)
+		l2_decap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_decap, "l2_decap-with-vlan") == 0)
+		l2_decap_conf.select_vlan = 1;
+}
+
+cmdline_parse_inst_t cmd_set_l2_decap = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_decap_with_vlan = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap-with-vlan",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap_with_vlan,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP encapsulation details */
+struct cmd_set_mplsoudp_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, mplsoudp,
+				 "mplsoudp_encap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 mplsoudp, "mplsoudp_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-src");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_src_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_src,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-dst");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_dst_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_dst,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsoudp_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsoudp_label;
+		uint8_t label[3];
+	} id = {
+		.mplsoudp_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsoudp, "mplsoudp_encap") == 0)
+		mplsoudp_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_encap-with-vlan") == 0)
+		mplsoudp_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsoudp_encap_conf.label, &id.label[1], 3);
+	mplsoudp_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	mplsoudp_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (mplsoudp_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsoudp_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsoudp_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsoudp_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsoudp_encap_conf.ipv6_dst);
+	}
+	if (mplsoudp_encap_conf.select_vlan)
+		mplsoudp_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsoudp_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsoudp_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap ip-version ipv4|ipv6 label <label>"
+		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src>"
+		" ip-dst <ip-dst> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap_with_vlan = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> udp-src <udp-src> udp-dst <udp-dst>"
+		" ip-src <ip-src> ip-dst <ip-dst> vlan-tci <vlan-tci>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_vlan,
+		(void *)&cmd_set_mplsoudp_encap_vlan_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP decapsulation details */
+struct cmd_set_mplsoudp_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, mplsoudp,
+				 "mplsoudp_decap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 mplsoudp, "mplsoudp_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsoudp_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsoudp, "mplsoudp_decap") == 0)
+		mplsoudp_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_decap-with-vlan") == 0)
+		mplsoudp_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17911,6 +18313,14 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4a27642..94b9bf7 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -243,6 +243,10 @@ enum index {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_SRC_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
@@ -308,6 +312,22 @@ struct action_nvgre_encap_data {
 	struct rte_flow_item_nvgre item_nvgre;
 };
 
+/** Maximum data size in struct rte_flow_action_raw_encap. */
+#define ACTION_RAW_ENCAP_MAX_DATA 128
+
+/** Storage for struct rte_flow_action_raw_encap including external data. */
+struct action_raw_encap_data {
+	struct rte_flow_action_raw_encap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+	uint8_t preserve[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
+/** Storage for struct rte_flow_action_raw_decap including external data. */
+struct action_raw_decap_data {
+	struct rte_flow_action_raw_decap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -829,6 +849,10 @@ struct parse_action_priv {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
 	ACTION_SET_IPV6_SRC,
@@ -1008,6 +1032,18 @@ static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_l2_encap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_l2_decap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_mplsoudp_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsoudp_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2526,6 +2562,42 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
+	[ACTION_L2_ENCAP] = {
+		.name = "l2_encap",
+		.help = "l2 encapsulation, uses configuration set by"
+			" \"set l2_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_encap,
+	},
+	[ACTION_L2_DECAP] = {
+		.name = "l2_decap",
+		.help = "l2 decapsulation, uses configuration set by"
+			" \"set l2_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_decap,
+	},
+	[ACTION_MPLSOUDP_DECAP] = {
+		.name = "mplsoudp_decap",
+		.help = "mplsoudp decapsulation, uses configuration set by"
+			" \"set mplsoudp_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_decap,
+	},
 	[ACTION_SET_IPV4_SRC] = {
 		.name = "set_ipv4_src",
 		.help = "Set a new IPv4 source address in the outermost"
@@ -3391,6 +3463,313 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+/** Parse l2 encap action. */
+static int
+parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = l2_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+	};
+	header = action_encap_data->data;
+	if (l2_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (l2_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       l2_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       l2_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_encap_conf.select_vlan) {
+		if (l2_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse l2 decap action. */
+static int
+parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (l2_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_decap_conf.select_vlan) {
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP encap action. */
+static int
+parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsoudp_encap_conf.ipv4_src,
+			.dst_addr = mplsoudp_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = mplsoudp_encap_conf.udp_src,
+			.dst_port = mplsoudp_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_mpls mpls = { .ttl = 0, };
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsoudp_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_encap_conf.select_vlan) {
+		if (mplsoudp_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsoudp_encap_conf.ipv6_src,
+		       sizeof(mplsoudp_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsoudp_encap_conf.ipv6_dst,
+		       sizeof(mplsoudp_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(mpls.label_tc_s, mplsoudp_encap_conf.label,
+	       RTE_DIM(mplsoudp_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP decap action. */
+static int
+parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.dst_port = rte_cpu_to_be_16(6635),
+		},
+	};
+	struct rte_flow_item_mpls mpls = { .ttl = 0, };
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsoudp_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_decap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_decap_conf.select_vlan) {
+		if (mplsoudp_decap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_decap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 121b756..12daee5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -507,6 +507,46 @@ struct nvgre_encap_conf {
 };
 struct nvgre_encap_conf nvgre_encap_conf;
 
+/* L2 encap parameters. */
+struct l2_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct l2_encap_conf l2_encap_conf;
+
+/* L2 decap parameters. */
+struct l2_decap_conf {
+	uint32_t select_vlan:1;
+};
+struct l2_decap_conf l2_decap_conf;
+
+/* MPLSoUDP encap parameters. */
+struct mplsoudp_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsoudp_encap_conf mplsoudp_encap_conf;
+
+/* MPLSoUDP decap parameters. */
+struct mplsoudp_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsoudp_decap_conf mplsoudp_decap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index ca060e1..fb26315 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1580,6 +1580,63 @@ flow rule using the action nvgre_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config L2 Encap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be used when encapsulating a packet with an L2 tunnel::
+
+ set l2_encap ip-version (ipv4|ipv6) eth-src (eth-src) eth-dst (eth-dst)
+ set l2_encap-with-vlan ip-version (ipv4|ipv6) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action l2_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config L2 Decap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be removed when decapsulating a packet with an L2 tunnel::
+
+ set l2_decap ip-version (ipv4|ipv6)
+ set l2_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action l2_decap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoUDP Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layers used to encapsulate a packet inside an MPLSoUDP tunnel::
+
+ set mplsoudp_encap ip-version (ipv4|ipv6) label (label) udp-src (udp-src) \
+        udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        eth-src (eth-src) eth-dst (eth-dst)
+ set mplsoudp_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoUDP Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layers to be stripped when decapsulating an MPLSoUDP packet::
+
+ set mplsoudp_decap ip-version (ipv4|ipv6)
+ set mplsoudp_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_decap will use the last configuration set.
+To have a different decapsulation header, one of those commands must be called
+before the flow rule creation.
+
 Port Functions
 --------------
 
@@ -3771,6 +3828,18 @@ This section lists supported actions and their attributes, if any.
 - ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
   the NVGRE tunnel network overlay from the matched flow.
 
+- ``l2_encap``: Performs an L2 encapsulation; the L2 configuration
+  is done through `Config L2 Encap`_.
+
+- ``l2_decap``: Performs an L2 decapsulation; the L2 configuration
+  is done through `Config L2 Decap`_.
+
+- ``mplsoudp_encap``: Performs an MPLSoUDP encapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Encap outer layers`_.
+
+- ``mplsoudp_decap``: Performs an MPLSoUDP decapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Decap outer layers`_.
+
 - ``set_ipv4_src``: Set a new IPv4 source address in the outermost IPv4 header.
 
   - ``ipv4_addr``: New IPv4 source address.
@@ -4130,6 +4199,112 @@ IPv6 NVGRE outer header::
  testpmd> flow create 0 ingress pattern end actions nvgre_encap /
         queue index 0 / end
 
+Sample L2 encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 encapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_encap ip-version ipv4
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_encap-with-vlan ip-version ipv4 vlan-tci 34
+         eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+Sample L2 decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 decapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_decap
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+        queue index 0 / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_decap-with-vlan
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+         queue index 0 / end
+
+Sample MPLSoUDP encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoUDP encapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv4 label 4 udp-src 5 udp-dst 10
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv4 label 4 udp-src 5
+        udp-dst 10 ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv6 label 4 udp-src 5 udp-dst 10
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv6 label 4 udp-src 5
+        udp-dst 10 ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+Sample MPLSoUDP decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoUDP decapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
 BPF Functions
 --------------
 
-- 
1.8.3.1


* [PATCH v5 3/3] app/testpmd: add MPLSoGRE encapsulation
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
  2018-10-17 17:07         ` [PATCH v5 1/3] ethdev: add raw encapsulation action Ori Kam
  2018-10-17 17:07         ` [PATCH v5 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
@ 2018-10-17 17:07         ` Ori Kam
  2018-10-22 14:45         ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ferruh Yigit
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
  4 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-17 17:07 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Example of an MPLSoGRE tunnel:
ETH / IPV4 / GRE / MPLS / IP / L4..L7

In order to encapsulate such a tunnel, the L2 header of the inner packet
must be removed and the remaining packet encapsulated; this is done by
applying two rte_flow actions, l2_decap followed by mplsogre_encap.
Both actions must appear in the same flow, and from the packet's point
of view both are applied at the same time (there is no point at which
the packet has no L2 header).

Decapsulating such a tunnel works the other way around: first decap
the outer tunnel header, then apply the new L2 header.
So the actions will be mplsogre_decap / l2_encap.

Due to the complex encapsulation of the MPLSoGRE flow action, and
because testpmd does not allocate memory, this patch adds new commands
in testpmd to initialise a global structure containing the necessary
information to build the outer layer of the packet. This global
structure is then used by the testpmd flow commands when the
mplsogre_encap and mplsogre_decap actions are parsed; at that point
the conversion into such actions becomes trivial.

Signed-off-by: Ori Kam <orika@mellanox.com>
---
 app/test-pmd/cmdline.c                      | 227 ++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 238 ++++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                      |  22 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 103 ++++++++++++
 4 files changed, 579 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e807dbb..6e14345 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15436,6 +15436,229 @@ static void cmd_set_l2_decap_parsed(void *parsed_result,
 	},
 };
 
+/** Set MPLSoGRE encapsulation details */
+struct cmd_set_mplsogre_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, mplsogre,
+				 "mplsogre_encap");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 mplsogre, "mplsogre_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsogre_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsogre_label;
+		uint8_t label[3];
+	} id = {
+		.mplsogre_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsogre, "mplsogre_encap") == 0)
+		mplsogre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_encap-with-vlan") == 0)
+		mplsogre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsogre_encap_conf.label, &id.label[1], 3);
+	if (mplsogre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsogre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsogre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsogre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsogre_encap_conf.ipv6_dst);
+	}
+	if (mplsogre_encap_conf.select_vlan)
+		mplsogre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsogre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsogre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap ip-version ipv4|ipv6 label <label>"
+		" ip-src <ip-src> ip-dst <ip-dst> eth-src <eth-src>"
+		" eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap_with_vlan = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> ip-src <ip-src> ip-dst <ip-dst>"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap_with_vlan,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_vlan,
+		(void *)&cmd_set_mplsogre_encap_vlan_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoGRE decapsulation details */
+struct cmd_set_mplsogre_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, mplsogre,
+				 "mplsogre_decap");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 mplsogre, "mplsogre_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsogre_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsogre, "mplsogre_decap") == 0)
+		mplsogre_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_decap-with-vlan") == 0)
+		mplsogre_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap_with_vlan = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap_with_vlan,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /** Set MPLSoUDP encapsulation details */
 struct cmd_set_mplsoudp_encap_result {
 	cmdline_fixed_string_t set;
@@ -18317,6 +18540,10 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 94b9bf7..1c72ad9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -245,6 +245,8 @@ enum index {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -851,6 +853,8 @@ struct parse_action_priv {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -1038,6 +1042,12 @@ static int parse_vc_action_l2_encap(struct context *, const struct token *,
 static int parse_vc_action_l2_decap(struct context *, const struct token *,
 				    const char *, unsigned int, void *,
 				    unsigned int);
+static int parse_vc_action_mplsogre_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsogre_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_vc_action_mplsoudp_encap(struct context *,
 					  const struct token *, const char *,
 					  unsigned int, void *, unsigned int);
@@ -2562,19 +2572,10 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
-	[ACTION_MPLSOUDP_ENCAP] = {
-		.name = "mplsoudp_encap",
-		.help = "mplsoudp encapsulation, uses configuration set by"
-			" \"set mplsoudp_encap\"",
-		.priv = PRIV_ACTION(RAW_ENCAP,
-				    sizeof(struct action_raw_encap_data)),
-		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
-		.call = parse_vc_action_mplsoudp_encap,
-	},
 	[ACTION_L2_ENCAP] = {
 		.name = "l2_encap",
-		.help = "l2 encapsulation, uses configuration set by"
-			" \"set l2_encp\"",
+		.help = "l2 encap, uses configuration set by"
+			" \"set l2_encap\"",
 		.priv = PRIV_ACTION(RAW_ENCAP,
 				    sizeof(struct action_raw_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
@@ -2589,6 +2590,33 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_l2_decap,
 	},
+	[ACTION_MPLSOGRE_ENCAP] = {
+		.name = "mplsogre_encap",
+		.help = "mplsogre encapsulation, uses configuration set by"
+			" \"set mplsogre_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_encap,
+	},
+	[ACTION_MPLSOGRE_DECAP] = {
+		.name = "mplsogre_decap",
+		.help = "mplsogre decapsulation, uses configuration set by"
+			" \"set mplsogre_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_decap,
+	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
 	[ACTION_MPLSOUDP_DECAP] = {
 		.name = "mplsoudp_decap",
 		.help = "mplsoudp decapsulation, uses configuration set by"
@@ -3579,6 +3607,194 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+#define ETHER_TYPE_MPLS_UNICAST 0x8847
+
+/** Parse MPLSOGRE encap action. */
+static int
+parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsogre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsogre_encap_conf.ipv4_src,
+			.dst_addr = mplsogre_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsogre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsogre_encap_conf.ipv6_src,
+		       sizeof(mplsogre_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsogre_encap_conf.ipv6_dst,
+		       sizeof(mplsogre_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memcpy(mpls.label_tc_s, mplsogre_encap_conf.label,
+	       RTE_DIM(mplsogre_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOGRE decap action. */
+static int
+parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsogre_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse MPLSOUDP encap action. */
 static int
 parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 12daee5..0738105 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -523,6 +523,28 @@ struct l2_decap_conf {
 };
 struct l2_decap_conf l2_decap_conf;
 
+/* MPLSoGRE encap parameters. */
+struct mplsogre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsogre_encap_conf mplsogre_encap_conf;
+
+/* MPLSoGRE decap parameters. */
+struct mplsogre_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsogre_decap_conf mplsogre_decap_conf;
+
 /* MPLSoUDP encap parameters. */
 struct mplsoudp_encap_conf {
 	uint32_t select_ipv4:1;
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index fb26315..8d60bf0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1607,6 +1607,35 @@ flow rule using the action l2_decap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config MPLSoGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a MPLSoGRE tunnel::
+
+ set mplsogre_encap ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst (eth-dst)
+ set mplsogre_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the mplsogre_encap action will use the last configuration set.
+To use a different encapsulation header, one of these commands must be called
+before creating the flow rule.
+
+Config MPLSoGRE Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to decapsulate an MPLSoGRE packet::
+
+ set mplsogre_decap ip-version (ipv4|ipv6)
+ set mplsogre_decap-with-vlan ip-version (ipv4|ipv6)
+
+These commands set an internal configuration inside testpmd; any following
+flow rule using the mplsogre_decap action will use the last configuration set.
+To use a different decapsulation header, one of these commands must be called
+before creating the flow rule.
+
 Config MPLSoUDP Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3834,6 +3863,12 @@ This section lists supported actions and their attributes, if any.
 - ``l2_decap``: Performs a L2 decapsulation, L2 configuration
   is done through `Config L2 Decap`_.
 
+- ``mplsogre_encap``: Performs an MPLSoGRE encapsulation; the outer-layer
+  configuration is set through `Config MPLSoGRE Encap outer layers`_.
+
+- ``mplsogre_decap``: Performs an MPLSoGRE decapsulation; the outer-layer
+  configuration is set through `Config MPLSoGRE Decap outer layers`_.
+
 - ``mplsoudp_encap``: Performs a MPLSoUDP encapsulation, outer layer
   configuration is done through `Config MPLSoUDP Encap outer layers`_.
 
@@ -4237,6 +4272,74 @@ L2 with VXLAN header::
  testpmd> flow create 0 egress pattern eth / end actions l2_encap / mplsoudp_encap /
          queue index 0 / end
 
+Sample MPLSoGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoGRE encapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+Sample MPLSoGRE decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoGRE decapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / gre / mpls / end actions
+        mplsogre_decap / l2_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
 Sample MPLSoUDP encapsulation rule
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
1.8.3.1


* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-17  8:43           ` Ori Kam
@ 2018-10-22 13:06             ` Andrew Rybchenko
  2018-10-22 13:19               ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-22 13:06 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

On 10/17/18 11:43 AM, Ori Kam wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>> Sent: Wednesday, October 17, 2018 10:56 AM
>> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
>> jingjing.wu@intel.com; bernard.iremonger@intel.com; ferruh.yigit@intel.com;
>> stephen@networkplumber.org; Adrien Mazarguil
>> <adrien.mazarguil@6wind.com>
>> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
>> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
>> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
>> <shahafs@mellanox.com>
>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
>>
>> On 10/17/18 12:41 AM, Ori Kam wrote:
>> Currenlty the encap/decap actions only support encapsulation
>> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
>> the inner packet has a valid Ethernet header, while L3 encapsulation
>> is where the inner packet doesn't have the Ethernet header).
>> In addtion the parameter to to the encap action is a list of rte items,
>> this results in 2 extra translation, between the application to the
>> actioni and from the action to the NIC. This results in negetive impact
>> on the insertion performance.
>>
>> Looking forward there are going to be a need to support many more tunnel
>> encapsulations. For example MPLSoGRE, MPLSoUDP.
>> Adding the new encapsulation will result in duplication of code.
>> For example the code for handling NVGRE and VXLAN are exactly the same,
>> and each new tunnel will have the same exact structure.
>>
>> This patch introduce a raw encapsulation that can support L2 tunnel types
>> and L3 tunnel types. In addtion the new
>> encapsulations commands are using raw buffer inorder to save the
>> converstion time, both for the application and the PMD.
>>
>> In order to encapsulate L3 tunnel type there is a need to use both
>> actions in the same rule: The decap to remove the L2 of the original
>> packet, and then encap command to encapsulate the packet with the
>> tunnel.
>> For decap L3 there is also a need to use both commands in the same flow
>> first the decap command to remove the outer tunnel header and then encap
>> to add the L2 header.
>>
>> Signed-off-by: Ori Kam mailto:orika@mellanox.com

[...]

>> +
>> +This action modifies the payload of matched flows. The data supplied must
>> +be a valid header, either holding layer 2 data in case of removing layer 2
>> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or complete
>> +tunnel definition starting from layer 2 and moving to the tunel item itself.
>> +When applied to the original packet the resulting packet must be a
>> +valid packet.
>> +
>> +.. _table_rte_flow_action_raw_decap:
>> +
>> +.. table:: RAW_DECAP
>> +
>> +   +----------------+----------------------------------------+
>> +   | Field          | Value                                  |
>> +   +================+========================================+
>> +   | ``data``       | Decapsulation data                     |
>>
>> Sorry, I've missed the point why it is here. Is it used for matching?
>> Why is the size insufficient?
>>
> No it is not used for matching this is only for the encapsulation data.
> The data is for PMD that needs to validate that they can decapsulate
> The packet, and on some PMD there might need the specify which headers
> to remove and not just number of bytes.

Sorry, but I still don't understand. How should PMD or HW use it?
I guess the main problem here is that it is a generic action.
If it is VXLAN_DECAP, it would not be a problem and neither
size nor data would be required.

>> +   +----------------+----------------------------------------+
>> +   | ``size``       | Size of data                           |
>> +   +----------------+----------------------------------------+
>> +
>>   Action: ``SET_IPV4_SRC``
>>   ^^^^^^^^^^^^^^^^^^^^^^^^

Andrew.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-22 13:06             ` Andrew Rybchenko
@ 2018-10-22 13:19               ` Ori Kam
  2018-10-22 13:27                 ` Andrew Rybchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-22 13:19 UTC (permalink / raw)
  To: Andrew Rybchenko, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler



> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Monday, October 22, 2018 4:06 PM
> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com; ferruh.yigit@intel.com;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
> 
> On 10/17/18 11:43 AM, Ori Kam wrote:
> >> -----Original Message-----
> >> From: Andrew Rybchenko <arybchenko@solarflare.com>
> >> Sent: Wednesday, October 17, 2018 10:56 AM
> >> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> >> jingjing.wu@intel.com; bernard.iremonger@intel.com;
> ferruh.yigit@intel.com;
> >> stephen@networkplumber.org; Adrien Mazarguil
> >> <adrien.mazarguil@6wind.com>
> >> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> >> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> >> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> >> <shahafs@mellanox.com>
> >> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
> >>
> >> On 10/17/18 12:41 AM, Ori Kam wrote:
> >> Currenlty the encap/decap actions only support encapsulation
> >> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> >> the inner packet has a valid Ethernet header, while L3 encapsulation
> >> is where the inner packet doesn't have the Ethernet header).
> >> In addtion the parameter to to the encap action is a list of rte items,
> >> this results in 2 extra translation, between the application to the
> >> actioni and from the action to the NIC. This results in negetive impact
> >> on the insertion performance.
> >>
> >> Looking forward there are going to be a need to support many more tunnel
> >> encapsulations. For example MPLSoGRE, MPLSoUDP.
> >> Adding the new encapsulation will result in duplication of code.
> >> For example the code for handling NVGRE and VXLAN are exactly the same,
> >> and each new tunnel will have the same exact structure.
> >>
> >> This patch introduce a raw encapsulation that can support L2 tunnel types
> >> and L3 tunnel types. In addtion the new
> >> encapsulations commands are using raw buffer inorder to save the
> >> converstion time, both for the application and the PMD.
> >>
> >> In order to encapsulate L3 tunnel type there is a need to use both
> >> actions in the same rule: The decap to remove the L2 of the original
> >> packet, and then encap command to encapsulate the packet with the
> >> tunnel.
> >> For decap L3 there is also a need to use both commands in the same flow
> >> first the decap command to remove the outer tunnel header and then encap
> >> to add the L2 header.
> >>
> >> Signed-off-by: Ori Kam mailto:orika@mellanox.com
> 
> [...]
> 
> >> +
> >> +This action modifies the payload of matched flows. The data supplied must
> >> +be a valid header, either holding layer 2 data in case of removing layer 2
> >> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or
> complete
> >> +tunnel definition starting from layer 2 and moving to the tunel item itself.
> >> +When applied to the original packet the resulting packet must be a
> >> +valid packet.
> >> +
> >> +.. _table_rte_flow_action_raw_decap:
> >> +
> >> +.. table:: RAW_DECAP
> >> +
> >> +   +----------------+----------------------------------------+
> >> +   | Field          | Value                                  |
> >> +   +================+========================================+
> >> +   | ``data``       | Decapsulation data                     |
> >>
> >> Sorry, I've missed the point why it is here. Is it used for matching?
> >> Why is the size insufficient?
> >>
> > No it is not used for matching this is only for the encapsulation data.
> > The data is for PMD that needs to validate that they can decapsulate
> > The packet, and on some PMD there might need the specify which headers
> > to remove and not just number of bytes.
> 
> Sorry, but I still don't understand. How should PMD or HW use it?
> I guess the main problem here is that it is a generic action.
> If it is VXLAN_DECAP, it would not be a problem and neither
> size nor data would be required.
> 

The data is a buffer holding the encap/decap headers, so the PMD can parse this
data, check its validity, and verify that the HW supports it.
Some NICs will not use it; others can check whether the tunnel request is valid.
For example, assume that some tunnel encapsulation is supported on one FW version
but not on another: the PMD can inspect the encapsulation data to determine the
requested tunnel type and, based on the FW capabilities, return success or failure.

This was one of Adrien's requests.
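To illustrate the point (an editor's sketch, not PMD code from this series): a PMD receiving a raw encap/decap buffer could walk the supplied outer headers to classify the tunnel before accepting the rule. The `classify_raw_header` helper and `tunnel_kind` enum below are hypothetical; a real PMD would also have to handle VLAN tags, IPv6 outer headers, and IPv4 options.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical tunnel identifiers -- not part of the rte_flow API. */
enum tunnel_kind { TUNNEL_UNKNOWN, TUNNEL_GRE, TUNNEL_UDP };

/*
 * Walk the raw header buffer supplied with a RAW_ENCAP/RAW_DECAP
 * action and classify the tunnel, so the PMD can accept or reject
 * the rule up front (e.g. against FW capabilities).
 * Assumes an outer Ethernet header (14 bytes, no VLAN) followed by
 * IPv4 with no options -- a deliberate simplification.
 */
static enum tunnel_kind
classify_raw_header(const uint8_t *data, size_t size)
{
	if (size < 14 + 20)             /* too short for Ethernet + IPv4 */
		return TUNNEL_UNKNOWN;
	uint16_t ethertype = (uint16_t)(data[12] << 8 | data[13]);
	if (ethertype != 0x0800)        /* IPv4 only in this sketch */
		return TUNNEL_UNKNOWN;
	uint8_t proto = data[14 + 9];   /* IPv4 protocol field */
	switch (proto) {
	case 47: return TUNNEL_GRE;     /* e.g. MPLSoGRE */
	case 17: return TUNNEL_UDP;     /* e.g. MPLSoUDP, VXLAN */
	default: return TUNNEL_UNKNOWN;
	}
}
```

A PMD that only supports, say, GRE-based tunnels on a given FW version could then fail rule validation when the classifier returns anything else.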

> >> +   +----------------+----------------------------------------+
> >> +   | ``size``       | Size of data                           |
> >> +   +----------------+----------------------------------------+
> >> +
> >>   Action: ``SET_IPV4_SRC``
> >>   ^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Andrew.

Thanks,
Ori

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-22 13:19               ` Ori Kam
@ 2018-10-22 13:27                 ` Andrew Rybchenko
  2018-10-22 13:32                   ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-22 13:27 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

On 10/22/18 4:19 PM, Ori Kam wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>> Sent: Monday, October 22, 2018 4:06 PM
>> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
>> jingjing.wu@intel.com; bernard.iremonger@intel.com; ferruh.yigit@intel.com;
>> stephen@networkplumber.org; Adrien Mazarguil
>> <adrien.mazarguil@6wind.com>
>> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
>> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
>> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
>> <shahafs@mellanox.com>
>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
>>
>> On 10/17/18 11:43 AM, Ori Kam wrote:
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> Sent: Wednesday, October 17, 2018 10:56 AM
>>>> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
>>>> jingjing.wu@intel.com; bernard.iremonger@intel.com;
>> ferruh.yigit@intel.com;
>>>> stephen@networkplumber.org; Adrien Mazarguil
>>>> <adrien.mazarguil@6wind.com>
>>>> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
>>>> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
>>>> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
>>>> <shahafs@mellanox.com>
>>>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
>>>>
>>>> On 10/17/18 12:41 AM, Ori Kam wrote:
>>>> Currenlty the encap/decap actions only support encapsulation
>>>> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
>>>> the inner packet has a valid Ethernet header, while L3 encapsulation
>>>> is where the inner packet doesn't have the Ethernet header).
>>>> In addtion the parameter to to the encap action is a list of rte items,
>>>> this results in 2 extra translation, between the application to the
>>>> actioni and from the action to the NIC. This results in negetive impact
>>>> on the insertion performance.
>>>>
>>>> Looking forward there are going to be a need to support many more tunnel
>>>> encapsulations. For example MPLSoGRE, MPLSoUDP.
>>>> Adding the new encapsulation will result in duplication of code.
>>>> For example the code for handling NVGRE and VXLAN are exactly the same,
>>>> and each new tunnel will have the same exact structure.
>>>>
>>>> This patch introduce a raw encapsulation that can support L2 tunnel types
>>>> and L3 tunnel types. In addtion the new
>>>> encapsulations commands are using raw buffer inorder to save the
>>>> converstion time, both for the application and the PMD.
>>>>
>>>> In order to encapsulate L3 tunnel type there is a need to use both
>>>> actions in the same rule: The decap to remove the L2 of the original
>>>> packet, and then encap command to encapsulate the packet with the
>>>> tunnel.
>>>> For decap L3 there is also a need to use both commands in the same flow
>>>> first the decap command to remove the outer tunnel header and then encap
>>>> to add the L2 header.
>>>>
>>>> Signed-off-by: Ori Kam mailto:orika@mellanox.com
>> [...]
>>
>>>> +
>>>> +This action modifies the payload of matched flows. The data supplied must
>>>> +be a valid header, either holding layer 2 data in case of removing layer 2
>>>> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or
>> complete
>>>> +tunnel definition starting from layer 2 and moving to the tunel item itself.
>>>> +When applied to the original packet the resulting packet must be a
>>>> +valid packet.
>>>> +
>>>> +.. _table_rte_flow_action_raw_decap:
>>>> +
>>>> +.. table:: RAW_DECAP
>>>> +
>>>> +   +----------------+----------------------------------------+
>>>> +   | Field          | Value                                  |
>>>> +   +================+========================================+
>>>> +   | ``data``       | Decapsulation data                     |
>>>>
>>>> Sorry, I've missed the point why it is here. Is it used for matching?
>>>> Why is the size insufficient?
>>>>
>>> No it is not used for matching this is only for the encapsulation data.
>>> The data is for PMD that needs to validate that they can decapsulate
>>> The packet, and on some PMD there might need the specify which headers
>>> to remove and not just number of bytes.
>> Sorry, but I still don't understand. How should PMD or HW use it?
>> I guess the main problem here is that it is a generic action.
>> If it is VXLAN_DECAP, it would not be a problem and neither
>> size nor data would be required.
>>
> The data is buffer of the encap/decap headers, so the PMD can parse the this data
> and check the validity and if the HW supports it.
> Some NICs will not use this others can check if the tunnel request is valid.
> For example let's assume that some tunnel encapsulation is supported on some FW version
> and not supported in other version so the PMD can check the encapsulation data to see
> what is the requested tunnel type and the FW capabilities to return success or fail.

OK, I see. Could you improve the action description to make this clear?
Right now the description says nothing about it.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 1/3] ethdev: add raw encapsulation action
  2018-10-22 13:27                 ` Andrew Rybchenko
@ 2018-10-22 13:32                   ` Ori Kam
  0 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-22 13:32 UTC (permalink / raw)
  To: Andrew Rybchenko, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler



> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Monday, October 22, 2018 4:28 PM
> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com; ferruh.yigit@intel.com;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
> 
> On 10/22/18 4:19 PM, Ori Kam wrote:
> >> -----Original Message-----
> >> From: Andrew Rybchenko <arybchenko@solarflare.com>
> >> Sent: Monday, October 22, 2018 4:06 PM
> >> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> >> jingjing.wu@intel.com; bernard.iremonger@intel.com;
> ferruh.yigit@intel.com;
> >> stephen@networkplumber.org; Adrien Mazarguil
> >> <adrien.mazarguil@6wind.com>
> >> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> >> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> >> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> >> <shahafs@mellanox.com>
> >> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation action
> >>
> >> On 10/17/18 11:43 AM, Ori Kam wrote:
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <arybchenko@solarflare.com>
> >>>> Sent: Wednesday, October 17, 2018 10:56 AM
> >>>> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> >>>> jingjing.wu@intel.com; bernard.iremonger@intel.com;
> >> ferruh.yigit@intel.com;
> >>>> stephen@networkplumber.org; Adrien Mazarguil
> >>>> <adrien.mazarguil@6wind.com>
> >>>> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas
> Monjalon
> >>>> <thomas@monjalon.net>; Nélio Laranjeiro
> <nelio.laranjeiro@6wind.com>;
> >>>> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> >>>> <shahafs@mellanox.com>
> >>>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: add raw encapsulation
> action
> >>>>
> >>>> On 10/17/18 12:41 AM, Ori Kam wrote:
> >>>> Currenlty the encap/decap actions only support encapsulation
> >>>> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> >>>> the inner packet has a valid Ethernet header, while L3 encapsulation
> >>>> is where the inner packet doesn't have the Ethernet header).
> >>>> In addtion the parameter to to the encap action is a list of rte items,
> >>>> this results in 2 extra translation, between the application to the
> >>>> actioni and from the action to the NIC. This results in negetive impact
> >>>> on the insertion performance.
> >>>>
> >>>> Looking forward there are going to be a need to support many more
> tunnel
> >>>> encapsulations. For example MPLSoGRE, MPLSoUDP.
> >>>> Adding the new encapsulation will result in duplication of code.
> >>>> For example the code for handling NVGRE and VXLAN are exactly the
> same,
> >>>> and each new tunnel will have the same exact structure.
> >>>>
> >>>> This patch introduce a raw encapsulation that can support L2 tunnel types
> >>>> and L3 tunnel types. In addtion the new
> >>>> encapsulations commands are using raw buffer inorder to save the
> >>>> converstion time, both for the application and the PMD.
> >>>>
> >>>> In order to encapsulate L3 tunnel type there is a need to use both
> >>>> actions in the same rule: The decap to remove the L2 of the original
> >>>> packet, and then encap command to encapsulate the packet with the
> >>>> tunnel.
> >>>> For decap L3 there is also a need to use both commands in the same flow
> >>>> first the decap command to remove the outer tunnel header and then
> encap
> >>>> to add the L2 header.
> >>>>
> >>>> Signed-off-by: Ori Kam mailto:orika@mellanox.com
> >> [...]
> >>
> >>>> +
> >>>> +This action modifies the payload of matched flows. The data supplied
> must
> >>>> +be a valid header, either holding layer 2 data in case of removing layer 2
> >>>> +before incapsulation of layer 3 tunnel (for example MPLSoGRE) or
> >> complete
> >>>> +tunnel definition starting from layer 2 and moving to the tunel item
> itself.
> >>>> +When applied to the original packet the resulting packet must be a
> >>>> +valid packet.
> >>>> +
> >>>> +.. _table_rte_flow_action_raw_decap:
> >>>> +
> >>>> +.. table:: RAW_DECAP
> >>>> +
> >>>> +   +----------------+----------------------------------------+
> >>>> +   | Field          | Value                                  |
> >>>> +
> +================+========================================+
> >>>> +   | ``data``       | Decapsulation data                     |
> >>>>
> >>>> Sorry, I've missed the point why it is here. Is it used for matching?
> >>>> Why is the size insufficient?
> >>>>
> >>> No it is not used for matching this is only for the encapsulation data.
> >>> The data is for PMD that needs to validate that they can decapsulate
> >>> The packet, and on some PMD there might need the specify which headers
> >>> to remove and not just number of bytes.
> >> Sorry, but I still don't understand. How should PMD or HW use it?
> >> I guess the main problem here is that it is a generic action.
> >> If it is VXLAN_DECAP, it would not be a problem and neither
> >> size nor data would be required.
> >>
> > The data is buffer of the encap/decap headers, so the PMD can parse the this
> data
> > and check the validity and if the HW supports it.
> > Some NICs will not use this others can check if the tunnel request is valid.
> > For example let's assume that some tunnel encapsulation is supported on
> some FW version
> > and not supported in other version so the PMD can check the encapsulation
> data to see
> > what is the requested tunnel type and the FW capabilities to return success or
> fail.
> 
> OK, I see. Could you improve the action description to make it clear.
> Right now the description says nothing about it.
> 

Sure I will update it.

Ori


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v5 1/3] ethdev: add raw encapsulation action
  2018-10-17 17:07         ` [PATCH v5 1/3] ethdev: add raw encapsulation action Ori Kam
@ 2018-10-22 14:15           ` Andrew Rybchenko
  2018-10-22 14:31             ` Ori Kam
  0 siblings, 1 reply; 53+ messages in thread
From: Andrew Rybchenko @ 2018-10-22 14:15 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/17/18 8:07 PM, Ori Kam wrote:
> Currenlty the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have the Ethernet header).
> In addtion the parameter to to the encap action is a list of rte items,
> this results in 2 extra translation, between the application to the
> actioni and from the action to the NIC. This results in negative impact
> on the insertion performance.
>
> Looking forward there are going to be a need to support many more tunnel
> encapsulations. For example MPLSoGRE, MPLSoUDP.
> Adding the new encapsulation will result in duplication of code.
> For example the code for handling NVGRE and VXLAN are exactly the same,
> and each new tunnel will have the same exact structure.
>
> This patch introduce a raw encapsulation that can support L2 tunnel types
> and L3 tunnel types. In addtion the new
> encapsulations commands are using raw buffer inorder to save the
> converstion time, both for the application and the PMD.
>
> In order to encapsulate L3 tunnel type there is a need to use both
> actions in the same rule: The decap to remove the L2 of the original
> packet, and then encap command to encapsulate the packet with the
> tunnel.
> For decap L3 there is also a need to use both commands in the same flow
> first the decap command to remove the outer tunnel header and then encap
> to add the L2 header.
>
> Signed-off-by: Ori Kam <orika@mellanox.com>

One nit below

Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>

[...]

> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index a5ec441..5212b18 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2076,6 +2076,57 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
>   
>   This action modifies the payload of matched flows.
>   
> +Action: ``RAW_ENCAP``
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +Adds outer header whose template is provided in it's data buffer,

it's -> its

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v5 1/3] ethdev: add raw encapsulation action
  2018-10-22 14:15           ` Andrew Rybchenko
@ 2018-10-22 14:31             ` Ori Kam
  0 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-22 14:31 UTC (permalink / raw)
  To: Andrew Rybchenko, wenzhuo.lu, jingjing.wu, bernard.iremonger,
	ferruh.yigit, stephen, Adrien Mazarguil
  Cc: dev, Dekel Peled, Thomas Monjalon, Nélio Laranjeiro,
	Yongseok Koh, Shahaf Shuler

Just a clarification: following a talk with Andrew regarding his comments
about the missing documentation in v4, it was agreed that no documentation
change is needed.

The only remaining comment is the typo, which Ferruh agreed to correct when
applying the patch.

Thanks,

Ori

> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Monday, October 22, 2018 5:16 PM
> To: Ori Kam <orika@mellanox.com>; wenzhuo.lu@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com;
> arybchenko@solarflare.com; ferruh.yigit@intel.com;
> stephen@networkplumber.org; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>; Nélio Laranjeiro <nelio.laranjeiro@6wind.com>;
> Yongseok Koh <yskoh@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH v5 1/3] ethdev: add raw encapsulation action
> 
> On 10/17/18 8:07 PM, Ori Kam wrote:
> > Currenlty the encap/decap actions only support encapsulation
> > of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> > the inner packet has a valid Ethernet header, while L3 encapsulation
> > is where the inner packet doesn't have the Ethernet header).
> > In addtion the parameter to to the encap action is a list of rte items,
> > this results in 2 extra translation, between the application to the
> > actioni and from the action to the NIC. This results in negative impact
> > on the insertion performance.
> >
> > Looking forward there are going to be a need to support many more tunnel
> > encapsulations. For example MPLSoGRE, MPLSoUDP.
> > Adding the new encapsulation will result in duplication of code.
> > For example the code for handling NVGRE and VXLAN are exactly the same,
> > and each new tunnel will have the same exact structure.
> >
> > This patch introduce a raw encapsulation that can support L2 tunnel types
> > and L3 tunnel types. In addtion the new
> > encapsulations commands are using raw buffer inorder to save the
> > converstion time, both for the application and the PMD.
> >
> > In order to encapsulate L3 tunnel type there is a need to use both
> > actions in the same rule: The decap to remove the L2 of the original
> > packet, and then encap command to encapsulate the packet with the
> > tunnel.
> > For decap L3 there is also a need to use both commands in the same flow
> > first the decap command to remove the outer tunnel header and then encap
> > to add the L2 header.
> >
> > Signed-off-by: Ori Kam <orika@mellanox.com>
> 
> One nit below
> 
> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
> 
> [...]
> 
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index a5ec441..5212b18 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -2076,6 +2076,57 @@ RTE_FLOW_ERROR_TYPE_ACTION error should
> be returned.
> >
> >   This action modifies the payload of matched flows.
> >
> > +Action: ``RAW_ENCAP``
> > +^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Adds outer header whose template is provided in it's data buffer,
> 
> it's -> its


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
                           ` (2 preceding siblings ...)
  2018-10-17 17:07         ` [PATCH v5 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
@ 2018-10-22 14:45         ` Ferruh Yigit
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
  4 siblings, 0 replies; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-22 14:45 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/17/2018 6:07 PM, Ori Kam wrote:
> This series implement the raw tunnel encapsulation actions
> and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> 
> Currenlty the encap/decap actions only support encapsulation
> of VXLAN and NVGRE L2 packets (L2 encapsulation is where
> the inner packet has a valid Ethernet header, while L3 encapsulation
> is where the inner packet doesn't have the Ethernet header).
> In addtion the parameter to to the encap action is a list of rte items,
> this results in 2 extra translation, between the application to the action
> and from the action to the NIC. This results in negetive impact on the
> insertion performance.
>     
> Looking forward there are going to be a need to support many more tunnel
> encapsulations. For example MPLSoGRE, MPLSoUDP.
> Adding the new encapsulation will result in duplication of code.
> For example the code for handling NVGRE and VXLAN are exactly the same,
> and each new tunnel will have the same exact structure.
>     
> This series introduce a raw encapsulation that can support both L2 and L3
> tunnel encapsulation.
> In order to encap l3 tunnel for example MPLSoDUP:
> ETH / IPV4 / UDP / MPLS / IPV4 / L4 .. L7
> When creating the flow rule we add 2 actions, the first one is decap in order
> to remove the original L2 of the packet and then the encap with the tunnel data.
> Decapsulating such a tunnel is done in the following order, first decap the
> outer tunnel and then encap the packet with the L2 header.
> It is important to notice that from the Nic and PMD both actionsn happens
> simultaneously, meaning that at we are always having a valid packet.
> 
> This series also inroduce the following commands for testpmd:
> * l2_encap
> * l2_decap
> * mplsogre_encap
> * mplsogre_decap
> * mplsoudp_encap
> * mplsoudp_decap
> 
> along with helper function to set teh headers that will be used for the actions,
> the same as with vxlan_encap.
> 
> [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
> v4:
>  * fix typos.
> 
> v4:
>  * convert to raw encap/decap, according to Adrien suggestion.
>  * keep the old vxlan and nvgre encapsulation commands.
> 
> v3:
>  * rebase on tip.
> 
> v2:
>  * add missing decap_l3 structure.
>  * fix typo.
> 
> 
> Ori Kam (3):
>   ethdev: add raw encapsulation action
>   app/testpmd: add MPLSoUDP encapsulation
>   app/testpmd: add MPLSoGRE encapsulation

Getting the following build error with gcc; can you please check:

.../app/test-pmd/cmdline_flow.c: In function ‘parse_vc_action_mplsoudp_decap’:
.../app/test-pmd/cmdline_flow.c:4065:2: error: ‘mpls’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
  memcpy(header, &mpls, sizeof(mpls));
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../app/test-pmd/cmdline_flow.c: In function ‘parse_vc_action_mplsogre_decap’:
.../app/test-pmd/cmdline_flow.c:3874:2: error: ‘mpls’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
  memcpy(header, &mpls, sizeof(mpls));
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation
  2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
                           ` (3 preceding siblings ...)
  2018-10-22 14:45         ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ferruh Yigit
@ 2018-10-22 17:38         ` Ori Kam
  2018-10-22 17:38           ` [PATCH v6 1/3] ethdev: add raw encapsulation action Ori Kam
                             ` (3 more replies)
  4 siblings, 4 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-22 17:38 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

This series implements the raw tunnel encapsulation actions
and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have an Ethernet header).
In addition, the parameter to the encap action is a list of rte items,
which results in two extra translations: from the application to the
action and from the action to the NIC. This has a negative impact
on insertion performance.
    
Looking forward, there will be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in code duplication:
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel would have the same exact structure.
    
This series introduces a raw encapsulation that can support both L2 and
L3 tunnel encapsulation.
To encapsulate an L3 tunnel, for example MPLSoUDP:
ETH / IPV4 / UDP / MPLS / IPV4 / L4 .. L7
when creating the flow rule we add two actions: the first one is a
decap, to remove the original L2 header of the packet, followed by the
encap with the tunnel data.
Decapsulating such a tunnel is done in the reverse order: first decap
the outer tunnel and then encap the packet with the L2 header.
It is important to notice that from the NIC and PMD point of view both
actions happen simultaneously, meaning the packet is always valid.

This series also introduces the following commands for testpmd:
* l2_encap
* l2_decap
* mplsogre_encap
* mplsogre_decap
* mplsoudp_encap
* mplsoudp_decap

along with helper functions to set the headers that will be used for
the actions, the same as with vxlan_encap.

[1]https://mails.dpdk.org/archives/dev/2018-August/109944.html

v6:
 * fix compilation error
 * fix typo.

v5:
 * fix typos.

v4:
 * convert to raw encap/decap, according to Adrien suggestion.
 * keep the old vxlan and nvgre encapsulation commands.

v3:
 * rebase on tip.

v2:
 * add missing decap_l3 structure.
 * fix typo.



Ori Kam (3):
  ethdev: add raw encapsulation action
  app/testpmd: add MPLSoUDP encapsulation
  app/testpmd: add MPLSoGRE encapsulation

 app/test-pmd/cmdline.c                      | 637 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 597 ++++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  62 +++
 doc/guides/prog_guide/rte_flow.rst          |  51 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 278 ++++++++++++
 lib/librte_ethdev/rte_flow.c                |   2 +
 lib/librte_ethdev/rte_flow.h                |  59 +++
 7 files changed, 1686 insertions(+)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v6 1/3] ethdev: add raw encapsulation action
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
@ 2018-10-22 17:38           ` Ori Kam
  2018-10-22 17:38           ` [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 53+ messages in thread
From: Ori Kam @ 2018-10-22 17:38 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Currently the encap/decap actions only support encapsulation
of VXLAN and NVGRE L2 packets (L2 encapsulation is where
the inner packet has a valid Ethernet header, while L3 encapsulation
is where the inner packet doesn't have an Ethernet header).
In addition, the parameter to the encap action is a list of rte items,
which results in two extra translations: from the application to the
action and from the action to the NIC. This has a negative impact
on insertion performance.

Looking forward, there will be a need to support many more tunnel
encapsulations, for example MPLSoGRE and MPLSoUDP.
Adding each new encapsulation would result in code duplication:
for example, the code for handling NVGRE and VXLAN is exactly the same,
and each new tunnel would have the same exact structure.

This patch introduces a raw encapsulation that can support both L2 and
L3 tunnel types. In addition, the new encapsulation commands use a raw
buffer in order to save the conversion time, both for the application
and the PMD.

In order to encapsulate an L3 tunnel type, both actions must be used in
the same rule: the decap to remove the L2 header of the original
packet, and then the encap command to encapsulate the packet with the
tunnel.
For L3 decap there is also a need to use both commands in the same
flow: first the decap command to remove the outer tunnel header, and
then encap to add the L2 header.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>

---
v6:
 * fix typo

---
 doc/guides/prog_guide/rte_flow.rst | 51 ++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_flow.c       |  2 ++
 lib/librte_ethdev/rte_flow.h       | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a5ec441..d060ff6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2076,6 +2076,57 @@ RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
 
 This action modifies the payload of matched flows.
 
+Action: ``RAW_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^
+
+Adds outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_encap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of adding layer 2 after
+decapsulation of a layer 3 tunnel (for example MPLSoGRE), or a complete tunnel
+definition starting from layer 2 up to the tunnel item itself. When applied to
+the original packet, the resulting packet must be a valid packet.
+
+.. _table_rte_flow_action_raw_encap:
+
+.. table:: RAW_ENCAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Encapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``preserve``   | Bit-mask of data to preserve on output |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data and preserve              |
+   +----------------+----------------------------------------+
+
+Action: ``RAW_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Remove outer header whose template is provided in its data buffer,
+as defined in the ``rte_flow_action_raw_decap`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid header, either holding layer 2 data in case of removing layer 2
+before encapsulation of a layer 3 tunnel (for example MPLSoGRE) or complete
+tunnel definition starting from layer 2 and moving to the tunnel item itself.
+When applied to the original packet the resulting packet must be a
+valid packet.
+
+.. _table_rte_flow_action_raw_decap:
+
+.. table:: RAW_DECAP
+
+   +----------------+----------------------------------------+
+   | Field          | Value                                  |
+   +================+========================================+
+   | ``data``       | Decapsulation data                     |
+   +----------------+----------------------------------------+
+   | ``size``       | Size of data                           |
+   +----------------+----------------------------------------+
+
 Action: ``SET_IPV4_SRC``
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index bc9e719..1e5cd73 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -123,6 +123,8 @@ struct rte_flow_desc_data {
 	MK_FLOW_ACTION(VXLAN_DECAP, 0),
 	MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
 	MK_FLOW_ACTION(NVGRE_DECAP, 0),
+	MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
+	MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
 	MK_FLOW_ACTION(SET_IPV4_SRC,
 		       sizeof(struct rte_flow_action_set_ipv4)),
 	MK_FLOW_ACTION(SET_IPV4_DST,
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 68bbf57..4066483 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1508,6 +1508,20 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
 
 	/**
+	 * Add outer header whose template is provided in its data buffer.
+	 *
+	 * See struct rte_flow_action_raw_encap.
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
+
+	/**
+	 * Remove outer header whose template is provided in its data buffer.
+	 *
+	 * See struct rte_flow_action_raw_decap.
+	 */
+	RTE_FLOW_ACTION_TYPE_RAW_DECAP,
+
+	/**
 	 * Modify IPv4 source address in the outermost IPv4 header.
 	 *
 	 * If flow pattern does not define a valid RTE_FLOW_ITEM_TYPE_IPV4,
@@ -1946,6 +1960,51 @@ struct rte_flow_action_nvgre_encap {
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
  *
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ *
+ * Raw tunnel end-point encapsulation data definition.
+ *
+ * The data holds the headers definitions to be applied on the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right after RAW_DECAP (for decapsulating L3 tunnel type for
+ * example MPLSoGRE) the data will just hold layer 2 header.
+ *
+ * The preserve parameter holds which bits in the packet the PMD is not allowed
+ * to change, this parameter can also be NULL and then the PMD is allowed
+ * to update any field.
+ *
+ * size holds the number of bytes in @p data and @p preserve.
+ */
+struct rte_flow_action_raw_encap {
+	uint8_t *data; /**< Encapsulation data. */
+	uint8_t *preserve; /**< Bit-mask of @p data to preserve on output. */
+	size_t size; /**< Size of @p data and @p preserve. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ *
+ * Raw tunnel end-point decapsulation data definition.
+ *
+ * The data holds the headers definitions to be removed from the packet.
+ * The data must start with ETH header up to the tunnel item header itself.
+ * When used right before RAW_ENCAP (for encapsulating L3 tunnel type for
+ * example MPLSoGRE) the data will just hold layer 2 header.
+ *
+ * size holds the number of bytes in @p data.
+ */
+struct rte_flow_action_raw_decap {
+	uint8_t *data; /**< Decapsulation data. */
+	size_t size; /**< Size of @p data. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC
  * RTE_FLOW_ACTION_TYPE_SET_IPV4_DST
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
  2018-10-22 17:38           ` [PATCH v6 1/3] ethdev: add raw encapsulation action Ori Kam
@ 2018-10-22 17:38           ` Ori Kam
  2018-10-23  9:55             ` Ferruh Yigit
  2018-10-22 17:38           ` [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
  2018-10-23  9:56           ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ferruh Yigit
  3 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-22 17:38 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

MPLSoUDP is an example for L3 tunnel encapsulation.

L3 tunnel type is a tunnel that is missing the layer 2 header of the
inner packet.

Example for MPLSoUDP tunnel:
ETH / IPV4 / UDP / MPLS / IP / L4..L7

In order to encapsulate such a tunnel, the L2 header of the inner
packet must be removed and the remaining packet encapsulated with the
tunnel; this is done by applying two rte_flow commands, l2_decap
followed by mplsoudp_encap.
Both commands must appear in the same flow, and from the packet's point
of view both actions are applied at the same time (there is no point at
which the packet doesn't have an L2 header).

Decapsulating such a tunnel works the other way around: first decap the
outer tunnel header and then apply the new L2 header.
So the commands will be mplsoudp_decap / l2_encap.

Due to the complex encapsulation of the MPLSoUDP and L2 flow actions,
and based on the fact that testpmd does not allocate memory, this patch
adds a new command in testpmd to initialise global structures
containing the necessary information to build the outer layer of the
packet. These same global structures will then be used by the flow
commands in testpmd when the mplsoudp_encap, mplsoudp_decap, l2_encap,
and l2_decap actions are parsed; at this point, the conversion into
such actions becomes trivial.

The l2_encap and l2_decap actions can also be used for other L3 tunnel
types.
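Taken together, the sequence in a testpmd session might look like the following (a sketch assembled from the help strings in this patch; the port number, addresses, and pattern are illustrative):

```
testpmd> set mplsoudp_encap ip-version ipv4 label 4 udp-src 5000
         udp-dst 6635 ip-src 10.0.0.1 ip-dst 10.0.0.2
         eth-src 11:22:33:44:55:66 eth-dst 66:55:44:33:22:11
testpmd> flow create 0 egress pattern eth / ipv4 / end
         actions l2_decap / mplsoudp_encap / end
```

The `set` command fills the global structures described above; the `flow create` then converts them into the RAW_DECAP/RAW_ENCAP action pair.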

Signed-off-by: Ori Kam <orika@mellanox.com>

---
v6:
 * fix compilation error.

---
 app/test-pmd/cmdline.c                      | 410 ++++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 380 ++++++++++++++++++++++++++
 app/test-pmd/testpmd.h                      |  40 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 175 ++++++++++++
 4 files changed, 1005 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3b469ac..e807dbb 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15282,6 +15282,408 @@ static void cmd_set_nvgre_parsed(void *parsed_result,
 	},
 };
 
+/** Set L2 encapsulation details */
+struct cmd_set_l2_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_encap;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap, "l2_encap");
+cmdline_parse_token_string_t cmd_set_l2_encap_l2_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, l2_encap,
+				 "l2_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "ip-version");
+cmdline_parse_token_string_t cmd_set_l2_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, ip_version,
+				 "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_l2_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "vlan-tci");
+cmdline_parse_token_num_t cmd_set_l2_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_l2_encap_result, tci, UINT16);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_src);
+cmdline_parse_token_string_t cmd_set_l2_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
+				 "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_l2_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_l2_encap_result, eth_dst);
+
+static void cmd_set_l2_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_encap_result *res = parsed_result;
+
+	if (strcmp(res->l2_encap, "l2_encap") == 0)
+		l2_encap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_encap, "l2_encap-with-vlan") == 0)
+		l2_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		l2_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		l2_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	if (l2_encap_conf.select_vlan)
+		l2_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(l2_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(l2_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_l2_encap = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap ip-version ipv4|ipv6"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_encap_with_vlan = {
+	.f = cmd_set_l2_encap_parsed,
+	.data = NULL,
+	.help_str = "set l2_encap-with-vlan ip-version ipv4|ipv6"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_l2_encap_set,
+		(void *)&cmd_set_l2_encap_l2_encap_with_vlan,
+		(void *)&cmd_set_l2_encap_ip_version,
+		(void *)&cmd_set_l2_encap_ip_version_value,
+		(void *)&cmd_set_l2_encap_vlan,
+		(void *)&cmd_set_l2_encap_vlan_value,
+		(void *)&cmd_set_l2_encap_eth_src,
+		(void *)&cmd_set_l2_encap_eth_src_value,
+		(void *)&cmd_set_l2_encap_eth_dst,
+		(void *)&cmd_set_l2_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set L2 decapsulation details */
+struct cmd_set_l2_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t l2_decap;
+	cmdline_fixed_string_t pos_token;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_l2_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, set, "set");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap");
+cmdline_parse_token_string_t cmd_set_l2_decap_l2_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_l2_decap_result, l2_decap,
+				 "l2_decap-with-vlan");
+
+static void cmd_set_l2_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_l2_decap_result *res = parsed_result;
+
+	if (strcmp(res->l2_decap, "l2_decap") == 0)
+		l2_decap_conf.select_vlan = 0;
+	else if (strcmp(res->l2_decap, "l2_decap-with-vlan") == 0)
+		l2_decap_conf.select_vlan = 1;
+}
+
+cmdline_parse_inst_t cmd_set_l2_decap = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_l2_decap_with_vlan = {
+	.f = cmd_set_l2_decap_parsed,
+	.data = NULL,
+	.help_str = "set l2_decap-with-vlan",
+	.tokens = {
+		(void *)&cmd_set_l2_decap_set,
+		(void *)&cmd_set_l2_decap_l2_decap_with_vlan,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP encapsulation details */
+struct cmd_set_mplsoudp_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	uint16_t udp_src;
+	uint16_t udp_dst;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result, mplsoudp,
+				 "mplsoudp_encap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 mplsoudp, "mplsoudp_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-src");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_src_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_src,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "udp-dst");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_dst_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_dst,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsoudp_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsoudp_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsoudp_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsoudp_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsoudp_label;
+		uint8_t label[3];
+	} id = {
+		.mplsoudp_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsoudp, "mplsoudp_encap") == 0)
+		mplsoudp_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_encap-with-vlan") == 0)
+		mplsoudp_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsoudp_encap_conf.label, &id.label[1], 3);
+	mplsoudp_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src);
+	mplsoudp_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst);
+	if (mplsoudp_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsoudp_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsoudp_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsoudp_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsoudp_encap_conf.ipv6_dst);
+	}
+	if (mplsoudp_encap_conf.select_vlan)
+		mplsoudp_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsoudp_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsoudp_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap ip-version ipv4|ipv6 label <label>"
+		" udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src>"
+		" ip-dst <ip-dst> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_encap_with_vlan = {
+	.f = cmd_set_mplsoudp_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> udp-src <udp-src> udp-dst <udp-dst>"
+		" ip-src <ip-src> ip-dst <ip-dst> vlan-tci <vlan-tci>"
+		" eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_encap_set,
+		(void *)&cmd_set_mplsoudp_encap_mplsoudp_encap_with_vlan,
+		(void *)&cmd_set_mplsoudp_encap_ip_version,
+		(void *)&cmd_set_mplsoudp_encap_ip_version_value,
+		(void *)&cmd_set_mplsoudp_encap_label,
+		(void *)&cmd_set_mplsoudp_encap_label_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_src,
+		(void *)&cmd_set_mplsoudp_encap_udp_src_value,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst,
+		(void *)&cmd_set_mplsoudp_encap_udp_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_src,
+		(void *)&cmd_set_mplsoudp_encap_ip_src_value,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst,
+		(void *)&cmd_set_mplsoudp_encap_ip_dst_value,
+		(void *)&cmd_set_mplsoudp_encap_vlan,
+		(void *)&cmd_set_mplsoudp_encap_vlan_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_src,
+		(void *)&cmd_set_mplsoudp_encap_eth_src_value,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst,
+		(void *)&cmd_set_mplsoudp_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoUDP decapsulation details */
+struct cmd_set_mplsoudp_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsoudp;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result, mplsoudp,
+				 "mplsoudp_decap");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 mplsoudp, "mplsoudp_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsoudp_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsoudp_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsoudp_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsoudp, "mplsoudp_decap") == 0)
+		mplsoudp_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsoudp, "mplsoudp_decap-with-vlan") == 0)
+		mplsoudp_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsoudp_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
+	.f = cmd_set_mplsoudp_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsoudp_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsoudp_decap_set,
+		(void *)&cmd_set_mplsoudp_decap_mplsoudp_decap_with_vlan,
+		(void *)&cmd_set_mplsoudp_decap_ip_version,
+		(void *)&cmd_set_mplsoudp_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /* Strict link priority scheduling mode setting */
 static void
 cmd_strict_link_prio_parsed(
@@ -17911,6 +18313,14 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre,
 	(cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
+	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_ddp_add,
 	(cmdline_parse_inst_t *)&cmd_ddp_del,
 	(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4a27642..b876918 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -243,6 +243,10 @@ enum index {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_SRC_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
@@ -308,6 +312,22 @@ struct action_nvgre_encap_data {
 	struct rte_flow_item_nvgre item_nvgre;
 };
 
+/** Maximum data size in struct rte_flow_action_raw_encap. */
+#define ACTION_RAW_ENCAP_MAX_DATA 128
+
+/** Storage for struct rte_flow_action_raw_encap including external data. */
+struct action_raw_encap_data {
+	struct rte_flow_action_raw_encap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+	uint8_t preserve[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
+/** Storage for struct rte_flow_action_raw_decap including external data. */
+struct action_raw_decap_data {
+	struct rte_flow_action_raw_decap conf;
+	uint8_t data[ACTION_RAW_ENCAP_MAX_DATA];
+};
+
 /** Maximum number of subsequent tokens and arguments on the stack. */
 #define CTX_STACK_SIZE 16
 
@@ -829,6 +849,10 @@ struct parse_action_priv {
 	ACTION_VXLAN_DECAP,
 	ACTION_NVGRE_ENCAP,
 	ACTION_NVGRE_DECAP,
+	ACTION_L2_ENCAP,
+	ACTION_L2_DECAP,
+	ACTION_MPLSOUDP_ENCAP,
+	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
 	ACTION_SET_IPV4_DST,
 	ACTION_SET_IPV6_SRC,
@@ -1008,6 +1032,18 @@ static int parse_vc_action_vxlan_encap(struct context *, const struct token *,
 static int parse_vc_action_nvgre_encap(struct context *, const struct token *,
 				       const char *, unsigned int, void *,
 				       unsigned int);
+static int parse_vc_action_l2_encap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_l2_decap(struct context *, const struct token *,
+				    const char *, unsigned int, void *,
+				    unsigned int);
+static int parse_vc_action_mplsoudp_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsoudp_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_destroy(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2526,6 +2562,42 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
+	[ACTION_L2_ENCAP] = {
+		.name = "l2_encap",
+		.help = "l2 encapsulation, uses configuration set by"
+			" \"set l2_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_encap,
+	},
+	[ACTION_L2_DECAP] = {
+		.name = "l2_decap",
+		.help = "l2 decap, uses configuration set by"
+			" \"set l2_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_l2_decap,
+	},
+	[ACTION_MPLSOUDP_DECAP] = {
+		.name = "mplsoudp_decap",
+		.help = "mplsoudp decapsulation, uses configuration set by"
+			" \"set mplsoudp_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_decap,
+	},
 	[ACTION_SET_IPV4_SRC] = {
 		.name = "set_ipv4_src",
 		.help = "Set a new IPv4 source address in the outermost"
@@ -3391,6 +3463,314 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+/** Parse l2 encap action. */
+static int
+parse_vc_action_l2_encap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+	};
+	header = action_encap_data->data;
+	if (l2_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (l2_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       l2_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       l2_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_encap_conf.select_vlan) {
+		if (l2_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse l2 decap action. */
+static int
+parse_vc_action_l2_decap(struct context *ctx, const struct token *token,
+			 const char *str, unsigned int len,
+			 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (l2_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (l2_decap_conf.select_vlan) {
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP encap action. */
+static int
+parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsoudp_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsoudp_encap_conf.ipv4_src,
+			.dst_addr = mplsoudp_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.src_port = mplsoudp_encap_conf.udp_src,
+			.dst_port = mplsoudp_encap_conf.udp_dst,
+		},
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsoudp_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_encap_conf.select_vlan) {
+		if (mplsoudp_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsoudp_encap_conf.ipv6_src,
+		       sizeof(mplsoudp_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsoudp_encap_conf.ipv6_dst,
+		       sizeof(mplsoudp_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memcpy(mpls.label_tc_s, mplsoudp_encap_conf.label,
+	       RTE_DIM(mplsoudp_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOUDP decap action. */
+static int
+parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_UDP,
+		},
+	};
+	struct rte_flow_item_udp udp = {
+		.hdr = {
+			.dst_port = rte_cpu_to_be_16(6635),
+		},
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsoudp_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsoudp_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsoudp_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsoudp_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsoudp_encap_conf.select_vlan) {
+		if (mplsoudp_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsoudp_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &udp, sizeof(udp));
+	header += sizeof(udp);
+	memset(&mpls, 0, sizeof(mpls));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse tokens for destroy command. */
 static int
 parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 121b756..12daee5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -507,6 +507,46 @@ struct nvgre_encap_conf {
 };
 struct nvgre_encap_conf nvgre_encap_conf;
 
+/* L2 encap parameters. */
+struct l2_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct l2_encap_conf l2_encap_conf;
+
+/* L2 decap parameters. */
+struct l2_decap_conf {
+	uint32_t select_vlan:1;
+};
+struct l2_decap_conf l2_decap_conf;
+
+/* MPLSoUDP encap parameters. */
+struct mplsoudp_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be16_t udp_src;
+	rte_be16_t udp_dst;
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsoudp_encap_conf mplsoudp_encap_conf;
+
+/* MPLSoUDP decap parameters. */
+struct mplsoudp_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsoudp_decap_conf mplsoudp_decap_conf;
+
 static inline unsigned int
 lcore_num(void)
 {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index ca060e1..fb26315 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1580,6 +1580,63 @@ flow rule using the action nvgre_encap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config L2 Encap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be used when encapsulating a packet::
+
+ set l2_encap ip-version (ipv4|ipv6) eth-src (eth-src) eth-dst (eth-dst)
+ set l2_encap-with-vlan ip-version (ipv4|ipv6) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action l2_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config L2 Decap
+~~~~~~~~~~~~~~~
+
+Configure the L2 header to be removed when decapsulating a packet::
+
+ set l2_decap ip-version (ipv4|ipv6)
+ set l2_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action l2_decap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoUDP Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside an MPLSoUDP tunnel::
+
+ set mplsoudp_encap ip-version (ipv4|ipv6) label (label) udp-src (udp-src) \
+        udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        eth-src (eth-src) eth-dst (eth-dst)
+ set mplsoudp_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) \
+        vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_encap will use the last configuration set.
+To have a different encapsulation header, one of those commands must be called
+before the flow rule creation.
+
+Config MPLSoUDP Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to decapsulate an MPLSoUDP packet::
+
+ set mplsoudp_decap ip-version (ipv4|ipv6)
+ set mplsoudp_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any following
+flow rule using the action mplsoudp_decap will use the last configuration set.
+To have a different decapsulation header, one of those commands must be called
+before the flow rule creation.
+
 Port Functions
 --------------
 
@@ -3771,6 +3828,18 @@ This section lists supported actions and their attributes, if any.
 - ``nvgre_decap``: Performs a decapsulation action by stripping all headers of
   the NVGRE tunnel network overlay from the matched flow.
 
+- ``l2_encap``: Performs an L2 encapsulation; the L2 configuration
+  is done through `Config L2 Encap`_.
+
+- ``l2_decap``: Performs an L2 decapsulation; the L2 configuration
+  is done through `Config L2 Decap`_.
+
+- ``mplsoudp_encap``: Performs an MPLSoUDP encapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Encap outer layers`_.
+
+- ``mplsoudp_decap``: Performs an MPLSoUDP decapsulation; the outer layer
+  configuration is done through `Config MPLSoUDP Decap outer layers`_.
+
 - ``set_ipv4_src``: Set a new IPv4 source address in the outermost IPv4 header.
 
   - ``ipv4_addr``: New IPv4 source address.
@@ -4130,6 +4199,112 @@ IPv6 NVGRE outer header::
  testpmd> flow create 0 ingress pattern end actions nvgre_encap /
         queue index 0 / end
 
+Sample L2 encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 encapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_encap ip-version ipv4
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_encap-with-vlan ip-version ipv4 vlan-tci 34
+         eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+Sample L2 decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+L2 decapsulation has default values pre-configured in the testpmd
+source code; they can be changed by using the following commands.
+
+L2 header::
+
+ testpmd> set l2_decap
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+        queue index 0 / end
+
+L2 with VLAN header::
+
+ testpmd> set l2_decap-with-vlan
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap / mplsoudp_encap /
+         queue index 0 / end
+
+Sample MPLSoUDP encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoUDP encapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv4 label 4 udp-src 5 udp-dst 10
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv4 label 4 udp-src 5
+        udp-dst 10 ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_encap ip-version ipv6 label 4 udp-src 5 udp-dst 10
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_encap-with-vlan ip-version ipv6 label 4 udp-src 5
+        udp-dst 10 ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsoudp_encap / end
+
+Sample MPLSoUDP decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MPLSoUDP decapsulation outer layer has default values pre-configured in the
+testpmd source code; they can be changed by using the following commands.
+
+IPv4 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end actions
+        mplsoudp_decap / l2_encap / end
+
+IPv4 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP outer header::
+
+ testpmd> set mplsoudp_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
+IPv6 MPLSoUDP with VLAN outer header::
+
+ testpmd> set mplsoudp_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / udp / mpls / end
+        actions mplsoudp_decap / l2_encap / end
+
 BPF Functions
 --------------
 
-- 
1.8.3.1


* [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
  2018-10-22 17:38           ` [PATCH v6 1/3] ethdev: add raw encapsulation action Ori Kam
  2018-10-22 17:38           ` [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
@ 2018-10-22 17:38           ` Ori Kam
  2018-10-23  9:56             ` Ferruh Yigit
  2018-10-23  9:56           ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ferruh Yigit
  3 siblings, 1 reply; 53+ messages in thread
From: Ori Kam @ 2018-10-22 17:38 UTC (permalink / raw)
  To: wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	ferruh.yigit, stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, orika, shahafs

Example for MPLSoGRE tunnel:
ETH / IPV4 / GRE / MPLS / IP / L4..L7

In order to encapsulate such a tunnel, the L2 header of the inner packet
must be removed and the remaining packet encapsulated; this is done by
applying two rte_flow actions, l2_decap followed by mplsogre_encap.
Both actions must appear in the same flow, and from the packet's point of
view both are applied at the same time (there is no point at which the
packet doesn't have an L2 header).

Decapsulating such a tunnel works the other way around: first the outer
tunnel header is removed and then the new L2 header is applied.
So the actions will be mplsogre_decap / l2_encap.

Due to the complex encapsulation of the MPLSoGRE flow action, and
because testpmd does not allocate memory, this patch adds new commands
in testpmd to initialise global structures containing the necessary
information to build the outer layer of the packet. These global
structures are then used by the testpmd flow commands when the
mplsogre_encap and mplsogre_decap actions are parsed; at that point
the conversion into such actions becomes trivial.

Signed-off-by: Ori Kam <orika@mellanox.com>

---
v6:
 * fix compilation issue.
---
 app/test-pmd/cmdline.c                      | 227 ++++++++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 239 ++++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                      |  22 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 103 ++++++++++++
 4 files changed, 580 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e807dbb..6e14345 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -15436,6 +15436,229 @@ static void cmd_set_l2_decap_parsed(void *parsed_result,
 	},
 };
 
+/** Set MPLSoGRE encapsulation details */
+struct cmd_set_mplsogre_encap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+	uint32_t label;
+	cmdline_ipaddr_t ip_src;
+	cmdline_ipaddr_t ip_dst;
+	uint16_t tci;
+	struct ether_addr eth_src;
+	struct ether_addr eth_dst;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result, mplsogre,
+				 "mplsogre_encap");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_mplsogre_encap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 mplsogre, "mplsogre_encap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 ip_version, "ipv4#ipv6");
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_label =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "label");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_label_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, label,
+			      UINT32);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-src");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_src_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "ip-dst");
+cmdline_parse_token_ipaddr_t cmd_set_mplsogre_encap_ip_dst_value =
+	TOKEN_IPADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result, ip_dst);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "vlan-tci");
+cmdline_parse_token_num_t cmd_set_mplsogre_encap_vlan_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, tci,
+			      UINT16);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_src =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-src");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_src_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_src);
+cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_dst =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				 pos_token, "eth-dst");
+cmdline_parse_token_etheraddr_t cmd_set_mplsogre_encap_eth_dst_value =
+	TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_mplsogre_encap_result,
+				    eth_dst);
+
+static void cmd_set_mplsogre_encap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_encap_result *res = parsed_result;
+	union {
+		uint32_t mplsogre_label;
+		uint8_t label[3];
+	} id = {
+		.mplsogre_label =
+			rte_cpu_to_be_32(res->label) & RTE_BE32(0x00ffffff),
+	};
+
+	if (strcmp(res->mplsogre, "mplsogre_encap") == 0)
+		mplsogre_encap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_encap-with-vlan") == 0)
+		mplsogre_encap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_encap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_encap_conf.select_ipv4 = 0;
+	else
+		return;
+	rte_memcpy(mplsogre_encap_conf.label, &id.label[1], 3);
+	if (mplsogre_encap_conf.select_ipv4) {
+		IPV4_ADDR_TO_UINT(res->ip_src, mplsogre_encap_conf.ipv4_src);
+		IPV4_ADDR_TO_UINT(res->ip_dst, mplsogre_encap_conf.ipv4_dst);
+	} else {
+		IPV6_ADDR_TO_ARRAY(res->ip_src, mplsogre_encap_conf.ipv6_src);
+		IPV6_ADDR_TO_ARRAY(res->ip_dst, mplsogre_encap_conf.ipv6_dst);
+	}
+	if (mplsogre_encap_conf.select_vlan)
+		mplsogre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci);
+	rte_memcpy(mplsogre_encap_conf.eth_src, res->eth_src.addr_bytes,
+		   ETHER_ADDR_LEN);
+	rte_memcpy(mplsogre_encap_conf.eth_dst, res->eth_dst.addr_bytes,
+		   ETHER_ADDR_LEN);
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap ip-version ipv4|ipv6 label <label>"
+		" ip-src <ip-src> ip-dst <ip-dst> eth-src <eth-src>"
+		" eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_encap_with_vlan = {
+	.f = cmd_set_mplsogre_encap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_encap-with-vlan ip-version ipv4|ipv6"
+		" label <label> ip-src <ip-src> ip-dst <ip-dst>"
+		" vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_encap_set,
+		(void *)&cmd_set_mplsogre_encap_mplsogre_encap_with_vlan,
+		(void *)&cmd_set_mplsogre_encap_ip_version,
+		(void *)&cmd_set_mplsogre_encap_ip_version_value,
+		(void *)&cmd_set_mplsogre_encap_label,
+		(void *)&cmd_set_mplsogre_encap_label_value,
+		(void *)&cmd_set_mplsogre_encap_ip_src,
+		(void *)&cmd_set_mplsogre_encap_ip_src_value,
+		(void *)&cmd_set_mplsogre_encap_ip_dst,
+		(void *)&cmd_set_mplsogre_encap_ip_dst_value,
+		(void *)&cmd_set_mplsogre_encap_vlan,
+		(void *)&cmd_set_mplsogre_encap_vlan_value,
+		(void *)&cmd_set_mplsogre_encap_eth_src,
+		(void *)&cmd_set_mplsogre_encap_eth_src_value,
+		(void *)&cmd_set_mplsogre_encap_eth_dst,
+		(void *)&cmd_set_mplsogre_encap_eth_dst_value,
+		NULL,
+	},
+};
+
+/** Set MPLSoGRE decapsulation details */
+struct cmd_set_mplsogre_decap_result {
+	cmdline_fixed_string_t set;
+	cmdline_fixed_string_t mplsogre;
+	cmdline_fixed_string_t pos_token;
+	cmdline_fixed_string_t ip_version;
+	uint32_t vlan_present:1;
+};
+
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_set =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, set,
+				 "set");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result, mplsogre,
+				 "mplsogre_decap");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_mplsogre_decap_with_vlan =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 mplsogre, "mplsogre_decap-with-vlan");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 pos_token, "ip-version");
+cmdline_parse_token_string_t cmd_set_mplsogre_decap_ip_version_value =
+	TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_decap_result,
+				 ip_version, "ipv4#ipv6");
+
+static void cmd_set_mplsogre_decap_parsed(void *parsed_result,
+	__attribute__((unused)) struct cmdline *cl,
+	__attribute__((unused)) void *data)
+{
+	struct cmd_set_mplsogre_decap_result *res = parsed_result;
+
+	if (strcmp(res->mplsogre, "mplsogre_decap") == 0)
+		mplsogre_decap_conf.select_vlan = 0;
+	else if (strcmp(res->mplsogre, "mplsogre_decap-with-vlan") == 0)
+		mplsogre_decap_conf.select_vlan = 1;
+	if (strcmp(res->ip_version, "ipv4") == 0)
+		mplsogre_decap_conf.select_ipv4 = 1;
+	else if (strcmp(res->ip_version, "ipv6") == 0)
+		mplsogre_decap_conf.select_ipv4 = 0;
+}
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
+cmdline_parse_inst_t cmd_set_mplsogre_decap_with_vlan = {
+	.f = cmd_set_mplsogre_decap_parsed,
+	.data = NULL,
+	.help_str = "set mplsogre_decap-with-vlan ip-version ipv4|ipv6",
+	.tokens = {
+		(void *)&cmd_set_mplsogre_decap_set,
+		(void *)&cmd_set_mplsogre_decap_mplsogre_decap_with_vlan,
+		(void *)&cmd_set_mplsogre_decap_ip_version,
+		(void *)&cmd_set_mplsogre_decap_ip_version_value,
+		NULL,
+	},
+};
+
 /** Set MPLSoUDP encapsulation details */
 struct cmd_set_mplsoudp_encap_result {
 	cmdline_fixed_string_t set;
@@ -18317,6 +18540,10 @@ struct cmd_config_per_queue_tx_offload_result {
 	(cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap,
 	(cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_encap_with_vlan,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap,
+	(cmdline_parse_inst_t *)&cmd_set_mplsogre_decap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
 	(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b876918..3d7d040 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -245,6 +245,8 @@ enum index {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -851,6 +853,8 @@ struct parse_action_priv {
 	ACTION_NVGRE_DECAP,
 	ACTION_L2_ENCAP,
 	ACTION_L2_DECAP,
+	ACTION_MPLSOGRE_ENCAP,
+	ACTION_MPLSOGRE_DECAP,
 	ACTION_MPLSOUDP_ENCAP,
 	ACTION_MPLSOUDP_DECAP,
 	ACTION_SET_IPV4_SRC,
@@ -1038,6 +1042,12 @@ static int parse_vc_action_l2_encap(struct context *, const struct token *,
 static int parse_vc_action_l2_decap(struct context *, const struct token *,
 				    const char *, unsigned int, void *,
 				    unsigned int);
+static int parse_vc_action_mplsogre_encap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
+static int parse_vc_action_mplsogre_decap(struct context *,
+					  const struct token *, const char *,
+					  unsigned int, void *, unsigned int);
 static int parse_vc_action_mplsoudp_encap(struct context *,
 					  const struct token *, const char *,
 					  unsigned int, void *, unsigned int);
@@ -2562,19 +2572,10 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc,
 	},
-	[ACTION_MPLSOUDP_ENCAP] = {
-		.name = "mplsoudp_encap",
-		.help = "mplsoudp encapsulation, uses configuration set by"
-			" \"set mplsoudp_encap\"",
-		.priv = PRIV_ACTION(RAW_ENCAP,
-				    sizeof(struct action_raw_encap_data)),
-		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
-		.call = parse_vc_action_mplsoudp_encap,
-	},
 	[ACTION_L2_ENCAP] = {
 		.name = "l2_encap",
-		.help = "l2 encapsulation, uses configuration set by"
-			" \"set l2_encp\"",
+		.help = "l2 encap, uses configuration set by"
+			" \"set l2_encap\"",
 		.priv = PRIV_ACTION(RAW_ENCAP,
 				    sizeof(struct action_raw_encap_data)),
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
@@ -2589,6 +2590,33 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
 		.call = parse_vc_action_l2_decap,
 	},
+	[ACTION_MPLSOGRE_ENCAP] = {
+		.name = "mplsogre_encap",
+		.help = "mplsogre encapsulation, uses configuration set by"
+			" \"set mplsogre_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_encap,
+	},
+	[ACTION_MPLSOGRE_DECAP] = {
+		.name = "mplsogre_decap",
+		.help = "mplsogre decapsulation, uses configuration set by"
+			" \"set mplsogre_decap\"",
+		.priv = PRIV_ACTION(RAW_DECAP,
+				    sizeof(struct action_raw_decap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsogre_decap,
+	},
+	[ACTION_MPLSOUDP_ENCAP] = {
+		.name = "mplsoudp_encap",
+		.help = "mplsoudp encapsulation, uses configuration set by"
+			" \"set mplsoudp_encap\"",
+		.priv = PRIV_ACTION(RAW_ENCAP,
+				    sizeof(struct action_raw_encap_data)),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc_action_mplsoudp_encap,
+	},
 	[ACTION_MPLSOUDP_DECAP] = {
 		.name = "mplsoudp_decap",
 		.help = "mplsoudp decapsulation, uses configuration set by"
@@ -3579,6 +3607,195 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *,
 	return ret;
 }
 
+#define ETHER_TYPE_MPLS_UNICAST 0x8847
+
+/** Parse MPLSOGRE encap action. */
+static int
+parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_encap_data *action_encap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {
+		.tci = mplsogre_encap_conf.vlan_tci,
+		.inner_type = 0,
+	};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.src_addr = mplsogre_encap_conf.ipv4_src,
+			.dst_addr = mplsogre_encap_conf.ipv4_dst,
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_encap_data = ctx->object;
+	*action_encap_data = (struct action_raw_encap_data) {
+		.conf = (struct rte_flow_action_raw_encap){
+			.data = action_encap_data->data,
+		},
+		.data = {},
+		.preserve = {},
+	};
+	header = action_encap_data->data;
+	if (mplsogre_encap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(&ipv6.hdr.src_addr,
+		       &mplsogre_encap_conf.ipv6_src,
+		       sizeof(mplsogre_encap_conf.ipv6_src));
+		memcpy(&ipv6.hdr.dst_addr,
+		       &mplsogre_encap_conf.ipv6_dst,
+		       sizeof(mplsogre_encap_conf.ipv6_dst));
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memcpy(mpls.label_tc_s, mplsogre_encap_conf.label,
+	       RTE_DIM(mplsogre_encap_conf.label));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_encap_data->conf.size = header -
+		action_encap_data->data;
+	action->conf = &action_encap_data->conf;
+	return ret;
+}
+
+/** Parse MPLSOGRE decap action. */
+static int
+parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token,
+			       const char *str, unsigned int len,
+			       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	struct rte_flow_action *action;
+	struct action_raw_decap_data *action_decap_data;
+	struct rte_flow_item_eth eth = { .type = 0, };
+	struct rte_flow_item_vlan vlan = {.tci = 0};
+	struct rte_flow_item_ipv4 ipv4 = {
+		.hdr =  {
+			.next_proto_id = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_ipv6 ipv6 = {
+		.hdr =  {
+			.proto = IPPROTO_GRE,
+		},
+	};
+	struct rte_flow_item_gre gre = {
+		.protocol = rte_cpu_to_be_16(ETHER_TYPE_MPLS_UNICAST),
+	};
+	struct rte_flow_item_mpls mpls;
+	uint8_t *header;
+	int ret;
+
+	ret = parse_vc(ctx, token, str, len, buf, size);
+	if (ret < 0)
+		return ret;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return ret;
+	if (!out->args.vc.actions_n)
+		return -1;
+	action = &out->args.vc.actions[out->args.vc.actions_n - 1];
+	/* Point to selected object. */
+	ctx->object = out->args.vc.data;
+	ctx->objmask = NULL;
+	/* Copy the headers to the buffer. */
+	action_decap_data = ctx->object;
+	*action_decap_data = (struct action_raw_decap_data) {
+		.conf = (struct rte_flow_action_raw_decap){
+			.data = action_decap_data->data,
+		},
+		.data = {},
+	};
+	header = action_decap_data->data;
+	if (mplsogre_decap_conf.select_vlan)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
+	else if (mplsogre_encap_conf.select_ipv4)
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+	else
+		eth.type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+	memcpy(eth.dst.addr_bytes,
+	       mplsogre_encap_conf.eth_dst, ETHER_ADDR_LEN);
+	memcpy(eth.src.addr_bytes,
+	       mplsogre_encap_conf.eth_src, ETHER_ADDR_LEN);
+	memcpy(header, &eth, sizeof(eth));
+	header += sizeof(eth);
+	if (mplsogre_encap_conf.select_vlan) {
+		if (mplsogre_encap_conf.select_ipv4)
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+		else
+			vlan.inner_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6);
+		memcpy(header, &vlan, sizeof(vlan));
+		header += sizeof(vlan);
+	}
+	if (mplsogre_encap_conf.select_ipv4) {
+		memcpy(header, &ipv4, sizeof(ipv4));
+		header += sizeof(ipv4);
+	} else {
+		memcpy(header, &ipv6, sizeof(ipv6));
+		header += sizeof(ipv6);
+	}
+	memcpy(header, &gre, sizeof(gre));
+	header += sizeof(gre);
+	memset(&mpls, 0, sizeof(mpls));
+	memcpy(header, &mpls, sizeof(mpls));
+	header += sizeof(mpls);
+	action_decap_data->conf.size = header -
+		action_decap_data->data;
+	action->conf = &action_decap_data->conf;
+	return ret;
+}
+
 /** Parse MPLSOUDP encap action. */
 static int
 parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 12daee5..0738105 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -523,6 +523,28 @@ struct l2_decap_conf {
 };
 struct l2_decap_conf l2_decap_conf;
 
+/* MPLSoGRE encap parameters. */
+struct mplsogre_encap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+	uint8_t label[3];
+	rte_be32_t ipv4_src;
+	rte_be32_t ipv4_dst;
+	uint8_t ipv6_src[16];
+	uint8_t ipv6_dst[16];
+	rte_be16_t vlan_tci;
+	uint8_t eth_src[ETHER_ADDR_LEN];
+	uint8_t eth_dst[ETHER_ADDR_LEN];
+};
+struct mplsogre_encap_conf mplsogre_encap_conf;
+
+/* MPLSoGRE decap parameters. */
+struct mplsogre_decap_conf {
+	uint32_t select_ipv4:1;
+	uint32_t select_vlan:1;
+};
+struct mplsogre_decap_conf mplsogre_decap_conf;
+
 /* MPLSoUDP encap parameters. */
 struct mplsoudp_encap_conf {
 	uint32_t select_ipv4:1;
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index fb26315..8d60bf0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1607,6 +1607,35 @@ flow rule using the action l2_decap will use the last configuration set.
 To have a different encapsulation header, one of those commands must be called
 before the flow rule creation.
 
+Config MPLSoGRE Encap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to encapsulate a packet inside a MPLSoGRE tunnel::
+
+ set mplsogre_encap ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst (eth-dst)
+ set mplsogre_encap-with-vlan ip-version (ipv4|ipv6) label (label) \
+        ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \
+        eth-src (eth-src) eth-dst (eth-dst)
+
+Those commands will set an internal configuration inside testpmd; any
+following flow rule using the action mplsogre_encap will use the last
+configuration set. To have a different encapsulation header, one of those
+commands must be called before the flow rule creation.
+
+Config MPLSoGRE Decap outer layers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the outer layer to decapsulate a MPLSoGRE packet::
+
+ set mplsogre_decap ip-version (ipv4|ipv6)
+ set mplsogre_decap-with-vlan ip-version (ipv4|ipv6)
+
+Those commands will set an internal configuration inside testpmd; any
+following flow rule using the action mplsogre_decap will use the last
+configuration set. To have a different decapsulation header, one of those
+commands must be called before the flow rule creation.
+
 Config MPLSoUDP Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3834,6 +3863,12 @@ This section lists supported actions and their attributes, if any.
 - ``l2_decap``: Performs a L2 decapsulation, L2 configuration
   is done through `Config L2 Decap`_.
 
+- ``mplsogre_encap``: Performs a MPLSoGRE encapsulation, outer layer
+  configuration is done through `Config MPLSoGRE Encap outer layers`_.
+
+- ``mplsogre_decap``: Performs a MPLSoGRE decapsulation, outer layer
+  configuration is done through `Config MPLSoGRE Decap outer layers`_.
+
 - ``mplsoudp_encap``: Performs a MPLSoUDP encapsulation, outer layer
   configuration is done through `Config MPLSoUDP Encap outer layers`_.
 
@@ -4237,6 +4272,74 @@ L2 with VXLAN header::
  testpmd> flow create 0 egress pattern eth / end actions l2_encap / mplsoudp_encap /
          queue index 0 / end
 
+Sample MPLSoGRE encapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The MPLSoGRE encapsulation outer layer has default values pre-configured
+in the testpmd source code; they can be changed by using the following
+commands.
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv4 label 4
+        ip-src 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_encap ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 eth-src 11:11:11:11:11:11
+        eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_encap-with-vlan ip-version ipv6 label 4
+        ip-src ::1 ip-dst ::2222 vlan-tci 34
+        eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22
+ testpmd> flow create 0 egress pattern eth / end actions l2_decap /
+        mplsogre_encap / end
+
+Sample MPLSoGRE decapsulation rule
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The MPLSoGRE decapsulation outer layer has default values pre-configured
+in the testpmd source code; they can be changed by using the following
+commands.
+
+IPv4 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / ipv4 / gre / mpls / end actions
+        mplsogre_decap / l2_encap / end
+
+IPv4 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv4
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv4 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE outer header::
+
+ testpmd> set mplsogre_decap ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
+IPv6 MPLSoGRE with VLAN outer header::
+
+ testpmd> set mplsogre_decap-with-vlan ip-version ipv6
+ testpmd> flow create 0 ingress pattern eth / vlan / ipv6 / gre / mpls / end
+        actions mplsogre_decap / l2_encap / end
+
 Sample MPLSoUDP encapsulation rule
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation
  2018-10-22 17:38           ` [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
@ 2018-10-23  9:55             ` Ferruh Yigit
  0 siblings, 0 replies; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-23  9:55 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/22/2018 6:38 PM, Ori Kam wrote:
> MPLSoUDP is an example of an L3 tunnel encapsulation.
> 
> An L3 tunnel is a tunnel in which the inner packet is missing its
> layer 2 header.
> 
> Example for MPLSoUDP tunnel:
> ETH / IPV4 / UDP / MPLS / IP / L4..L7
> 
> In order to encapsulate such a tunnel there is a need to remove the L2
> header of the inner packet and encapsulate the remaining tunnel; this is
> done by applying two rte_flow commands, l2_decap followed by
> mplsoudp_encap. Both commands must appear in the same flow, and from the
> packet's point of view both actions are applied at the same time (there
> is no point at which the packet doesn't have an L2 header).
> 
> Decapsulating such a tunnel works the other way: first decap the outer
> tunnel header and then apply the new L2 header, so the commands will be
> mplsoudp_decap / l2_encap.
> 
> Due to the complex encapsulation of the MPLSoUDP and L2 flow actions,
> and based on the fact that testpmd does not allocate memory, this patch
> adds a new command in testpmd to initialise global structures containing
> the necessary information to build the outer layer of the packet. These
> same global structures will then be used by the flow commands in testpmd
> when the actions mplsoudp_encap, mplsoudp_decap, l2_encap and l2_decap
> are parsed; at this point, the conversion into such actions becomes
> trivial.
> 
> The l2_encap and l2_decap actions can also be used for other L3 tunnel
> types.
> 
> Signed-off-by: Ori Kam <orika@mellanox.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


* Re: [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation
  2018-10-22 17:38           ` [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
@ 2018-10-23  9:56             ` Ferruh Yigit
  0 siblings, 0 replies; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-23  9:56 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/22/2018 6:38 PM, Ori Kam wrote:
> Example for MPLSoGRE tunnel:
> ETH / IPV4 / GRE / MPLS / IP / L4..L7
> 
> In order to encapsulate such a tunnel there is a need to remove the L2
> header of the inner packet and encapsulate the remaining tunnel; this is
> done by applying two rte_flow commands, l2_decap followed by
> mplsogre_encap. Both commands must appear in the same flow, and from the
> packet's point of view both actions are applied at the same time (there
> is no point at which the packet doesn't have an L2 header).
> 
> Decapsulating such a tunnel works the other way: first decap the outer
> tunnel header and then apply the new L2 header, so the commands will be
> mplsogre_decap / l2_encap.
> 
> Due to the complex encapsulation of the MPLSoGRE flow action, and based
> on the fact that testpmd does not allocate memory, this patch adds a new
> command in testpmd to initialise a global structure containing the
> necessary information to build the outer layer of the packet. This same
> global structure will then be used by the flow commands in testpmd when
> the actions mplsogre_encap and mplsogre_decap are parsed; at this point,
> the conversion into such actions becomes trivial.
> 
> Signed-off-by: Ori Kam <orika@mellanox.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
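
[Editor's note] The l2_decap / mplsogre_encap sequence described in the
commit message can be exercised with a command pair along these lines (the
addresses and label value are illustrative, taken from the patch's own
documentation samples)::

 testpmd> set mplsogre_encap ip-version ipv4 label 4
        ip-src 127.0.0.1 ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11
        eth-dst 22:22:22:22:22:22
 testpmd> flow create 0 egress pattern eth / end actions l2_decap /
        mplsogre_encap / end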


* Re: [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation
  2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
                             ` (2 preceding siblings ...)
  2018-10-22 17:38           ` [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
@ 2018-10-23  9:56           ` Ferruh Yigit
  3 siblings, 0 replies; 53+ messages in thread
From: Ferruh Yigit @ 2018-10-23  9:56 UTC (permalink / raw)
  To: Ori Kam, wenzhuo.lu, jingjing.wu, bernard.iremonger, arybchenko,
	stephen, adrien.mazarguil
  Cc: dev, dekelp, thomas, nelio.laranjeiro, yskoh, shahafs

On 10/22/2018 6:38 PM, Ori Kam wrote:
> This series implement the raw tunnel encapsulation actions
> and is based on rfc [1] "add generic L2/L3 tunnel encapsulation actions"
> 
> Currently the encap/decap actions only support encapsulation of VXLAN
> and NVGRE L2 packets (L2 encapsulation is where the inner packet has a
> valid Ethernet header, while L3 encapsulation is where the inner packet
> doesn't have one). In addition, the parameter to the encap action is a
> list of rte_flow items; this results in two extra translations, one from
> the application to the action and one from the action to the NIC, which
> has a negative impact on insertion performance.
>     
> Looking forward, there is going to be a need to support many more tunnel
> encapsulations, for example MPLSoGRE and MPLSoUDP. Adding each new
> encapsulation would result in duplicated code; for example, the code for
> handling NVGRE and VXLAN is exactly the same, and each new tunnel would
> have the same exact structure.
>     
> This series introduces a raw encapsulation that can support both L2 and
> L3 tunnel encapsulation.
> In order to encap an L3 tunnel, for example MPLSoUDP:
> ETH / IPV4 / UDP / MPLS / IPV4 / L4 .. L7
> When creating the flow rule we add two actions: the first one is a decap
> in order to remove the original L2 header of the packet, followed by the
> encap with the tunnel data.
> Decapsulating such a tunnel is done in the reverse order: first decap
> the outer tunnel, then encap the packet with the L2 header.
> It is important to note that from the NIC and PMD perspective both
> actions happen simultaneously, meaning we always have a valid packet.
> 
> This series also introduces the following commands for testpmd:
> * l2_encap
> * l2_decap
> * mplsogre_encap
> * mplsogre_decap
> * mplsoudp_encap
> * mplsoudp_decap
> 
> along with helper functions to set the headers that will be used for the
> actions, the same as with vxlan_encap.
> 
> [1]https://mails.dpdk.org/archives/dev/2018-August/109944.html
> 
> v6:
>  * fix compilation error
>  * fix typo.
> 
> v5:
>  * fix typos.
> 
> v4:
>  * convert to raw encap/decap, according to Adrien suggestion.
>  * keep the old vxlan and nvgre encapsulation commands.
> 
> v3:
>  * rebase on tip.
> 
> v2:
>  * add missing decap_l3 structure.
>  * fix typo.
> 
> 
> 
> Ori Kam (3):
>   ethdev: add raw encapsulation action
>   app/testpmd: add MPLSoUDP encapsulation
>   app/testpmd: add MPLSoGRE encapsulation

Series applied to dpdk-next-net/master, thanks.
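
[Editor's note] For reference, the decapsulation direction described in
the cover letter (decap the outer tunnel, then restore an L2 header) maps
to a testpmd command sequence along these lines; the MAC addresses are
illustrative::

 testpmd> set mplsoudp_decap ip-version ipv4
 testpmd> set l2_encap ip-version ipv4 eth-src 11:11:11:11:11:11
        eth-dst 22:22:22:22:22:22
 testpmd> flow create 0 ingress pattern eth / ipv4 / udp / mpls / end
        actions mplsoudp_decap / l2_encap / end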


end of thread, other threads:[~2018-10-23  9:56 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
2018-09-16 16:53 [PATCH 0/3] add generic L2/L3 tunnel encapsulation actions Ori Kam
2018-09-16 16:53 ` [PATCH 1/3] ethdev: " Ori Kam
2018-09-16 16:53 ` [PATCH 2/3] ethdev: convert testpmd encap commands to new API Ori Kam
2018-09-16 16:53 ` [PATCH 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
2018-09-26 21:00 ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ori Kam
2018-09-26 21:00   ` [PATCH v2 1/3] " Ori Kam
2018-09-26 21:00   ` [PATCH v2 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
2018-09-26 21:00   ` [PATCH v2 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
2018-10-05 12:59     ` Ferruh Yigit
2018-10-05 13:26       ` Awal, Mohammad Abdul
2018-10-05 13:27     ` Mohammad Abdul Awal
2018-10-03 20:38   ` [PATCH v2 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Thomas Monjalon
2018-10-05 12:57   ` Ferruh Yigit
2018-10-05 14:00     ` Ori Kam
2018-10-07 12:57   ` [PATCH v3 " Ori Kam
2018-10-07 12:57     ` [PATCH v3 1/3] " Ori Kam
2018-10-07 12:57     ` [PATCH v3 2/3] app/testpmd: convert testpmd encap commands to new API Ori Kam
2018-10-07 12:57     ` [PATCH v3 3/3] ethdev: remove vxlan and nvgre encapsulation commands Ori Kam
2018-10-09 16:48     ` [PATCH v3 0/3] ethdev: add generic L2/L3 tunnel encapsulation actions Ferruh Yigit
2018-10-10  6:45       ` Andrew Rybchenko
2018-10-10  9:00         ` Ori Kam
2018-10-10  9:30           ` Andrew Rybchenko
2018-10-10  9:38             ` Thomas Monjalon
2018-10-10 12:02           ` Adrien Mazarguil
2018-10-10 13:17             ` Ori Kam
2018-10-10 16:10               ` Adrien Mazarguil
2018-10-11  8:48                 ` Ori Kam
2018-10-11 13:12                   ` Adrien Mazarguil
2018-10-11 13:55                     ` Ori Kam
2018-10-16 21:40     ` [PATCH v4 0/3] ethdev: add generic raw " Ori Kam
2018-10-16 21:41       ` [PATCH v4 1/3] ethdev: add raw encapsulation action Ori Kam
2018-10-17  7:56         ` Andrew Rybchenko
2018-10-17  8:43           ` Ori Kam
2018-10-22 13:06             ` Andrew Rybchenko
2018-10-22 13:19               ` Ori Kam
2018-10-22 13:27                 ` Andrew Rybchenko
2018-10-22 13:32                   ` Ori Kam
2018-10-16 21:41       ` [PATCH v4 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
2018-10-16 21:41       ` [PATCH v4 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
2018-10-17 17:07       ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ori Kam
2018-10-17 17:07         ` [PATCH v5 1/3] ethdev: add raw encapsulation action Ori Kam
2018-10-22 14:15           ` Andrew Rybchenko
2018-10-22 14:31             ` Ori Kam
2018-10-17 17:07         ` [PATCH v5 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
2018-10-17 17:07         ` [PATCH v5 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
2018-10-22 14:45         ` [PATCH v5 0/3] ethdev: add generic raw tunnel encapsulation actions Ferruh Yigit
2018-10-22 17:38         ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ori Kam
2018-10-22 17:38           ` [PATCH v6 1/3] ethdev: add raw encapsulation action Ori Kam
2018-10-22 17:38           ` [PATCH v6 2/3] app/testpmd: add MPLSoUDP encapsulation Ori Kam
2018-10-23  9:55             ` Ferruh Yigit
2018-10-22 17:38           ` [PATCH v6 3/3] app/testpmd: add MPLSoGRE encapsulation Ori Kam
2018-10-23  9:56             ` Ferruh Yigit
2018-10-23  9:56           ` [PATCH v6 0/3] ethdev: add generic raw tunnel encapsulation Ferruh Yigit
