[PATCH net-next v2] net/mlx5e: Add mlx5e_flower_parse_meta support
From: wenxu @ 2020-01-07  9:16 UTC
  To: paulb, saeedm; +Cc: netdev

From: wenxu <wenxu@ucloud.cn>

In flowtable offload, all the devices in a flowtable share the same
flow_block, so an offloaded rule will be installed on all of these
devices. This is not correct.
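
For reference, the core offers a rule to every callback registered on
the shared flow_block in roughly the following way (a simplified
sketch of the pattern in nf_flow_table_offload.c, not verbatim kernel
code):

	struct flow_block_cb *block_cb;

	/* Each device in the flowtable registered its own callback on
	 * the flowtable's single flow_block, so one rule is offered to
	 * all of them.
	 */
	list_for_each_entry(block_cb, &flowtable->flow_block.cb_list, list)
		block_cb->cb(TC_SETUP_CLSFLOWER, &cls_flower,
			     block_cb->cb_priv);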

This is not a problem if there are only two devices in the flowtable:
a rule whose ingress and egress are on the same device can be rejected
by the driver.

But with more than two devices in the flowtable, wrong rules will be
installed in hardware.

For example:
Three devices in an offload flowtable: dev_a, dev_b, dev_c

A rule with ingress from dev_a and egress to dev_b:
The rule is installed on dev_a.
The rule fails to install on dev_b, since its ingress and egress
would be on the same device.
The rule is also installed on dev_c. This is not correct.

The flowtable offload avoids this case by restricting the ingress
device with FLOW_DISSECTOR_KEY_META in the following patch:
http://patchwork.ozlabs.org/patch/1218109/
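
The core side of that patch populates the meta key roughly as below
(a simplified sketch based on nf_flow_rule_match(), not verbatim):

	/* Restrict the rule to its ingress device by matching the
	 * ifindex with a full mask.
	 */
	key->meta.ingress_ifindex = tuple->iifidx;
	mask->meta.ingress_ifindex = 0xffffffff;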

So the mlx5e driver should also support parsing FLOW_DISSECTOR_KEY_META.

Signed-off-by: wenxu <wenxu@ucloud.cn>
---
v2: rework the patch description

 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 39 +++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 9b32a9c..33d1ce5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1805,6 +1805,40 @@ static void *get_match_headers_value(u32 flags,
 			     outer_headers);
 }
 
+static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+				   struct flow_cls_offload *f)
+{
+	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+	struct netlink_ext_ack *extack = f->common.extack;
+	struct net_device *ingress_dev;
+	struct flow_match_meta match;
+
+	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META))
+		return 0;
+
+	flow_rule_match_meta(rule, &match);
+	if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+		NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
+		return -EINVAL;
+	}
+
+	ingress_dev = __dev_get_by_index(dev_net(filter_dev),
+					 match.key->ingress_ifindex);
+	if (!ingress_dev) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't find the ingress port to match on");
+		return -EINVAL;
+	}
+
+	if (ingress_dev != filter_dev) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Can't match on the ingress filter port");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int __parse_cls_flower(struct mlx5e_priv *priv,
 			      struct mlx5_flow_spec *spec,
 			      struct flow_cls_offload *f,
@@ -1825,6 +1859,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 	u16 addr_type = 0;
 	u8 ip_proto = 0;
 	u8 *match_level;
+	int err;
 
 	match_level = outer_match_level;
 
@@ -1868,6 +1903,10 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 						    spec);
 	}
 
+	err = mlx5e_flower_parse_meta(filter_dev, f);
+	if (err)
+		return err;
+
 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
 		struct flow_match_basic match;
 
-- 
1.8.3.1

