[PATCH 00/12 net-next,v2] add flow_rule infrastructure
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Hi,

This patchset introduces a kernel intermediate representation (IR) to
express ACL hardware offloads, as already described in the previous RFC
and v1 patchsets [1] [2]. The idea is to normalize the frontend U/APIs
to use the flow dissectors and the flow actions, so that drivers can
reuse the existing TC offload driver codebase, which this patchset
converts to the flow_rule infrastructure.

After this patchset, as Or previously described, there is one extra layer:

kernel frontend U/API X --> kernel parser Y --> IR --> driver --> HW API
kernel frontend U/API Z --> kernel parser W --> IR --> driver --> HW API

However, the cost of this extra layer is very small: when adding 1
million rules via tc -batch, perf shows:

     0.06%  tc               [kernel.vmlinux]    [k] tc_setup_flow_action

at position 187 in the call graph, far from the top ten. The flow_match
representation uses the flow dissector infrastructure, just like
cls_flower, so there is no need to convert the rule match side.

The flow_action representation is very similar to the TC actions, plus
it includes the wake-up-on-LAN and queue-to-CPU actions that are needed
by the ethtool_rx_flow_spec interface in the bcm_sf2 driver, which this
patchset converts to use it (see the sketch below). It is now possible
to add tc cls_flower support to bcm_sf2 and reuse the existing parser
that was originally designed for the ethtool_rx_flow_spec interface.
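
As a rough sketch of the resulting driver-side pattern (the names below
are illustrative rather than taken verbatim from this series; see the
flow action patches for the actual definitions), a driver walks the
translated action list instead of the TC actions:

	struct flow_action_entry *act;
	int i;

	flow_action_for_each(i, act, &rule->action) {
		switch (act->id) {
		case FLOW_ACTION_DROP:
			/* program a drop filter in HW */
			break;
		case FLOW_ACTION_QUEUE:
			/* steer matching packets to the given HW queue */
			break;
		default:
			return -EOPNOTSUPP;
		}
	}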

As requested, this new patchset also converts qlogic/qede to use this
new infrastructure (see patch 12/12). This driver currently has two
parsers, one for ethtool_rx_flow_spec and another for tc cls_flower.
The driver supports simple 5-tuple matching, and the available actions
are packet drop and queue. This patchset updates the driver to use one
single parser to populate the HW IR.

Thanks.

[1] https://lwn.net/Articles/766695/
[2] https://marc.info/?l=linux-netdev&m=154233253114506&w=2

Pablo Neira Ayuso (12):
  flow_dissector: add flow_rule and flow_match structures and use them
  net/mlx5e: support for two independent packet edit actions
  flow_dissector: add flow action infrastructure
  cls_api: add translator to flow_action representation
  cls_flower: add statistics retrieval infrastructure and use it
  drivers: net: use flow action infrastructure
  cls_flower: don't expose TC actions to drivers anymore
  flow_dissector: add wake-up-on-lan and queue to flow_action
  flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure
    translator
  dsa: bcm_sf2: use flow_rule infrastructure
  qede: place ethtool_rx_flow_spec code after TC flower codebase
  qede: use ethtool_rx_flow_rule() to remove duplicated parser code

 drivers/net/dsa/bcm_sf2_cfp.c                      | 108 +--
 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       | 252 +++----
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c   | 450 ++++++-------
 drivers/net/ethernet/intel/i40e/i40e_main.c        | 178 ++---
 drivers/net/ethernet/intel/iavf/iavf_main.c        | 195 +++---
 drivers/net/ethernet/intel/igb/igb_main.c          |  64 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 743 ++++++++++-----------
 drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c |   2 +-
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  | 259 ++++---
 drivers/net/ethernet/netronome/nfp/flower/action.c | 196 +++---
 drivers/net/ethernet/netronome/nfp/flower/match.c  | 417 ++++++------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 151 ++---
 drivers/net/ethernet/qlogic/qede/qede_filter.c     | 537 ++++++---------
 include/net/flow_dissector.h                       | 185 +++++
 include/net/pkt_cls.h                              |  29 +-
 net/core/flow_dissector.c                          | 341 ++++++++++
 net/sched/cls_api.c                                | 113 ++++
 net/sched/cls_flower.c                             |  42 +-
 18 files changed, 2279 insertions(+), 1983 deletions(-)

-- 
2.11.0

[PATCH net-next,v2 01/12] flow_dissector: add flow_rule and flow_match structures and use them
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch wraps the dissector key and mask, which flower uses to
represent the matching side, in the new flow_match structure.

To avoid a follow-up patch that would edit the same LoCs in the
drivers, this patch also places the new flow_match structure inside the
flow_rule object. This new structure will also contain the flow actions
in follow-up patches.

This introduces two new interfaces:

	bool flow_rule_match_key(rule, dissector_id)

which returns true if the given matching key is set in the rule, and:

	flow_rule_match_XYZ(rule, &match);

which fetches the matching side XYZ into the match container structure,
retrieving both the key and the mask with one single call.

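For instance, a driver parser now follows this pattern (a minimal
sketch based on the conversions below; the example_parse_basic() helper
is hypothetical):

	static void example_parse_basic(struct tc_cls_flower_offload *f)
	{
		struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);

		if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
			struct flow_match_basic match;

			/* one call retrieves both key and mask */
			flow_rule_match_basic(rule, &match);
			/* use match.key->n_proto and match.mask->n_proto */
		}
	}
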
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: Use reverse xmas tree order for variable definitions, as requested by David S. Miller.

 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       | 174 ++++-----
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c   | 194 ++++------
 drivers/net/ethernet/intel/i40e/i40e_main.c        | 178 ++++-----
 drivers/net/ethernet/intel/iavf/iavf_main.c        | 195 ++++------
 drivers/net/ethernet/intel/igb/igb_main.c          |  64 ++--
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 420 +++++++++------------
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  | 202 +++++-----
 drivers/net/ethernet/netronome/nfp/flower/action.c |  11 +-
 drivers/net/ethernet/netronome/nfp/flower/match.c  | 417 ++++++++++----------
 .../net/ethernet/netronome/nfp/flower/offload.c    | 145 +++----
 drivers/net/ethernet/qlogic/qede/qede_filter.c     |  85 ++---
 include/net/flow_dissector.h                       | 107 ++++++
 include/net/pkt_cls.h                              |  10 +-
 net/core/flow_dissector.c                          | 133 +++++++
 net/sched/cls_flower.c                             |  18 +-
 15 files changed, 1151 insertions(+), 1202 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 749f63beddd8..b82143d6cdde 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -177,18 +177,12 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
 	return 0;
 }
 
-#define GET_KEY(flow_cmd, key_type)					\
-		skb_flow_dissector_target((flow_cmd)->dissector, key_type,\
-					  (flow_cmd)->key)
-#define GET_MASK(flow_cmd, key_type)					\
-		skb_flow_dissector_target((flow_cmd)->dissector, key_type,\
-					  (flow_cmd)->mask)
-
 static int bnxt_tc_parse_flow(struct bnxt *bp,
 			      struct tc_cls_flower_offload *tc_flow_cmd,
 			      struct bnxt_tc_flow *flow)
 {
-	struct flow_dissector *dissector = tc_flow_cmd->dissector;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(tc_flow_cmd);
+	struct flow_dissector *dissector = rule->match.dissector;
 
 	/* KEY_CONTROL and KEY_BASIC are needed for forming a meaningful key */
 	if ((dissector->used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL)) == 0 ||
@@ -198,140 +192,120 @@ static int bnxt_tc_parse_flow(struct bnxt *bp,
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_BASIC);
-		struct flow_dissector_key_basic *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_BASIC);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		flow->l2_key.ether_type = key->n_proto;
-		flow->l2_mask.ether_type = mask->n_proto;
+		flow_rule_match_basic(rule, &match);
+		flow->l2_key.ether_type = match.key->n_proto;
+		flow->l2_mask.ether_type = match.mask->n_proto;
 
-		if (key->n_proto == htons(ETH_P_IP) ||
-		    key->n_proto == htons(ETH_P_IPV6)) {
-			flow->l4_key.ip_proto = key->ip_proto;
-			flow->l4_mask.ip_proto = mask->ip_proto;
+		if (match.key->n_proto == htons(ETH_P_IP) ||
+		    match.key->n_proto == htons(ETH_P_IPV6)) {
+			flow->l4_key.ip_proto = match.key->ip_proto;
+			flow->l4_mask.ip_proto = match.mask->ip_proto;
 		}
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_ETH_ADDRS);
-		struct flow_dissector_key_eth_addrs *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_ETH_ADDRS);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
 
+		flow_rule_match_eth_addrs(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_ETH_ADDRS;
-		ether_addr_copy(flow->l2_key.dmac, key->dst);
-		ether_addr_copy(flow->l2_mask.dmac, mask->dst);
-		ether_addr_copy(flow->l2_key.smac, key->src);
-		ether_addr_copy(flow->l2_mask.smac, mask->src);
+		ether_addr_copy(flow->l2_key.dmac, match.key->dst);
+		ether_addr_copy(flow->l2_mask.dmac, match.mask->dst);
+		ether_addr_copy(flow->l2_key.smac, match.key->src);
+		ether_addr_copy(flow->l2_mask.smac, match.mask->src);
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_VLAN);
-		struct flow_dissector_key_vlan *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_VLAN);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 
+		flow_rule_match_vlan(rule, &match);
 		flow->l2_key.inner_vlan_tci =
-		   cpu_to_be16(VLAN_TCI(key->vlan_id, key->vlan_priority));
+			cpu_to_be16(VLAN_TCI(match.key->vlan_id,
+					     match.key->vlan_priority));
 		flow->l2_mask.inner_vlan_tci =
-		   cpu_to_be16((VLAN_TCI(mask->vlan_id, mask->vlan_priority)));
+			cpu_to_be16((VLAN_TCI(match.mask->vlan_id,
+					      match.mask->vlan_priority)));
 		flow->l2_key.inner_vlan_tpid = htons(ETH_P_8021Q);
 		flow->l2_mask.inner_vlan_tpid = htons(0xffff);
 		flow->l2_key.num_vlans = 1;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_IPV4_ADDRS);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_IPV4_ADDRS);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
+		struct flow_match_ipv4_addrs match;
 
+		flow_rule_match_ipv4_addrs(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_IPV4_ADDRS;
-		flow->l3_key.ipv4.daddr.s_addr = key->dst;
-		flow->l3_mask.ipv4.daddr.s_addr = mask->dst;
-		flow->l3_key.ipv4.saddr.s_addr = key->src;
-		flow->l3_mask.ipv4.saddr.s_addr = mask->src;
-	} else if (dissector_uses_key(dissector,
-				      FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_IPV6_ADDRS);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_IPV6_ADDRS);
-
+		flow->l3_key.ipv4.daddr.s_addr = match.key->dst;
+		flow->l3_mask.ipv4.daddr.s_addr = match.mask->dst;
+		flow->l3_key.ipv4.saddr.s_addr = match.key->src;
+		flow->l3_mask.ipv4.saddr.s_addr = match.mask->src;
+	} else if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
+		struct flow_match_ipv6_addrs match;
+
+		flow_rule_match_ipv6_addrs(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_IPV6_ADDRS;
-		flow->l3_key.ipv6.daddr = key->dst;
-		flow->l3_mask.ipv6.daddr = mask->dst;
-		flow->l3_key.ipv6.saddr = key->src;
-		flow->l3_mask.ipv6.saddr = mask->src;
+		flow->l3_key.ipv6.daddr = match.key->dst;
+		flow->l3_mask.ipv6.daddr = match.mask->dst;
+		flow->l3_key.ipv6.saddr = match.key->src;
+		flow->l3_mask.ipv6.saddr = match.mask->src;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_PORTS);
-		struct flow_dissector_key_ports *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_PORTS);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
 
+		flow_rule_match_ports(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_PORTS;
-		flow->l4_key.ports.dport = key->dst;
-		flow->l4_mask.ports.dport = mask->dst;
-		flow->l4_key.ports.sport = key->src;
-		flow->l4_mask.ports.sport = mask->src;
+		flow->l4_key.ports.dport = match.key->dst;
+		flow->l4_mask.ports.dport = match.mask->dst;
+		flow->l4_key.ports.sport = match.key->src;
+		flow->l4_mask.ports.sport = match.mask->src;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_ICMP)) {
-		struct flow_dissector_key_icmp *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_ICMP);
-		struct flow_dissector_key_icmp *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_ICMP);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ICMP)) {
+		struct flow_match_icmp match;
 
+		flow_rule_match_icmp(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_ICMP;
-		flow->l4_key.icmp.type = key->type;
-		flow->l4_key.icmp.code = key->code;
-		flow->l4_mask.icmp.type = mask->type;
-		flow->l4_mask.icmp.code = mask->code;
+		flow->l4_key.icmp.type = match.key->type;
+		flow->l4_key.icmp.code = match.key->code;
+		flow->l4_mask.icmp.type = match.mask->type;
+		flow->l4_mask.icmp.code = match.mask->code;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS);
-		struct flow_dissector_key_ipv4_addrs *mask =
-				GET_MASK(tc_flow_cmd,
-					 FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+		struct flow_match_ipv4_addrs match;
 
+		flow_rule_match_enc_ipv4_addrs(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_TUNL_IPV4_ADDRS;
-		flow->tun_key.u.ipv4.dst = key->dst;
-		flow->tun_mask.u.ipv4.dst = mask->dst;
-		flow->tun_key.u.ipv4.src = key->src;
-		flow->tun_mask.u.ipv4.src = mask->src;
-	} else if (dissector_uses_key(dissector,
+		flow->tun_key.u.ipv4.dst = match.key->dst;
+		flow->tun_mask.u.ipv4.dst = match.mask->dst;
+		flow->tun_key.u.ipv4.src = match.key->src;
+		flow->tun_mask.u.ipv4.src = match.mask->src;
+	} else if (flow_rule_match_key(rule,
 				      FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS)) {
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
-		struct flow_dissector_key_keyid *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_ENC_KEYID);
-		struct flow_dissector_key_keyid *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_ENC_KEYID);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
 
+		flow_rule_match_enc_keyid(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_TUNL_ID;
-		flow->tun_key.tun_id = key32_to_tunnel_id(key->keyid);
-		flow->tun_mask.tun_id = key32_to_tunnel_id(mask->keyid);
+		flow->tun_key.tun_id = key32_to_tunnel_id(match.key->keyid);
+		flow->tun_mask.tun_id = key32_to_tunnel_id(match.mask->keyid);
 	}
 
-	if (dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			GET_KEY(tc_flow_cmd, FLOW_DISSECTOR_KEY_ENC_PORTS);
-		struct flow_dissector_key_ports *mask =
-			GET_MASK(tc_flow_cmd, FLOW_DISSECTOR_KEY_ENC_PORTS);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
+		struct flow_match_ports match;
 
+		flow_rule_match_enc_ports(rule, &match);
 		flow->flags |= BNXT_TC_FLOW_FLAGS_TUNL_PORTS;
-		flow->tun_key.tp_dst = key->dst;
-		flow->tun_mask.tp_dst = mask->dst;
-		flow->tun_key.tp_src = key->src;
-		flow->tun_mask.tp_src = mask->src;
+		flow->tun_key.tp_dst = match.key->dst;
+		flow->tun_mask.tp_dst = match.mask->dst;
+		flow->tun_key.tp_src = match.key->src;
+		flow->tun_mask.tp_src = match.mask->src;
 	}
 
 	return bnxt_tc_parse_actions(bp, &flow->actions, tc_flow_cmd->exts);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index c116f96956fe..39c5af5dad3d 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -83,28 +83,23 @@ static void cxgb4_process_flow_match(struct net_device *dev,
 				     struct tc_cls_flower_offload *cls,
 				     struct ch_filter_specification *fs)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
 	u16 addr_type = 0;
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  cls->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
 
-		addr_type = key->addr_type;
+		flow_rule_match_control(rule, &match);
+		addr_type = match.key->addr_type;
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  cls->key);
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  cls->mask);
-		u16 ethtype_key = ntohs(key->n_proto);
-		u16 ethtype_mask = ntohs(mask->n_proto);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
+		u16 ethtype_key, ethtype_mask;
+
+		flow_rule_match_basic(rule, &match);
+		ethtype_key = ntohs(match.key->n_proto);
+		ethtype_mask = ntohs(match.mask->n_proto);
 
 		if (ethtype_key == ETH_P_ALL) {
 			ethtype_key = 0;
@@ -116,115 +111,89 @@ static void cxgb4_process_flow_match(struct net_device *dev,
 
 		fs->val.ethtype = ethtype_key;
 		fs->mask.ethtype = ethtype_mask;
-		fs->val.proto = key->ip_proto;
-		fs->mask.proto = mask->ip_proto;
+		fs->val.proto = match.key->ip_proto;
+		fs->mask.proto = match.mask->ip_proto;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  cls->key);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  cls->mask);
+		struct flow_match_ipv4_addrs match;
+
+		flow_rule_match_ipv4_addrs(rule, &match);
 		fs->type = 0;
-		memcpy(&fs->val.lip[0], &key->dst, sizeof(key->dst));
-		memcpy(&fs->val.fip[0], &key->src, sizeof(key->src));
-		memcpy(&fs->mask.lip[0], &mask->dst, sizeof(mask->dst));
-		memcpy(&fs->mask.fip[0], &mask->src, sizeof(mask->src));
+		memcpy(&fs->val.lip[0], &match.key->dst, sizeof(match.key->dst));
+		memcpy(&fs->val.fip[0], &match.key->src, sizeof(match.key->src));
+		memcpy(&fs->mask.lip[0], &match.mask->dst, sizeof(match.mask->dst));
+		memcpy(&fs->mask.fip[0], &match.mask->src, sizeof(match.mask->src));
 
 		/* also initialize nat_lip/fip to same values */
-		memcpy(&fs->nat_lip[0], &key->dst, sizeof(key->dst));
-		memcpy(&fs->nat_fip[0], &key->src, sizeof(key->src));
-
+		memcpy(&fs->nat_lip[0], &match.key->dst, sizeof(match.key->dst));
+		memcpy(&fs->nat_fip[0], &match.key->src, sizeof(match.key->src));
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  cls->key);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  cls->mask);
+		struct flow_match_ipv6_addrs match;
 
+		flow_rule_match_ipv6_addrs(rule, &match);
 		fs->type = 1;
-		memcpy(&fs->val.lip[0], key->dst.s6_addr, sizeof(key->dst));
-		memcpy(&fs->val.fip[0], key->src.s6_addr, sizeof(key->src));
-		memcpy(&fs->mask.lip[0], mask->dst.s6_addr, sizeof(mask->dst));
-		memcpy(&fs->mask.fip[0], mask->src.s6_addr, sizeof(mask->src));
+		memcpy(&fs->val.lip[0], match.key->dst.s6_addr,
+		       sizeof(match.key->dst));
+		memcpy(&fs->val.fip[0], match.key->src.s6_addr,
+		       sizeof(match.key->src));
+		memcpy(&fs->mask.lip[0], match.mask->dst.s6_addr,
+		       sizeof(match.mask->dst));
+		memcpy(&fs->mask.fip[0], match.mask->src.s6_addr,
+		       sizeof(match.mask->src));
 
 		/* also initialize nat_lip/fip to same values */
-		memcpy(&fs->nat_lip[0], key->dst.s6_addr, sizeof(key->dst));
-		memcpy(&fs->nat_fip[0], key->src.s6_addr, sizeof(key->src));
+		memcpy(&fs->nat_lip[0], match.key->dst.s6_addr,
+		       sizeof(match.key->dst));
+		memcpy(&fs->nat_fip[0], match.key->src.s6_addr,
+		       sizeof(match.key->src));
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key, *mask;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
 
-		key = skb_flow_dissector_target(cls->dissector,
-						FLOW_DISSECTOR_KEY_PORTS,
-						cls->key);
-		mask = skb_flow_dissector_target(cls->dissector,
-						 FLOW_DISSECTOR_KEY_PORTS,
-						 cls->mask);
-		fs->val.lport = cpu_to_be16(key->dst);
-		fs->mask.lport = cpu_to_be16(mask->dst);
-		fs->val.fport = cpu_to_be16(key->src);
-		fs->mask.fport = cpu_to_be16(mask->src);
+		flow_rule_match_ports(rule, &match);
+		fs->val.lport = cpu_to_be16(match.key->dst);
+		fs->mask.lport = cpu_to_be16(match.mask->dst);
+		fs->val.fport = cpu_to_be16(match.key->src);
+		fs->mask.fport = cpu_to_be16(match.mask->src);
 
 		/* also initialize nat_lport/fport to same values */
-		fs->nat_lport = cpu_to_be16(key->dst);
-		fs->nat_fport = cpu_to_be16(key->src);
+		fs->nat_lport = cpu_to_be16(match.key->dst);
+		fs->nat_fport = cpu_to_be16(match.key->src);
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_IP)) {
-		struct flow_dissector_key_ip *key, *mask;
-
-		key = skb_flow_dissector_target(cls->dissector,
-						FLOW_DISSECTOR_KEY_IP,
-						cls->key);
-		mask = skb_flow_dissector_target(cls->dissector,
-						 FLOW_DISSECTOR_KEY_IP,
-						 cls->mask);
-		fs->val.tos = key->tos;
-		fs->mask.tos = mask->tos;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
+		struct flow_match_ip match;
+
+		flow_rule_match_ip(rule, &match);
+		fs->val.tos = match.key->tos;
+		fs->mask.tos = match.mask->tos;
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
-		struct flow_dissector_key_keyid *key, *mask;
-
-		key = skb_flow_dissector_target(cls->dissector,
-						FLOW_DISSECTOR_KEY_ENC_KEYID,
-						cls->key);
-		mask = skb_flow_dissector_target(cls->dissector,
-						 FLOW_DISSECTOR_KEY_ENC_KEYID,
-						 cls->mask);
-		fs->val.vni = be32_to_cpu(key->keyid);
-		fs->mask.vni = be32_to_cpu(mask->keyid);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
+
+		flow_rule_match_enc_keyid(rule, &match);
+		fs->val.vni = be32_to_cpu(match.key->keyid);
+		fs->mask.vni = be32_to_cpu(match.mask->keyid);
 		if (fs->mask.vni) {
 			fs->val.encap_vld = 1;
 			fs->mask.encap_vld = 1;
 		}
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key, *mask;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 		u16 vlan_tci, vlan_tci_mask;
 
-		key = skb_flow_dissector_target(cls->dissector,
-						FLOW_DISSECTOR_KEY_VLAN,
-						cls->key);
-		mask = skb_flow_dissector_target(cls->dissector,
-						 FLOW_DISSECTOR_KEY_VLAN,
-						 cls->mask);
-		vlan_tci = key->vlan_id | (key->vlan_priority <<
-					   VLAN_PRIO_SHIFT);
-		vlan_tci_mask = mask->vlan_id | (mask->vlan_priority <<
-						 VLAN_PRIO_SHIFT);
+		flow_rule_match_vlan(rule, &match);
+		vlan_tci = match.key->vlan_id | (match.key->vlan_priority <<
+					       VLAN_PRIO_SHIFT);
+		vlan_tci_mask = match.mask->vlan_id | (match.mask->vlan_priority <<
+						     VLAN_PRIO_SHIFT);
 		fs->val.ivlan = vlan_tci;
 		fs->mask.ivlan = vlan_tci_mask;
 
@@ -255,10 +224,12 @@ static void cxgb4_process_flow_match(struct net_device *dev,
 static int cxgb4_validate_flow_match(struct net_device *dev,
 				     struct tc_cls_flower_offload *cls)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
+	struct flow_dissector *dissector = rule->match.dissector;
 	u16 ethtype_mask = 0;
 	u16 ethtype_key = 0;
 
-	if (cls->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
@@ -268,36 +239,29 @@ static int cxgb4_validate_flow_match(struct net_device *dev,
 	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
 	      BIT(FLOW_DISSECTOR_KEY_IP))) {
 		netdev_warn(dev, "Unsupported key used: 0x%x\n",
-			    cls->dissector->used_keys);
+			    dissector->used_keys);
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  cls->key);
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(cls->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  cls->mask);
-		ethtype_key = ntohs(key->n_proto);
-		ethtype_mask = ntohs(mask->n_proto);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
+
+		flow_rule_match_basic(rule, &match);
+		ethtype_key = ntohs(match.key->n_proto);
+		ethtype_mask = ntohs(match.mask->n_proto);
 	}
 
-	if (dissector_uses_key(cls->dissector, FLOW_DISSECTOR_KEY_IP)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
 		u16 eth_ip_type = ethtype_key & ethtype_mask;
-		struct flow_dissector_key_ip *mask;
+		struct flow_match_ip match;
 
 		if (eth_ip_type != ETH_P_IP && eth_ip_type != ETH_P_IPV6) {
 			netdev_err(dev, "IP Key supported only with IPv4/v6");
 			return -EINVAL;
 		}
 
-		mask = skb_flow_dissector_target(cls->dissector,
-						 FLOW_DISSECTOR_KEY_IP,
-						 cls->mask);
-		if (mask->ttl) {
+		flow_rule_match_ip(rule, &match);
+		if (match.mask->ttl) {
 			netdev_warn(dev, "ttl match unsupported for offload");
 			return -EOPNOTSUPP;
 		}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 47f0fdadbac9..b69c6a560b83 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -7169,11 +7169,13 @@ static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
 				 struct tc_cls_flower_offload *f,
 				 struct i40e_cloud_filter *filter)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	u16 n_proto_mask = 0, n_proto_key = 0, addr_type = 0;
 	struct i40e_pf *pf = vsi->back;
 	u8 field_flags = 0;
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
@@ -7183,143 +7185,109 @@ static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
 	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
 	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
 		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
-			f->dissector->used_keys);
+			dissector->used_keys);
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
-		struct flow_dissector_key_keyid *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_KEYID,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
 
-		struct flow_dissector_key_keyid *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_KEYID,
-						  f->mask);
-
-		if (mask->keyid != 0)
+		flow_rule_match_enc_keyid(rule, &match);
+		if (match.mask->keyid != 0)
 			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
 
-		filter->tenant_id = be32_to_cpu(key->keyid);
+		filter->tenant_id = be32_to_cpu(match.key->keyid);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->key);
-
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		n_proto_key = ntohs(key->n_proto);
-		n_proto_mask = ntohs(mask->n_proto);
+		flow_rule_match_basic(rule, &match);
+		n_proto_key = ntohs(match.key->n_proto);
+		n_proto_mask = ntohs(match.mask->n_proto);
 
 		if (n_proto_key == ETH_P_ALL) {
 			n_proto_key = 0;
 			n_proto_mask = 0;
 		}
 		filter->n_proto = n_proto_key & n_proto_mask;
-		filter->ip_proto = key->ip_proto;
+		filter->ip_proto = match.key->ip_proto;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
 
-		struct flow_dissector_key_eth_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->mask);
+		flow_rule_match_eth_addrs(rule, &match);
 
 		/* use is_broadcast and is_zero to check for all 0xf or 0 */
-		if (!is_zero_ether_addr(mask->dst)) {
-			if (is_broadcast_ether_addr(mask->dst)) {
+		if (!is_zero_ether_addr(match.mask->dst)) {
+			if (is_broadcast_ether_addr(match.mask->dst)) {
 				field_flags |= I40E_CLOUD_FIELD_OMAC;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
-					mask->dst);
+					match.mask->dst);
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (!is_zero_ether_addr(mask->src)) {
-			if (is_broadcast_ether_addr(mask->src)) {
+		if (!is_zero_ether_addr(match.mask->src)) {
+			if (is_broadcast_ether_addr(match.mask->src)) {
 				field_flags |= I40E_CLOUD_FIELD_IMAC;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
-					mask->src);
+					match.mask->src);
 				return I40E_ERR_CONFIG;
 			}
 		}
-		ether_addr_copy(filter->dst_mac, key->dst);
-		ether_addr_copy(filter->src_mac, key->src);
+		ether_addr_copy(filter->dst_mac, match.key->dst);
+		ether_addr_copy(filter->src_mac, match.key->src);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->key);
-		struct flow_dissector_key_vlan *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 
-		if (mask->vlan_id) {
-			if (mask->vlan_id == VLAN_VID_MASK) {
+		flow_rule_match_vlan(rule, &match);
+		if (match.mask->vlan_id) {
+			if (match.mask->vlan_id == VLAN_VID_MASK) {
 				field_flags |= I40E_CLOUD_FIELD_IVLAN;
 
 			} else {
 				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
-					mask->vlan_id);
+					match.mask->vlan_id);
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		filter->vlan_id = cpu_to_be16(key->vlan_id);
+		filter->vlan_id = cpu_to_be16(match.key->vlan_id);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
 
-		addr_type = key->addr_type;
+		flow_rule_match_control(rule, &match);
+		addr_type = match.key->addr_type;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->mask);
-
-		if (mask->dst) {
-			if (mask->dst == cpu_to_be32(0xffffffff)) {
+		struct flow_match_ipv4_addrs match;
+
+		flow_rule_match_ipv4_addrs(rule, &match);
+		if (match.mask->dst) {
+			if (match.mask->dst == cpu_to_be32(0xffffffff)) {
 				field_flags |= I40E_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad ip dst mask %pI4b\n",
-					&mask->dst);
+					&match.mask->dst);
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (mask->src) {
-			if (mask->src == cpu_to_be32(0xffffffff)) {
+		if (match.mask->src) {
+			if (match.mask->src == cpu_to_be32(0xffffffff)) {
 				field_flags |= I40E_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad ip src mask %pI4b\n",
-					&mask->src);
+					&match.mask->src);
 				return I40E_ERR_CONFIG;
 			}
 		}
@@ -7328,70 +7296,60 @@ static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
 			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
 			return I40E_ERR_CONFIG;
 		}
-		filter->dst_ipv4 = key->dst;
-		filter->src_ipv4 = key->src;
+		filter->dst_ipv4 = match.key->dst;
+		filter->src_ipv4 = match.key->src;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->mask);
+		struct flow_match_ipv6_addrs match;
+
+		flow_rule_match_ipv6_addrs(rule, &match);
 
 		/* src and dest IPV6 address should not be LOOPBACK
 		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
 		 */
-		if (ipv6_addr_loopback(&key->dst) ||
-		    ipv6_addr_loopback(&key->src)) {
+		if (ipv6_addr_loopback(&match.key->dst) ||
+		    ipv6_addr_loopback(&match.key->src)) {
 			dev_err(&pf->pdev->dev,
 				"Bad ipv6, addr is LOOPBACK\n");
 			return I40E_ERR_CONFIG;
 		}
-		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
+		if (!ipv6_addr_any(&match.mask->dst) ||
+		    !ipv6_addr_any(&match.mask->src))
 			field_flags |= I40E_CLOUD_FIELD_IIP;
 
-		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
+		memcpy(&filter->src_ipv6, &match.key->src.s6_addr32,
 		       sizeof(filter->src_ipv6));
-		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
+		memcpy(&filter->dst_ipv6, &match.key->dst.s6_addr32,
 		       sizeof(filter->dst_ipv6));
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->key);
-		struct flow_dissector_key_ports *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
 
-		if (mask->src) {
-			if (mask->src == cpu_to_be16(0xffff)) {
+		flow_rule_match_ports(rule, &match);
+		if (match.mask->src) {
+			if (match.mask->src == cpu_to_be16(0xffff)) {
 				field_flags |= I40E_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
-					be16_to_cpu(mask->src));
+					be16_to_cpu(match.mask->src));
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (mask->dst) {
-			if (mask->dst == cpu_to_be16(0xffff)) {
+		if (match.mask->dst) {
+			if (match.mask->dst == cpu_to_be16(0xffff)) {
 				field_flags |= I40E_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
-					be16_to_cpu(mask->dst));
+					be16_to_cpu(match.mask->dst));
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		filter->dst_port = key->dst;
-		filter->src_port = key->src;
+		filter->dst_port = match.key->dst;
+		filter->src_port = match.key->src;
 
 		switch (filter->ip_proto) {
 		case IPPROTO_TCP:
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 9f2b7b7adf6b..4569d69a2b55 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -2439,6 +2439,8 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 				 struct tc_cls_flower_offload *f,
 				 struct iavf_cloud_filter *filter)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	u16 n_proto_mask = 0;
 	u16 n_proto_key = 0;
 	u8 field_flags = 0;
@@ -2447,7 +2449,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 	int i = 0;
 	struct virtchnl_filter *vf = &filter->f;
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
@@ -2457,32 +2459,24 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
 	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
 		dev_err(&adapter->pdev->dev, "Unsupported key used: 0x%x\n",
-			f->dissector->used_keys);
+			dissector->used_keys);
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
-		struct flow_dissector_key_keyid *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_KEYID,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
 
-		if (mask->keyid != 0)
+		flow_rule_match_enc_keyid(rule, &match);
+		if (match.mask->keyid != 0)
 			field_flags |= IAVF_CLOUD_FIELD_TEN_ID;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->mask);
-		n_proto_key = ntohs(key->n_proto);
-		n_proto_mask = ntohs(mask->n_proto);
+		flow_rule_match_basic(rule, &match);
+		n_proto_key = ntohs(match.key->n_proto);
+		n_proto_mask = ntohs(match.mask->n_proto);
 
 		if (n_proto_key == ETH_P_ALL) {
 			n_proto_key = 0;
@@ -2496,122 +2490,103 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 			vf->flow_type = VIRTCHNL_TCP_V6_FLOW;
 		}
 
-		if (key->ip_proto != IPPROTO_TCP) {
+		if (match.key->ip_proto != IPPROTO_TCP) {
 			dev_info(&adapter->pdev->dev, "Only TCP transport is supported\n");
 			return -EINVAL;
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
+
+		flow_rule_match_eth_addrs(rule, &match);
 
-		struct flow_dissector_key_eth_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->mask);
 		/* use is_broadcast and is_zero to check for all 0xf or 0 */
-		if (!is_zero_ether_addr(mask->dst)) {
-			if (is_broadcast_ether_addr(mask->dst)) {
+		if (!is_zero_ether_addr(match.mask->dst)) {
+			if (is_broadcast_ether_addr(match.mask->dst)) {
 				field_flags |= IAVF_CLOUD_FIELD_OMAC;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad ether dest mask %pM\n",
-					mask->dst);
+					match.mask->dst);
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (!is_zero_ether_addr(mask->src)) {
-			if (is_broadcast_ether_addr(mask->src)) {
+		if (!is_zero_ether_addr(match.mask->src)) {
+			if (is_broadcast_ether_addr(match.mask->src)) {
 				field_flags |= IAVF_CLOUD_FIELD_IMAC;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad ether src mask %pM\n",
-					mask->src);
+					match.mask->src);
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (!is_zero_ether_addr(key->dst))
-			if (is_valid_ether_addr(key->dst) ||
-			    is_multicast_ether_addr(key->dst)) {
+		if (!is_zero_ether_addr(match.key->dst))
+			if (is_valid_ether_addr(match.key->dst) ||
+			    is_multicast_ether_addr(match.key->dst)) {
 				/* set the mask if a valid dst_mac address */
 				for (i = 0; i < ETH_ALEN; i++)
 					vf->mask.tcp_spec.dst_mac[i] |= 0xff;
 				ether_addr_copy(vf->data.tcp_spec.dst_mac,
-						key->dst);
+						match.key->dst);
 			}
 
-		if (!is_zero_ether_addr(key->src))
-			if (is_valid_ether_addr(key->src) ||
-			    is_multicast_ether_addr(key->src)) {
+		if (!is_zero_ether_addr(match.key->src))
+			if (is_valid_ether_addr(match.key->src) ||
+			    is_multicast_ether_addr(match.key->src)) {
 				/* set the mask if a valid dst_mac address */
 				for (i = 0; i < ETH_ALEN; i++)
 					vf->mask.tcp_spec.src_mac[i] |= 0xff;
 				ether_addr_copy(vf->data.tcp_spec.src_mac,
-						key->src);
+						match.key->src);
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->key);
-		struct flow_dissector_key_vlan *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 
-		if (mask->vlan_id) {
-			if (mask->vlan_id == VLAN_VID_MASK) {
+		flow_rule_match_vlan(rule, &match);
+		if (match.mask->vlan_id) {
+			if (match.mask->vlan_id == VLAN_VID_MASK) {
 				field_flags |= IAVF_CLOUD_FIELD_IVLAN;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad vlan mask %u\n",
-					mask->vlan_id);
+					match.mask->vlan_id);
 				return I40E_ERR_CONFIG;
 			}
 		}
 		vf->mask.tcp_spec.vlan_id |= cpu_to_be16(0xffff);
-		vf->data.tcp_spec.vlan_id = cpu_to_be16(key->vlan_id);
+		vf->data.tcp_spec.vlan_id = cpu_to_be16(match.key->vlan_id);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
 
-		addr_type = key->addr_type;
+		flow_rule_match_control(rule, &match);
+		addr_type = match.key->addr_type;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->mask);
-
-		if (mask->dst) {
-			if (mask->dst == cpu_to_be32(0xffffffff)) {
+		struct flow_match_ipv4_addrs match;
+
+		flow_rule_match_ipv4_addrs(rule, &match);
+		if (match.mask->dst) {
+			if (match.mask->dst == cpu_to_be32(0xffffffff)) {
 				field_flags |= IAVF_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad ip dst mask 0x%08x\n",
-					be32_to_cpu(mask->dst));
+					be32_to_cpu(match.mask->dst));
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (mask->src) {
-			if (mask->src == cpu_to_be32(0xffffffff)) {
+		if (match.mask->src) {
+			if (match.mask->src == cpu_to_be32(0xffffffff)) {
 				field_flags |= IAVF_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad ip src mask 0x%08x\n",
-					be32_to_cpu(mask->dst));
+					be32_to_cpu(match.mask->dst));
 				return I40E_ERR_CONFIG;
 			}
 		}
@@ -2620,28 +2595,23 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 			dev_info(&adapter->pdev->dev, "Tenant id not allowed for ip filter\n");
 			return I40E_ERR_CONFIG;
 		}
-		if (key->dst) {
+		if (match.key->dst) {
 			vf->mask.tcp_spec.dst_ip[0] |= cpu_to_be32(0xffffffff);
-			vf->data.tcp_spec.dst_ip[0] = key->dst;
+			vf->data.tcp_spec.dst_ip[0] = match.key->dst;
 		}
-		if (key->src) {
+		if (match.key->src) {
 			vf->mask.tcp_spec.src_ip[0] |= cpu_to_be32(0xffffffff);
-			vf->data.tcp_spec.src_ip[0] = key->src;
+			vf->data.tcp_spec.src_ip[0] = match.key->src;
 		}
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->mask);
+		struct flow_match_ipv6_addrs match;
+
+		flow_rule_match_ipv6_addrs(rule, &match);
 
 		/* validate mask, make sure it is not IPV6_ADDR_ANY */
-		if (ipv6_addr_any(&mask->dst)) {
+		if (ipv6_addr_any(&match.mask->dst)) {
 			dev_err(&adapter->pdev->dev, "Bad ipv6 dst mask 0x%02x\n",
 				IPV6_ADDR_ANY);
 			return I40E_ERR_CONFIG;
@@ -2650,61 +2620,56 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
 		/* src and dest IPv6 address should not be LOOPBACK
 		 * (0:0:0:0:0:0:0:1) which can be represented as ::1
 		 */
-		if (ipv6_addr_loopback(&key->dst) ||
-		    ipv6_addr_loopback(&key->src)) {
+		if (ipv6_addr_loopback(&match.key->dst) ||
+		    ipv6_addr_loopback(&match.key->src)) {
 			dev_err(&adapter->pdev->dev,
 				"ipv6 addr should not be loopback\n");
 			return I40E_ERR_CONFIG;
 		}
-		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
+		if (!ipv6_addr_any(&match.mask->dst) ||
+		    !ipv6_addr_any(&match.mask->src))
 			field_flags |= IAVF_CLOUD_FIELD_IIP;
 
 		for (i = 0; i < 4; i++)
 			vf->mask.tcp_spec.dst_ip[i] |= cpu_to_be32(0xffffffff);
-		memcpy(&vf->data.tcp_spec.dst_ip, &key->dst.s6_addr32,
+		memcpy(&vf->data.tcp_spec.dst_ip, &match.key->dst.s6_addr32,
 		       sizeof(vf->data.tcp_spec.dst_ip));
 		for (i = 0; i < 4; i++)
 			vf->mask.tcp_spec.src_ip[i] |= cpu_to_be32(0xffffffff);
-		memcpy(&vf->data.tcp_spec.src_ip, &key->src.s6_addr32,
+		memcpy(&vf->data.tcp_spec.src_ip, &match.key->src.s6_addr32,
 		       sizeof(vf->data.tcp_spec.src_ip));
 	}
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->key);
-		struct flow_dissector_key_ports *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->mask);
-
-		if (mask->src) {
-			if (mask->src == cpu_to_be16(0xffff)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
+
+		flow_rule_match_ports(rule, &match);
+		if (match.mask->src) {
+			if (match.mask->src == cpu_to_be16(0xffff)) {
 				field_flags |= IAVF_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad src port mask %u\n",
-					be16_to_cpu(mask->src));
+					be16_to_cpu(match.mask->src));
 				return I40E_ERR_CONFIG;
 			}
 		}
 
-		if (mask->dst) {
-			if (mask->dst == cpu_to_be16(0xffff)) {
+		if (match.mask->dst) {
+			if (match.mask->dst == cpu_to_be16(0xffff)) {
 				field_flags |= IAVF_CLOUD_FIELD_IIP;
 			} else {
 				dev_err(&adapter->pdev->dev, "Bad dst port mask %u\n",
-					be16_to_cpu(mask->dst));
+					be16_to_cpu(match.mask->dst));
 				return I40E_ERR_CONFIG;
 			}
 		}
-		if (key->dst) {
+		if (match.key->dst) {
 			vf->mask.tcp_spec.dst_port |= cpu_to_be16(0xffff);
-			vf->data.tcp_spec.dst_port = key->dst;
+			vf->data.tcp_spec.dst_port = match.key->dst;
 		}
 
-		if (key->src) {
+		if (match.key->src) {
 			vf->mask.tcp_spec.src_port |= cpu_to_be16(0xffff);
-			vf->data.tcp_spec.src_port = key->src;
+			vf->data.tcp_spec.src_port = match.key->src;
 		}
 	}
 	vf->field_flags = field_flags;
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 4584ebc9e8fe..c666494b3d1f 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2581,9 +2581,11 @@ static int igb_parse_cls_flower(struct igb_adapter *adapter,
 				int traffic_class,
 				struct igb_nfc_filter *input)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	struct netlink_ext_ack *extack = f->common.extack;
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
@@ -2593,78 +2595,60 @@ static int igb_parse_cls_flower(struct igb_adapter *adapter,
 		return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key, *mask;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
 
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						 f->mask);
-
-		if (!is_zero_ether_addr(mask->dst)) {
-			if (!is_broadcast_ether_addr(mask->dst)) {
+		flow_rule_match_eth_addrs(rule, &match);
+		if (!is_zero_ether_addr(match.mask->dst)) {
+			if (!is_broadcast_ether_addr(match.mask->dst)) {
 				NL_SET_ERR_MSG_MOD(extack, "Only full masks are supported for destination MAC address");
 				return -EINVAL;
 			}
 
 			input->filter.match_flags |=
 				IGB_FILTER_FLAG_DST_MAC_ADDR;
-			ether_addr_copy(input->filter.dst_addr, key->dst);
+			ether_addr_copy(input->filter.dst_addr, match.key->dst);
 		}
 
-		if (!is_zero_ether_addr(mask->src)) {
-			if (!is_broadcast_ether_addr(mask->src)) {
+		if (!is_zero_ether_addr(match.mask->src)) {
+			if (!is_broadcast_ether_addr(match.mask->src)) {
 				NL_SET_ERR_MSG_MOD(extack, "Only full masks are supported for source MAC address");
 				return -EINVAL;
 			}
 
 			input->filter.match_flags |=
 				IGB_FILTER_FLAG_SRC_MAC_ADDR;
-			ether_addr_copy(input->filter.src_addr, key->src);
+			ether_addr_copy(input->filter.src_addr, match.key->src);
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key, *mask;
-
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_BASIC,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_BASIC,
-						 f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		if (mask->n_proto) {
-			if (mask->n_proto != ETHER_TYPE_FULL_MASK) {
+		flow_rule_match_basic(rule, &match);
+		if (match.mask->n_proto) {
+			if (match.mask->n_proto != ETHER_TYPE_FULL_MASK) {
 				NL_SET_ERR_MSG_MOD(extack, "Only full mask is supported for EtherType filter");
 				return -EINVAL;
 			}
 
 			input->filter.match_flags |= IGB_FILTER_FLAG_ETHER_TYPE;
-			input->filter.etype = key->n_proto;
+			input->filter.etype = match.key->n_proto;
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key, *mask;
-
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_VLAN,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_VLAN,
-						 f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 
-		if (mask->vlan_priority) {
-			if (mask->vlan_priority != VLAN_PRIO_FULL_MASK) {
+		flow_rule_match_vlan(rule, &match);
+		if (match.mask->vlan_priority) {
+			if (match.mask->vlan_priority != VLAN_PRIO_FULL_MASK) {
 				NL_SET_ERR_MSG_MOD(extack, "Only full mask is supported for VLAN priority");
 				return -EINVAL;
 			}
 
 			input->filter.match_flags |= IGB_FILTER_FLAG_VLAN_TCI;
-			input->filter.vlan_tci = key->vlan_priority;
+			input->filter.vlan_tci = match.key->vlan_priority;
 		}
 	}
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 608025ca5c04..6a22f7f22890 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1201,23 +1201,19 @@ static void parse_vxlan_attr(struct mlx5_flow_spec *spec,
 				    misc_parameters);
 	void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
 				    misc_parameters);
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
 
 	MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ip_protocol);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP);
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
-		struct flow_dissector_key_keyid *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_KEYID,
-						  f->key);
-		struct flow_dissector_key_keyid *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_KEYID,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
+
+		flow_rule_match_enc_keyid(rule, &match);
 		MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni,
-			 be32_to_cpu(mask->keyid));
+			 be32_to_cpu(match.mask->keyid));
 		MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,
-			 be32_to_cpu(key->keyid));
+			 be32_to_cpu(match.key->keyid));
 	}
 }
 
@@ -1230,46 +1226,41 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 				       outer_headers);
 	void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
 				       outer_headers);
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_match_control enc_control;
+
+	flow_rule_match_enc_control(rule, &enc_control);
+
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
+		struct flow_match_ports match;
 
-	struct flow_dissector_key_control *enc_control =
-		skb_flow_dissector_target(f->dissector,
-					  FLOW_DISSECTOR_KEY_ENC_CONTROL,
-					  f->key);
-
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_PORTS,
-						  f->key);
-		struct flow_dissector_key_ports *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_PORTS,
-						  f->mask);
+		flow_rule_match_enc_ports(rule, &match);
 
 		/* Full udp dst port must be given */
-		if (memchr_inv(&mask->dst, 0xff, sizeof(mask->dst)))
+		if (memchr_inv(&match.mask->dst, 0xff, sizeof(match.mask->dst)))
 			goto vxlan_match_offload_err;
 
-		if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(key->dst)) &&
+		if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(match.key->dst)) &&
 		    MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap))
 			parse_vxlan_attr(spec, f);
 		else {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "port isn't an offloaded vxlan udp dport");
 			netdev_warn(priv->netdev,
-				    "%d isn't an offloaded vxlan udp dport\n", be16_to_cpu(key->dst));
+				    "%d isn't an offloaded vxlan udp dport\n",
+				    be16_to_cpu(match.key->dst));
 			return -EOPNOTSUPP;
 		}
 
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-			 udp_dport, ntohs(mask->dst));
+			 udp_dport, ntohs(match.mask->dst));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-			 udp_dport, ntohs(key->dst));
+			 udp_dport, ntohs(match.key->dst));
 
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-			 udp_sport, ntohs(mask->src));
+			 udp_sport, ntohs(match.mask->src));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-			 udp_sport, ntohs(key->src));
+			 udp_sport, ntohs(match.key->src));
 	} else { /* udp dst port must be given */
 vxlan_match_offload_err:
 		NL_SET_ERR_MSG_MOD(extack,
@@ -1279,79 +1270,68 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
 		return -EOPNOTSUPP;
 	}
 
-	if (enc_control->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
-						  f->mask);
+	if (enc_control.key->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+		struct flow_match_ipv4_addrs match;
+
+		flow_rule_match_enc_ipv4_addrs(rule, &match);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
 			 src_ipv4_src_ipv6.ipv4_layout.ipv4,
-			 ntohl(mask->src));
+			 ntohl(match.mask->src));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 			 src_ipv4_src_ipv6.ipv4_layout.ipv4,
-			 ntohl(key->src));
+			 ntohl(match.key->src));
 
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c,
 			 dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
-			 ntohl(mask->dst));
+			 ntohl(match.mask->dst));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 			 dst_ipv4_dst_ipv6.ipv4_layout.ipv4,
-			 ntohl(key->dst));
+			 ntohl(match.key->dst));
 
 		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
-	} else if (enc_control->addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS,
-						  f->mask);
+	} else if (enc_control.key->addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
+		struct flow_match_ipv6_addrs match;
 
+		flow_rule_match_enc_ipv6_addrs(rule, &match);
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
-		       &mask->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+		       &match.mask->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
-		       &key->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+		       &match.key->src, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
 
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
-		       &mask->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+		       &match.mask->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
-		       &key->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
+		       &match.key->dst, MLX5_FLD_SZ_BYTES(ipv6_layout, ipv6));
 
 		MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IPV6);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_IP)) {
-		struct flow_dissector_key_ip *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IP,
-						  f->key);
-		struct flow_dissector_key_ip *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IP,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
+		struct flow_match_ip match;
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn, mask->tos & 0x3);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, key->tos & 0x3);
+		flow_rule_match_enc_ip(rule, &match);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn,
+			 match.mask->tos & 0x3);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn,
+			 match.key->tos & 0x3);
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_dscp, mask->tos >> 2);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, key->tos  >> 2);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_dscp,
+			 match.mask->tos >> 2);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp,
+			 match.key->tos >> 2);
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ttl_hoplimit, mask->ttl);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ttl_hoplimit, key->ttl);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ttl_hoplimit,
+			 match.mask->ttl);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ttl_hoplimit,
+			 match.key->ttl);
 
-		if (mask->ttl &&
+		if (match.mask->ttl &&
 		    !MLX5_CAP_ESW_FLOWTABLE_FDB
 			(priv->mdev,
 			 ft_field_support.outer_ipv4_ttl)) {
@@ -1391,12 +1371,14 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 				    misc_parameters);
 	void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
 				    misc_parameters);
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	u16 addr_type = 0;
 	u8 ip_proto = 0;
 
 	*match_level = MLX5_MATCH_NONE;
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
@@ -1415,20 +1397,18 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 	      BIT(FLOW_DISSECTOR_KEY_ENC_IP))) {
 		NL_SET_ERR_MSG_MOD(extack, "Unsupported key");
 		netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n",
-			    f->dissector->used_keys);
+			    dissector->used_keys);
 		return -EOPNOTSUPP;
 	}
 
-	if ((dissector_uses_key(f->dissector,
-				FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) ||
-	     dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
-	     dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS)) &&
-	    dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_CONTROL,
-						  f->key);
-		switch (key->addr_type) {
+	if ((flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) ||
+	     flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
+	     flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) &&
+	    flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
+		struct flow_match_control match;
+
+		flow_rule_match_enc_control(rule, &match);
+		switch (match.key->addr_type) {
 		case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
 		case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
 			if (parse_tunnel_attr(priv, spec, f))
@@ -1447,45 +1427,37 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 					 inner_headers);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->key);
-		struct flow_dissector_key_eth_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
 
+		flow_rule_match_eth_addrs(rule, &match);
 		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 					     dmac_47_16),
-				mask->dst);
+				match.mask->dst);
 		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 					     dmac_47_16),
-				key->dst);
+				match.key->dst);
 
 		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 					     smac_47_16),
-				mask->src);
+				match.mask->src);
 		ether_addr_copy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 					     smac_47_16),
-				key->src);
+				match.key->src);
 
-		if (!is_zero_ether_addr(mask->src) || !is_zero_ether_addr(mask->dst))
+		if (!is_zero_ether_addr(match.mask->src) ||
+		    !is_zero_ether_addr(match.mask->dst))
 			*match_level = MLX5_MATCH_L2;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->key);
-		struct flow_dissector_key_vlan *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->mask);
-		if (mask->vlan_id || mask->vlan_priority || mask->vlan_tpid) {
-			if (key->vlan_tpid == htons(ETH_P_8021AD)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
+
+		flow_rule_match_vlan(rule, &match);
+		if (match.mask->vlan_id ||
+		    match.mask->vlan_priority ||
+		    match.mask->vlan_tpid) {
+			if (match.key->vlan_tpid == htons(ETH_P_8021AD)) {
 				MLX5_SET(fte_match_set_lyr_2_4, headers_c,
 					 svlan_tag, 1);
 				MLX5_SET(fte_match_set_lyr_2_4, headers_v,
@@ -1497,11 +1469,15 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 					 cvlan_tag, 1);
 			}
 
-			MLX5_SET(fte_match_set_lyr_2_4, headers_c, first_vid, mask->vlan_id);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, key->vlan_id);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c, first_vid,
+				 match.mask->vlan_id);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid,
+				 match.key->vlan_id);
 
-			MLX5_SET(fte_match_set_lyr_2_4, headers_c, first_prio, mask->vlan_priority);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_prio, key->vlan_priority);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_c, first_prio,
+				 match.mask->vlan_priority);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_prio,
+				 match.key->vlan_priority);
 
 			*match_level = MLX5_MATCH_L2;
 		}
@@ -1510,17 +1486,14 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CVLAN)) {
-		struct flow_dissector_key_vlan *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CVLAN,
-						  f->key);
-		struct flow_dissector_key_vlan *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CVLAN,
-						  f->mask);
-		if (mask->vlan_id || mask->vlan_priority || mask->vlan_tpid) {
-			if (key->vlan_tpid == htons(ETH_P_8021AD)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CVLAN)) {
+		struct flow_match_vlan match;
+
+		flow_rule_match_vlan(rule, &match);
+		if (match.mask->vlan_id ||
+		    match.mask->vlan_priority ||
+		    match.mask->vlan_tpid) {
+			if (match.key->vlan_tpid == htons(ETH_P_8021AD)) {
 				MLX5_SET(fte_match_set_misc, misc_c,
 					 outer_second_svlan_tag, 1);
 				MLX5_SET(fte_match_set_misc, misc_v,
@@ -1533,59 +1506,48 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			}
 
 			MLX5_SET(fte_match_set_misc, misc_c, outer_second_vid,
-				 mask->vlan_id);
+				 match.mask->vlan_id);
 			MLX5_SET(fte_match_set_misc, misc_v, outer_second_vid,
-				 key->vlan_id);
+				 match.key->vlan_id);
 			MLX5_SET(fte_match_set_misc, misc_c, outer_second_prio,
-				 mask->vlan_priority);
+				 match.mask->vlan_priority);
 			MLX5_SET(fte_match_set_misc, misc_v, outer_second_prio,
-				 key->vlan_priority);
+				 match.key->vlan_priority);
 
 			*match_level = MLX5_MATCH_L2;
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->key);
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
+
+		flow_rule_match_basic(rule, &match);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ethertype,
-			 ntohs(mask->n_proto));
+			 ntohs(match.mask->n_proto));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
-			 ntohs(key->n_proto));
+			 ntohs(match.key->n_proto));
 
-		if (mask->n_proto)
+		if (match.mask->n_proto)
 			*match_level = MLX5_MATCH_L2;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  f->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
 
-		struct flow_dissector_key_control *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  f->mask);
-		addr_type = key->addr_type;
+		flow_rule_match_control(rule, &match);
+		addr_type = match.key->addr_type;
 
 		/* the HW doesn't support frag first/later */
-		if (mask->flags & FLOW_DIS_FIRST_FRAG)
+		if (match.mask->flags & FLOW_DIS_FIRST_FRAG)
 			return -EOPNOTSUPP;
 
-		if (mask->flags & FLOW_DIS_IS_FRAGMENT) {
+		if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) {
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1);
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
-				 key->flags & FLOW_DIS_IS_FRAGMENT);
+				 match.key->flags & FLOW_DIS_IS_FRAGMENT);
 
 			/* the HW doesn't need L3 inline to match on frag=no */
-			if (!(key->flags & FLOW_DIS_IS_FRAGMENT))
+			if (!(match.key->flags & FLOW_DIS_IS_FRAGMENT))
 				*match_level = MLX5_INLINE_MODE_L2;
 	/* ***  L2 attributes parsing up to here *** */
 			else
@@ -1593,102 +1555,85 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 		}
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->key);
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->mask);
-		ip_proto = key->ip_proto;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
+
+		flow_rule_match_basic(rule, &match);
+		ip_proto = match.key->ip_proto;
 
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_protocol,
-			 mask->ip_proto);
+			 match.mask->ip_proto);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
-			 key->ip_proto);
+			 match.key->ip_proto);
 
-		if (mask->ip_proto)
+		if (match.mask->ip_proto)
 			*match_level = MLX5_MATCH_L3;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
-		struct flow_dissector_key_ipv4_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv4_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						  f->mask);
+		struct flow_match_ipv4_addrs match;
 
+		flow_rule_match_ipv4_addrs(rule, &match);
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    src_ipv4_src_ipv6.ipv4_layout.ipv4),
-		       &mask->src, sizeof(mask->src));
+		       &match.mask->src, sizeof(match.mask->src));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    src_ipv4_src_ipv6.ipv4_layout.ipv4),
-		       &key->src, sizeof(key->src));
+		       &match.key->src, sizeof(match.key->src));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
-		       &mask->dst, sizeof(mask->dst));
+		       &match.mask->dst, sizeof(match.mask->dst));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
-		       &key->dst, sizeof(key->dst));
+		       &match.key->dst, sizeof(match.key->dst));
 
-		if (mask->src || mask->dst)
+		if (match.mask->src || match.mask->dst)
 			*match_level = MLX5_MATCH_L3;
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
-		struct flow_dissector_key_ipv6_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->key);
-		struct flow_dissector_key_ipv6_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						  f->mask);
+		struct flow_match_ipv6_addrs match;
 
+		flow_rule_match_ipv6_addrs(rule, &match);
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
-		       &mask->src, sizeof(mask->src));
+		       &match.mask->src, sizeof(match.mask->src));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    src_ipv4_src_ipv6.ipv6_layout.ipv6),
-		       &key->src, sizeof(key->src));
+		       &match.key->src, sizeof(match.key->src));
 
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_c,
 				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
-		       &mask->dst, sizeof(mask->dst));
+		       &match.mask->dst, sizeof(match.mask->dst));
 		memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
 				    dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
-		       &key->dst, sizeof(key->dst));
+		       &match.key->dst, sizeof(match.key->dst));
 
-		if (ipv6_addr_type(&mask->src) != IPV6_ADDR_ANY ||
-		    ipv6_addr_type(&mask->dst) != IPV6_ADDR_ANY)
+		if (ipv6_addr_type(&match.mask->src) != IPV6_ADDR_ANY ||
+		    ipv6_addr_type(&match.mask->dst) != IPV6_ADDR_ANY)
 			*match_level = MLX5_MATCH_L3;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IP)) {
-		struct flow_dissector_key_ip *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IP,
-						  f->key);
-		struct flow_dissector_key_ip *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_IP,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
+		struct flow_match_ip match;
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn, mask->tos & 0x3);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, key->tos & 0x3);
+		flow_rule_match_ip(rule, &match);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_ecn,
+			 match.mask->tos & 0x3);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn,
+			 match.key->tos & 0x3);
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_dscp, mask->tos >> 2);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, key->tos  >> 2);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ip_dscp,
+			 match.mask->tos >> 2);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp,
+			 match.key->tos >> 2);
 
-		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ttl_hoplimit, mask->ttl);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ttl_hoplimit, key->ttl);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_c, ttl_hoplimit,
+			 match.mask->ttl);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ttl_hoplimit,
+			 match.key->ttl);
 
-		if (mask->ttl &&
+		if (match.mask->ttl &&
 		    !MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
 						ft_field_support.outer_ipv4_ttl)) {
 			NL_SET_ERR_MSG_MOD(extack,
@@ -1696,44 +1641,39 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			return -EOPNOTSUPP;
 		}
 
-		if (mask->tos || mask->ttl)
+		if (match.mask->tos || match.mask->ttl)
 			*match_level = MLX5_MATCH_L3;
 	}
 
 	/* ***  L3 attributes parsing up to here *** */
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->key);
-		struct flow_dissector_key_ports *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_PORTS,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
+
+		flow_rule_match_ports(rule, &match);
 		switch (ip_proto) {
 		case IPPROTO_TCP:
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-				 tcp_sport, ntohs(mask->src));
+				 tcp_sport, ntohs(match.mask->src));
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 tcp_sport, ntohs(key->src));
+				 tcp_sport, ntohs(match.key->src));
 
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-				 tcp_dport, ntohs(mask->dst));
+				 tcp_dport, ntohs(match.mask->dst));
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 tcp_dport, ntohs(key->dst));
+				 tcp_dport, ntohs(match.key->dst));
 			break;
 
 		case IPPROTO_UDP:
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-				 udp_sport, ntohs(mask->src));
+				 udp_sport, ntohs(match.mask->src));
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 udp_sport, ntohs(key->src));
+				 udp_sport, ntohs(match.key->src));
 
 			MLX5_SET(fte_match_set_lyr_2_4, headers_c,
-				 udp_dport, ntohs(mask->dst));
+				 udp_dport, ntohs(match.mask->dst));
 			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 udp_dport, ntohs(key->dst));
+				 udp_dport, ntohs(match.key->dst));
 			break;
 		default:
 			NL_SET_ERR_MSG_MOD(extack,
@@ -1743,26 +1683,20 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
 			return -EINVAL;
 		}
 
-		if (mask->src || mask->dst)
+		if (match.mask->src || match.mask->dst)
 			*match_level = MLX5_MATCH_L4;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_TCP)) {
-		struct flow_dissector_key_tcp *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_TCP,
-						  f->key);
-		struct flow_dissector_key_tcp *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_TCP,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_TCP)) {
+		struct flow_match_tcp match;
 
+		flow_rule_match_tcp(rule, &match);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_c, tcp_flags,
-			 ntohs(mask->flags));
+			 ntohs(match.mask->flags));
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags,
-			 ntohs(key->flags));
+			 ntohs(match.key->flags));
 
-		if (mask->flags)
+		if (match.mask->flags)
 			*match_level = MLX5_MATCH_L4;
 	}
 
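Every en_tc.c hunk above follows the same mechanical pattern: the pair
of skb_flow_dissector_target() calls per key collapses into a single
flow_rule_match_*() call that hands back a key/mask pointer pair. As a
rough sketch of what patch 01/12 provides (exact bodies may differ), a
match structure and its helper look like:

	struct flow_match_ports {
		struct flow_dissector_key_ports *key, *mask;
	};

	void flow_rule_match_ports(const struct flow_rule *rule,
				   struct flow_match_ports *out)
	{
		/* Resolve the same dissector target twice, once against
		 * the key blob and once against the mask blob.
		 */
		out->key = skb_flow_dissector_target(rule->match.dissector,
						     FLOW_DISSECTOR_KEY_PORTS,
						     rule->match.key);
		out->mask = skb_flow_dissector_target(rule->match.dissector,
						      FLOW_DISSECTOR_KEY_PORTS,
						      rule->match.mask);
	}
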
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 8d211972c5e9..193a6f9acf79 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -113,59 +113,49 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 static void mlxsw_sp_flower_parse_ipv4(struct mlxsw_sp_acl_rule_info *rulei,
 				       struct tc_cls_flower_offload *f)
 {
-	struct flow_dissector_key_ipv4_addrs *key =
-		skb_flow_dissector_target(f->dissector,
-					  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-					  f->key);
-	struct flow_dissector_key_ipv4_addrs *mask =
-		skb_flow_dissector_target(f->dissector,
-					  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-					  f->mask);
+	struct flow_match_ipv4_addrs match;
+
+	flow_rule_match_ipv4_addrs(&f->rule, &match);
 
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_0_31,
-				       (char *) &key->src,
-				       (char *) &mask->src, 4);
+				       (char *) &match.key->src,
+				       (char *) &match.mask->src, 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_0_31,
-				       (char *) &key->dst,
-				       (char *) &mask->dst, 4);
+				       (char *) &match.key->dst,
+				       (char *) &match.mask->dst, 4);
 }
 
 static void mlxsw_sp_flower_parse_ipv6(struct mlxsw_sp_acl_rule_info *rulei,
 				       struct tc_cls_flower_offload *f)
 {
-	struct flow_dissector_key_ipv6_addrs *key =
-		skb_flow_dissector_target(f->dissector,
-					  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-					  f->key);
-	struct flow_dissector_key_ipv6_addrs *mask =
-		skb_flow_dissector_target(f->dissector,
-					  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-					  f->mask);
+	struct flow_match_ipv6_addrs match;
+
+	flow_rule_match_ipv6_addrs(&f->rule, &match);
 
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_96_127,
-				       &key->src.s6_addr[0x0],
-				       &mask->src.s6_addr[0x0], 4);
+				       &match.key->src.s6_addr[0x0],
+				       &match.mask->src.s6_addr[0x0], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_64_95,
-				       &key->src.s6_addr[0x4],
-				       &mask->src.s6_addr[0x4], 4);
+				       &match.key->src.s6_addr[0x4],
+				       &match.mask->src.s6_addr[0x4], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_32_63,
-				       &key->src.s6_addr[0x8],
-				       &mask->src.s6_addr[0x8], 4);
+				       &match.key->src.s6_addr[0x8],
+				       &match.mask->src.s6_addr[0x8], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_0_31,
-				       &key->src.s6_addr[0xC],
-				       &mask->src.s6_addr[0xC], 4);
+				       &match.key->src.s6_addr[0xC],
+				       &match.mask->src.s6_addr[0xC], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_96_127,
-				       &key->dst.s6_addr[0x0],
-				       &mask->dst.s6_addr[0x0], 4);
+				       &match.key->dst.s6_addr[0x0],
+				       &match.mask->dst.s6_addr[0x0], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_64_95,
-				       &key->dst.s6_addr[0x4],
-				       &mask->dst.s6_addr[0x4], 4);
+				       &match.key->dst.s6_addr[0x4],
+				       &match.mask->dst.s6_addr[0x4], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_32_63,
-				       &key->dst.s6_addr[0x8],
-				       &mask->dst.s6_addr[0x8], 4);
+				       &match.key->dst.s6_addr[0x8],
+				       &match.mask->dst.s6_addr[0x8], 4);
 	mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_DST_IP_0_31,
-				       &key->dst.s6_addr[0xC],
-				       &mask->dst.s6_addr[0xC], 4);
+				       &match.key->dst.s6_addr[0xC],
+				       &match.mask->dst.s6_addr[0xC], 4);
 }
 
 static int mlxsw_sp_flower_parse_ports(struct mlxsw_sp *mlxsw_sp,
@@ -173,9 +163,10 @@ static int mlxsw_sp_flower_parse_ports(struct mlxsw_sp *mlxsw_sp,
 				       struct tc_cls_flower_offload *f,
 				       u8 ip_proto)
 {
-	struct flow_dissector_key_ports *key, *mask;
+	const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_match_ports match;
 
-	if (!dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS))
+	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS))
 		return 0;
 
 	if (ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) {
@@ -184,16 +175,13 @@ static int mlxsw_sp_flower_parse_ports(struct mlxsw_sp *mlxsw_sp,
 		return -EINVAL;
 	}
 
-	key = skb_flow_dissector_target(f->dissector,
-					FLOW_DISSECTOR_KEY_PORTS,
-					f->key);
-	mask = skb_flow_dissector_target(f->dissector,
-					 FLOW_DISSECTOR_KEY_PORTS,
-					 f->mask);
+	flow_rule_match_ports(rule, &match);
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_DST_L4_PORT,
-				       ntohs(key->dst), ntohs(mask->dst));
+				       ntohs(match.key->dst),
+				       ntohs(match.mask->dst));
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_SRC_L4_PORT,
-				       ntohs(key->src), ntohs(mask->src));
+				       ntohs(match.key->src),
+				       ntohs(match.mask->src));
 	return 0;
 }
 
@@ -202,9 +190,10 @@ static int mlxsw_sp_flower_parse_tcp(struct mlxsw_sp *mlxsw_sp,
 				     struct tc_cls_flower_offload *f,
 				     u8 ip_proto)
 {
-	struct flow_dissector_key_tcp *key, *mask;
+	const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_match_tcp match;
 
-	if (!dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_TCP))
+	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_TCP))
 		return 0;
 
 	if (ip_proto != IPPROTO_TCP) {
@@ -213,14 +202,11 @@ static int mlxsw_sp_flower_parse_tcp(struct mlxsw_sp *mlxsw_sp,
 		return -EINVAL;
 	}
 
-	key = skb_flow_dissector_target(f->dissector,
-					FLOW_DISSECTOR_KEY_TCP,
-					f->key);
-	mask = skb_flow_dissector_target(f->dissector,
-					 FLOW_DISSECTOR_KEY_TCP,
-					 f->mask);
+	flow_rule_match_tcp(rule, &match);
+
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_TCP_FLAGS,
-				       ntohs(key->flags), ntohs(mask->flags));
+				       ntohs(match.key->flags),
+				       ntohs(match.mask->flags));
 	return 0;
 }
 
@@ -229,9 +215,10 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
 				    struct tc_cls_flower_offload *f,
 				    u16 n_proto)
 {
-	struct flow_dissector_key_ip *key, *mask;
+	const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_match_ip match;
 
-	if (!dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IP))
+	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP))
 		return 0;
 
 	if (n_proto != ETH_P_IP && n_proto != ETH_P_IPV6) {
@@ -240,20 +227,18 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
 		return -EINVAL;
 	}
 
-	key = skb_flow_dissector_target(f->dissector,
-					FLOW_DISSECTOR_KEY_IP,
-					f->key);
-	mask = skb_flow_dissector_target(f->dissector,
-					 FLOW_DISSECTOR_KEY_IP,
-					 f->mask);
+	flow_rule_match_ip(rule, &match);
+
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_IP_TTL_,
-				       key->ttl, mask->ttl);
+				       match.key->ttl, match.mask->ttl);
 
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_IP_ECN,
-				       key->tos & 0x3, mask->tos & 0x3);
+				       match.key->tos & 0x3,
+				       match.mask->tos & 0x3);
 
 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_IP_DSCP,
-				       key->tos >> 6, mask->tos >> 6);
+				       match.key->tos >> 6,
+				       match.mask->tos >> 6);
 
 	return 0;
 }
@@ -263,13 +248,15 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
 				 struct mlxsw_sp_acl_rule_info *rulei,
 				 struct tc_cls_flower_offload *f)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	u16 n_proto_mask = 0;
 	u16 n_proto_key = 0;
 	u16 addr_type = 0;
 	u8 ip_proto = 0;
 	int err;
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
@@ -286,25 +273,19 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
 
 	mlxsw_sp_acl_rulei_priority(rulei, f->common.prio);
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_CONTROL,
-						  f->key);
-		addr_type = key->addr_type;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
+
+		flow_rule_match_control(rule, &match);
+		addr_type = match.key->addr_type;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->key);
-		struct flow_dissector_key_basic *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  f->mask);
-		n_proto_key = ntohs(key->n_proto);
-		n_proto_mask = ntohs(mask->n_proto);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
+
+		flow_rule_match_basic(rule, &match);
+		n_proto_key = ntohs(match.key->n_proto);
+		n_proto_mask = ntohs(match.mask->n_proto);
 
 		if (n_proto_key == ETH_P_ALL) {
 			n_proto_key = 0;
@@ -314,60 +295,53 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
 					       MLXSW_AFK_ELEMENT_ETHERTYPE,
 					       n_proto_key, n_proto_mask);
 
-		ip_proto = key->ip_proto;
+		ip_proto = match.key->ip_proto;
 		mlxsw_sp_acl_rulei_keymask_u32(rulei,
 					       MLXSW_AFK_ELEMENT_IP_PROTO,
-					       key->ip_proto, mask->ip_proto);
+					       match.key->ip_proto,
+					       match.mask->ip_proto);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		struct flow_dissector_key_eth_addrs *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->key);
-		struct flow_dissector_key_eth_addrs *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
 
+		flow_rule_match_eth_addrs(rule, &match);
 		mlxsw_sp_acl_rulei_keymask_buf(rulei,
 					       MLXSW_AFK_ELEMENT_DMAC_32_47,
-					       key->dst, mask->dst, 2);
+					       match.key->dst,
+					       match.mask->dst, 2);
 		mlxsw_sp_acl_rulei_keymask_buf(rulei,
 					       MLXSW_AFK_ELEMENT_DMAC_0_31,
-					       key->dst + 2, mask->dst + 2, 4);
+					       match.key->dst + 2,
+					       match.mask->dst + 2, 4);
 		mlxsw_sp_acl_rulei_keymask_buf(rulei,
 					       MLXSW_AFK_ELEMENT_SMAC_32_47,
-					       key->src, mask->src, 2);
+					       match.key->src,
+					       match.mask->src, 2);
 		mlxsw_sp_acl_rulei_keymask_buf(rulei,
 					       MLXSW_AFK_ELEMENT_SMAC_0_31,
-					       key->src + 2, mask->src + 2, 4);
+					       match.key->src + 2,
+					       match.mask->src + 2, 4);
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *key =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->key);
-		struct flow_dissector_key_vlan *mask =
-			skb_flow_dissector_target(f->dissector,
-						  FLOW_DISSECTOR_KEY_VLAN,
-						  f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
 
+		flow_rule_match_vlan(rule, &match);
 		if (mlxsw_sp_acl_block_is_egress_bound(block)) {
 			NL_SET_ERR_MSG_MOD(f->common.extack, "vlan_id key is not supported on egress");
 			return -EOPNOTSUPP;
 		}
-		if (mask->vlan_id != 0)
+		if (match.mask->vlan_id != 0)
 			mlxsw_sp_acl_rulei_keymask_u32(rulei,
 						       MLXSW_AFK_ELEMENT_VID,
-						       key->vlan_id,
-						       mask->vlan_id);
-		if (mask->vlan_priority != 0)
+						       match.key->vlan_id,
+						       match.mask->vlan_id);
+		if (match.mask->vlan_priority != 0)
 			mlxsw_sp_acl_rulei_keymask_u32(rulei,
 						       MLXSW_AFK_ELEMENT_PCP,
-						       key->vlan_priority,
-						       mask->vlan_priority);
+						       match.key->vlan_priority,
+						       match.mask->vlan_priority);
 	}
 
 	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS)
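
The mlxsw conversion likewise replaces every
dissector_uses_key(f->dissector, ...) test with
flow_rule_match_key(rule, ...). Assuming the patch 01/12 definitions,
the new test is a thin wrapper over the old one, reached through the
rule's embedded match:

	static inline bool flow_rule_match_key(const struct flow_rule *rule,
					       enum flow_dissector_key_id key)
	{
		return dissector_uses_key(rule->match.dissector, key);
	}
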
diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
index 8d54b36afee8..43192640bdd1 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
@@ -587,6 +587,7 @@ static int
 nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 	     char *nfp_action, int *a_len, u32 *csum_updated)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 	struct nfp_fl_set_ipv6_addr set_ip6_dst, set_ip6_src;
 	struct nfp_fl_set_ipv6_tc_hl_fl set_ip6_tc_hl_fl;
 	struct nfp_fl_set_ip4_ttl_tos set_ip_ttl_tos;
@@ -643,13 +644,11 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 			return err;
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *basic;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		basic = skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  flow->key);
-		ip_proto = basic->ip_proto;
+		flow_rule_match_basic(rule, &match);
+		ip_proto = match.key->ip_proto;
 	}
 
 	if (set_eth.head.len_lw) {
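
tc_cls_flower_offload_flow_rule() is the accessor used throughout for
the flow_rule embedded in struct tc_cls_flower_offload; consistent with
the direct &f->rule uses in the mlxsw hunks above, it can be assumed to
be a trivial inline along these lines (sketch, not verbatim from the
series):

	static inline struct flow_rule *
	tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *f)
	{
		return &f->rule;
	}
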
diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
index cdf75595f627..f02cda3bf718 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
@@ -8,31 +8,41 @@
 #include "main.h"
 
 static void
-nfp_flower_compile_meta_tci(struct nfp_flower_meta_tci *frame,
-			    struct tc_cls_flower_offload *flow, u8 key_type,
-			    bool mask_version)
+nfp_flower_compile_meta_tci(struct nfp_flower_meta_tci *ext,
+			    struct nfp_flower_meta_tci *msk,
+			    struct tc_cls_flower_offload *flow, u8 key_type)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_vlan *flow_vlan;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 	u16 tmp_tci;
 
-	memset(frame, 0, sizeof(struct nfp_flower_meta_tci));
+	memset(ext, 0, sizeof(struct nfp_flower_meta_tci));
+	memset(msk, 0, sizeof(struct nfp_flower_meta_tci));
+
 	/* Populate the metadata frame. */
-	frame->nfp_flow_key_layer = key_type;
-	frame->mask_id = ~0;
+	ext->nfp_flow_key_layer = key_type;
+	ext->mask_id = ~0;
+
+	msk->nfp_flow_key_layer = key_type;
+	msk->mask_id = ~0;
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		flow_vlan = skb_flow_dissector_target(flow->dissector,
-						      FLOW_DISSECTOR_KEY_VLAN,
-						      target);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan match;
+
+		flow_rule_match_vlan(rule, &match);
 		/* Populate the tci field. */
-		if (flow_vlan->vlan_id || flow_vlan->vlan_priority) {
+		if (match.key->vlan_id || match.key->vlan_priority) {
+			tmp_tci = FIELD_PREP(NFP_FLOWER_MASK_VLAN_PRIO,
+					     match.key->vlan_priority) |
+				  FIELD_PREP(NFP_FLOWER_MASK_VLAN_VID,
+					     match.key->vlan_id) |
+				  NFP_FLOWER_MASK_VLAN_CFI;
+			ext->tci = cpu_to_be16(tmp_tci);
 			tmp_tci = FIELD_PREP(NFP_FLOWER_MASK_VLAN_PRIO,
-					     flow_vlan->vlan_priority) |
+					     match.mask->vlan_priority) |
 				  FIELD_PREP(NFP_FLOWER_MASK_VLAN_VID,
-					     flow_vlan->vlan_id) |
+					     match.mask->vlan_id) |
 				  NFP_FLOWER_MASK_VLAN_CFI;
-			frame->tci = cpu_to_be16(tmp_tci);
+			msk->tci = cpu_to_be16(tmp_tci);
 		}
 	}
 }
@@ -64,231 +74,244 @@ nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port,
 }
 
 static void
-nfp_flower_compile_mac(struct nfp_flower_mac_mpls *frame,
-		       struct tc_cls_flower_offload *flow,
-		       bool mask_version)
+nfp_flower_compile_mac(struct nfp_flower_mac_mpls *ext,
+		       struct nfp_flower_mac_mpls *msk,
+		       struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_eth_addrs *addr;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 
-	memset(frame, 0, sizeof(struct nfp_flower_mac_mpls));
+	memset(ext, 0, sizeof(struct nfp_flower_mac_mpls));
+	memset(msk, 0, sizeof(struct nfp_flower_mac_mpls));
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
-		addr = skb_flow_dissector_target(flow->dissector,
-						 FLOW_DISSECTOR_KEY_ETH_ADDRS,
-						 target);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_match_eth_addrs match;
+
+		flow_rule_match_eth_addrs(rule, &match);
 		/* Populate mac frame. */
-		ether_addr_copy(frame->mac_dst, &addr->dst[0]);
-		ether_addr_copy(frame->mac_src, &addr->src[0]);
+		ether_addr_copy(ext->mac_dst, &match.key->dst[0]);
+		ether_addr_copy(ext->mac_src, &match.key->src[0]);
+		ether_addr_copy(msk->mac_dst, &match.mask->dst[0]);
+		ether_addr_copy(msk->mac_src, &match.mask->src[0]);
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_MPLS)) {
-		struct flow_dissector_key_mpls *mpls;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS)) {
+		struct flow_match_mpls match;
 		u32 t_mpls;
 
-		mpls = skb_flow_dissector_target(flow->dissector,
-						 FLOW_DISSECTOR_KEY_MPLS,
-						 target);
-
-		t_mpls = FIELD_PREP(NFP_FLOWER_MASK_MPLS_LB, mpls->mpls_label) |
-			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_TC, mpls->mpls_tc) |
-			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_BOS, mpls->mpls_bos) |
+		flow_rule_match_mpls(rule, &match);
+		t_mpls = FIELD_PREP(NFP_FLOWER_MASK_MPLS_LB, match.key->mpls_label) |
+			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_TC, match.key->mpls_tc) |
+			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_BOS, match.key->mpls_bos) |
 			 NFP_FLOWER_MASK_MPLS_Q;
-
-		frame->mpls_lse = cpu_to_be32(t_mpls);
-	} else if (dissector_uses_key(flow->dissector,
-				      FLOW_DISSECTOR_KEY_BASIC)) {
+		ext->mpls_lse = cpu_to_be32(t_mpls);
+		t_mpls = FIELD_PREP(NFP_FLOWER_MASK_MPLS_LB, match.mask->mpls_label) |
+			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_TC, match.mask->mpls_tc) |
+			 FIELD_PREP(NFP_FLOWER_MASK_MPLS_BOS, match.mask->mpls_bos) |
+			 NFP_FLOWER_MASK_MPLS_Q;
+		msk->mpls_lse = cpu_to_be32(t_mpls);
+	} else if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
 		/* Check for mpls ether type and set NFP_FLOWER_MASK_MPLS_Q
 		 * bit, which indicates an mpls ether type but without any
 		 * mpls fields.
 		 */
-		struct flow_dissector_key_basic *key_basic;
-
-		key_basic = skb_flow_dissector_target(flow->dissector,
-						      FLOW_DISSECTOR_KEY_BASIC,
-						      flow->key);
-		if (key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_UC) ||
-		    key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_MC))
-			frame->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q);
+		struct flow_match_basic match;
+
+		flow_rule_match_basic(rule, &match);
+		if (match.key->n_proto == cpu_to_be16(ETH_P_MPLS_UC) ||
+		    match.key->n_proto == cpu_to_be16(ETH_P_MPLS_MC)) {
+			ext->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q);
+			msk->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q);
+		}
 	}
 }
 
 static void
-nfp_flower_compile_tport(struct nfp_flower_tp_ports *frame,
-			 struct tc_cls_flower_offload *flow,
-			 bool mask_version)
+nfp_flower_compile_tport(struct nfp_flower_tp_ports *ext,
+			 struct nfp_flower_tp_ports *msk,
+			 struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_ports *tp;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 
-	memset(frame, 0, sizeof(struct nfp_flower_tp_ports));
+	memset(ext, 0, sizeof(struct nfp_flower_tp_ports));
+	memset(msk, 0, sizeof(struct nfp_flower_tp_ports));
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		tp = skb_flow_dissector_target(flow->dissector,
-					       FLOW_DISSECTOR_KEY_PORTS,
-					       target);
-		frame->port_src = tp->src;
-		frame->port_dst = tp->dst;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
+
+		flow_rule_match_ports(rule, &match);
+		ext->port_src = match.key->src;
+		ext->port_dst = match.key->dst;
+		msk->port_src = match.mask->src;
+		msk->port_dst = match.mask->dst;
 	}
 }
 
 static void
-nfp_flower_compile_ip_ext(struct nfp_flower_ip_ext *frame,
-			  struct tc_cls_flower_offload *flow,
-			  bool mask_version)
+nfp_flower_compile_ip_ext(struct nfp_flower_ip_ext *ext,
+			  struct nfp_flower_ip_ext *msk,
+			  struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *basic;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		basic = skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_BASIC,
-						  target);
-		frame->proto = basic->ip_proto;
+		flow_rule_match_basic(rule, &match);
+		ext->proto = match.key->ip_proto;
+		msk->proto = match.mask->ip_proto;
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_IP)) {
-		struct flow_dissector_key_ip *flow_ip;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
+		struct flow_match_ip match;
 
-		flow_ip = skb_flow_dissector_target(flow->dissector,
-						    FLOW_DISSECTOR_KEY_IP,
-						    target);
-		frame->tos = flow_ip->tos;
-		frame->ttl = flow_ip->ttl;
+		flow_rule_match_ip(rule, &match);
+		ext->tos = match.key->tos;
+		ext->ttl = match.key->ttl;
+		msk->tos = match.mask->tos;
+		msk->ttl = match.mask->ttl;
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_TCP)) {
-		struct flow_dissector_key_tcp *tcp;
-		u32 tcp_flags;
-
-		tcp = skb_flow_dissector_target(flow->dissector,
-						FLOW_DISSECTOR_KEY_TCP, target);
-		tcp_flags = be16_to_cpu(tcp->flags);
-
-		if (tcp_flags & TCPHDR_FIN)
-			frame->flags |= NFP_FL_TCP_FLAG_FIN;
-		if (tcp_flags & TCPHDR_SYN)
-			frame->flags |= NFP_FL_TCP_FLAG_SYN;
-		if (tcp_flags & TCPHDR_RST)
-			frame->flags |= NFP_FL_TCP_FLAG_RST;
-		if (tcp_flags & TCPHDR_PSH)
-			frame->flags |= NFP_FL_TCP_FLAG_PSH;
-		if (tcp_flags & TCPHDR_URG)
-			frame->flags |= NFP_FL_TCP_FLAG_URG;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_TCP)) {
+		struct flow_match_tcp match;
+		u16 tcp_flags;
+
+		flow_rule_match_tcp(rule, &match);
+		tcp_flags = be16_to_cpu(match.key->flags);
+
+		if (tcp_flags & TCPHDR_FIN) {
+			ext->flags |= NFP_FL_TCP_FLAG_FIN;
+			msk->flags |= NFP_FL_TCP_FLAG_FIN;
+		}
+		if (tcp_flags & TCPHDR_SYN) {
+			ext->flags |= NFP_FL_TCP_FLAG_SYN;
+			msk->flags |= NFP_FL_TCP_FLAG_SYN;
+		}
+		if (tcp_flags & TCPHDR_RST) {
+			ext->flags |= NFP_FL_TCP_FLAG_RST;
+			msk->flags |= NFP_FL_TCP_FLAG_RST;
+		}
+		if (tcp_flags & TCPHDR_PSH) {
+			ext->flags |= NFP_FL_TCP_FLAG_PSH;
+			msk->flags |= NFP_FL_TCP_FLAG_PSH;
+		}
+		if (tcp_flags & TCPHDR_URG) {
+			ext->flags |= NFP_FL_TCP_FLAG_URG;
+			msk->flags |= NFP_FL_TCP_FLAG_URG;
+		}
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control match;
 
-		key = skb_flow_dissector_target(flow->dissector,
-						FLOW_DISSECTOR_KEY_CONTROL,
-						target);
-		if (key->flags & FLOW_DIS_IS_FRAGMENT)
-			frame->flags |= NFP_FL_IP_FRAGMENTED;
-		if (key->flags & FLOW_DIS_FIRST_FRAG)
-			frame->flags |= NFP_FL_IP_FRAG_FIRST;
+		flow_rule_match_control(rule, &match);
+		if (match.key->flags & FLOW_DIS_IS_FRAGMENT) {
+			ext->flags |= NFP_FL_IP_FRAGMENTED;
+			msk->flags |= NFP_FL_IP_FRAGMENTED;
+		}
+		if (match.key->flags & FLOW_DIS_FIRST_FRAG) {
+			ext->flags |= NFP_FL_IP_FRAG_FIRST;
+			msk->flags |= NFP_FL_IP_FRAG_FIRST;
+		}
 	}
 }
 
 static void
-nfp_flower_compile_ipv4(struct nfp_flower_ipv4 *frame,
-			struct tc_cls_flower_offload *flow,
-			bool mask_version)
+nfp_flower_compile_ipv4(struct nfp_flower_ipv4 *ext,
+			struct nfp_flower_ipv4 *msk,
+			struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_ipv4_addrs *addr;
-
-	memset(frame, 0, sizeof(struct nfp_flower_ipv4));
-
-	if (dissector_uses_key(flow->dissector,
-			       FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
-		addr = skb_flow_dissector_target(flow->dissector,
-						 FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						 target);
-		frame->ipv4_src = addr->src;
-		frame->ipv4_dst = addr->dst;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+	struct flow_match_ipv4_addrs match;
+
+	memset(ext, 0, sizeof(struct nfp_flower_ipv4));
+	memset(msk, 0, sizeof(struct nfp_flower_ipv4));
+
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
+		flow_rule_match_ipv4_addrs(rule, &match);
+		ext->ipv4_src = match.key->src;
+		ext->ipv4_dst = match.key->dst;
+		msk->ipv4_src = match.mask->src;
+		msk->ipv4_dst = match.mask->dst;
 	}
 
-	nfp_flower_compile_ip_ext(&frame->ip_ext, flow, mask_version);
+	nfp_flower_compile_ip_ext(&ext->ip_ext, &msk->ip_ext, flow);
 }
 
 static void
-nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *frame,
-			struct tc_cls_flower_offload *flow,
-			bool mask_version)
+nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *ext,
+			struct nfp_flower_ipv6 *msk,
+			struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_ipv6_addrs *addr;
-
-	memset(frame, 0, sizeof(struct nfp_flower_ipv6));
-
-	if (dissector_uses_key(flow->dissector,
-			       FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
-		addr = skb_flow_dissector_target(flow->dissector,
-						 FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						 target);
-		frame->ipv6_src = addr->src;
-		frame->ipv6_dst = addr->dst;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+
+	memset(ext, 0, sizeof(struct nfp_flower_ipv6));
+	memset(msk, 0, sizeof(struct nfp_flower_ipv6));
+
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
+		struct flow_match_ipv6_addrs match;
+
+		flow_rule_match_ipv6_addrs(rule, &match);
+		ext->ipv6_src = match.key->src;
+		ext->ipv6_dst = match.key->dst;
+		msk->ipv6_src = match.mask->src;
+		msk->ipv6_dst = match.mask->dst;
 	}
 
-	nfp_flower_compile_ip_ext(&frame->ip_ext, flow, mask_version);
+	nfp_flower_compile_ip_ext(&ext->ip_ext, &msk->ip_ext, flow);
 }
 
 static int
-nfp_flower_compile_geneve_opt(void *key_buf, struct tc_cls_flower_offload *flow,
-			      bool mask_version)
+nfp_flower_compile_geneve_opt(void *ext, void *msk,
+			      struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_enc_opts *opts;
+	struct flow_match_enc_opts match;
 
-	opts = skb_flow_dissector_target(flow->dissector,
-					 FLOW_DISSECTOR_KEY_ENC_OPTS,
-					 target);
-	memcpy(key_buf, opts->data, opts->len);
+	flow_rule_match_enc_opts(&flow->rule, &match);
+	memcpy(ext, match.key->data, match.key->len);
+	memcpy(msk, match.mask->data, match.mask->len);
 
 	return 0;
 }
 
 static void
-nfp_flower_compile_ipv4_udp_tun(struct nfp_flower_ipv4_udp_tun *frame,
-				struct tc_cls_flower_offload *flow,
-				bool mask_version)
+nfp_flower_compile_ipv4_udp_tun(struct nfp_flower_ipv4_udp_tun *ext,
+				struct nfp_flower_ipv4_udp_tun *msk,
+				struct tc_cls_flower_offload *flow)
 {
-	struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
-	struct flow_dissector_key_ipv4_addrs *tun_ips;
-	struct flow_dissector_key_keyid *vni;
-	struct flow_dissector_key_ip *ip;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
 
-	memset(frame, 0, sizeof(struct nfp_flower_ipv4_udp_tun));
+	memset(ext, 0, sizeof(struct nfp_flower_ipv4_udp_tun));
+	memset(msk, 0, sizeof(struct nfp_flower_ipv4_udp_tun));
 
-	if (dissector_uses_key(flow->dissector,
-			       FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_match_enc_keyid match;
 		u32 temp_vni;
 
-		vni = skb_flow_dissector_target(flow->dissector,
-						FLOW_DISSECTOR_KEY_ENC_KEYID,
-						target);
-		temp_vni = be32_to_cpu(vni->keyid) << NFP_FL_TUN_VNI_OFFSET;
-		frame->tun_id = cpu_to_be32(temp_vni);
+		flow_rule_match_enc_keyid(rule, &match);
+		temp_vni = be32_to_cpu(match.key->keyid) << NFP_FL_TUN_VNI_OFFSET;
+		ext->tun_id = cpu_to_be32(temp_vni);
+		temp_vni = be32_to_cpu(match.mask->keyid) << NFP_FL_TUN_VNI_OFFSET;
+		msk->tun_id = cpu_to_be32(temp_vni);
 	}
 
-	if (dissector_uses_key(flow->dissector,
-			       FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
-		tun_ips =
-		   skb_flow_dissector_target(flow->dissector,
-					     FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
-					     target);
-		frame->ip_src = tun_ips->src;
-		frame->ip_dst = tun_ips->dst;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+		struct flow_match_ipv4_addrs match;
+
+		flow_rule_match_enc_ipv4_addrs(rule, &match);
+		ext->ip_src = match.key->src;
+		ext->ip_dst = match.key->dst;
+		msk->ip_src = match.mask->src;
+		msk->ip_dst = match.mask->dst;
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_ENC_IP)) {
-		ip = skb_flow_dissector_target(flow->dissector,
-					       FLOW_DISSECTOR_KEY_ENC_IP,
-					       target);
-		frame->tos = ip->tos;
-		frame->ttl = ip->ttl;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
+		struct flow_match_ip match;
+
+		flow_rule_match_enc_ip(rule, &match);
+		ext->tos = match.key->tos;
+		ext->ttl = match.key->ttl;
+		msk->tos = match.mask->tos;
+		msk->ttl = match.mask->ttl;
 	}
 }
 
@@ -313,12 +336,9 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
 	ext = nfp_flow->unmasked_data;
 	msk = nfp_flow->mask_data;
 
-	/* Populate Exact Metadata. */
 	nfp_flower_compile_meta_tci((struct nfp_flower_meta_tci *)ext,
-				    flow, key_ls->key_layer, false);
-	/* Populate Mask Metadata. */
-	nfp_flower_compile_meta_tci((struct nfp_flower_meta_tci *)msk,
-				    flow, key_ls->key_layer, true);
+				    (struct nfp_flower_meta_tci *)msk,
+				    flow, key_ls->key_layer);
 	ext += sizeof(struct nfp_flower_meta_tci);
 	msk += sizeof(struct nfp_flower_meta_tci);
 
@@ -348,45 +368,33 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
 	msk += sizeof(struct nfp_flower_in_port);
 
 	if (NFP_FLOWER_LAYER_MAC & key_ls->key_layer) {
-		/* Populate Exact MAC Data. */
 		nfp_flower_compile_mac((struct nfp_flower_mac_mpls *)ext,
-				       flow, false);
-		/* Populate Mask MAC Data. */
-		nfp_flower_compile_mac((struct nfp_flower_mac_mpls *)msk,
-				       flow, true);
+				       (struct nfp_flower_mac_mpls *)msk,
+				       flow);
 		ext += sizeof(struct nfp_flower_mac_mpls);
 		msk += sizeof(struct nfp_flower_mac_mpls);
 	}
 
 	if (NFP_FLOWER_LAYER_TP & key_ls->key_layer) {
-		/* Populate Exact TP Data. */
 		nfp_flower_compile_tport((struct nfp_flower_tp_ports *)ext,
-					 flow, false);
-		/* Populate Mask TP Data. */
-		nfp_flower_compile_tport((struct nfp_flower_tp_ports *)msk,
-					 flow, true);
+					 (struct nfp_flower_tp_ports *)msk,
+					 flow);
 		ext += sizeof(struct nfp_flower_tp_ports);
 		msk += sizeof(struct nfp_flower_tp_ports);
 	}
 
 	if (NFP_FLOWER_LAYER_IPV4 & key_ls->key_layer) {
-		/* Populate Exact IPv4 Data. */
 		nfp_flower_compile_ipv4((struct nfp_flower_ipv4 *)ext,
-					flow, false);
-		/* Populate Mask IPv4 Data. */
-		nfp_flower_compile_ipv4((struct nfp_flower_ipv4 *)msk,
-					flow, true);
+					(struct nfp_flower_ipv4 *)msk,
+					flow);
 		ext += sizeof(struct nfp_flower_ipv4);
 		msk += sizeof(struct nfp_flower_ipv4);
 	}
 
 	if (NFP_FLOWER_LAYER_IPV6 & key_ls->key_layer) {
-		/* Populate Exact IPv4 Data. */
 		nfp_flower_compile_ipv6((struct nfp_flower_ipv6 *)ext,
-					flow, false);
-		/* Populate Mask IPv4 Data. */
-		nfp_flower_compile_ipv6((struct nfp_flower_ipv6 *)msk,
-					flow, true);
+					(struct nfp_flower_ipv6 *)msk,
+					flow);
 		ext += sizeof(struct nfp_flower_ipv6);
 		msk += sizeof(struct nfp_flower_ipv6);
 	}
@@ -395,10 +403,7 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
 	    key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
 		__be32 tun_dst;
 
-		/* Populate Exact VXLAN Data. */
-		nfp_flower_compile_ipv4_udp_tun((void *)ext, flow, false);
-		/* Populate Mask VXLAN Data. */
-		nfp_flower_compile_ipv4_udp_tun((void *)msk, flow, true);
+		nfp_flower_compile_ipv4_udp_tun((void *)ext, (void *)msk, flow);
 		tun_dst = ((struct nfp_flower_ipv4_udp_tun *)ext)->ip_dst;
 		ext += sizeof(struct nfp_flower_ipv4_udp_tun);
 		msk += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -413,11 +418,7 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
 		nfp_tunnel_add_ipv4_off(app, tun_dst);
 
 		if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE_OP) {
-			err = nfp_flower_compile_geneve_opt(ext, flow, false);
-			if (err)
-				return err;
-
-			err = nfp_flower_compile_geneve_opt(msk, flow, true);
+			err = nfp_flower_compile_geneve_opt(ext, msk, flow);
 			if (err)
 				return err;
 		}
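
The match.c refactor removes the mask_version double call: each compile
helper now takes both output buffers and fills the exact and mask
halves in one pass from a single flow_match. Simplified shape of the
new calling convention, modelled on nfp_flower_compile_tport() above:

	struct flow_match_ports match;

	flow_rule_match_ports(rule, &match);
	ext->port_src = match.key->src;		/* exact half */
	msk->port_src = match.mask->src;	/* mask half */
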
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 545d94168874..708331234908 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -102,23 +102,22 @@ nfp_flower_xmit_flow(struct nfp_app *app, struct nfp_fl_payload *nfp_flow,
 
 static bool nfp_flower_check_higher_than_mac(struct tc_cls_flower_offload *f)
 {
-	return dissector_uses_key(f->dissector,
-				  FLOW_DISSECTOR_KEY_IPV4_ADDRS) ||
-		dissector_uses_key(f->dissector,
-				   FLOW_DISSECTOR_KEY_IPV6_ADDRS) ||
-		dissector_uses_key(f->dissector,
-				   FLOW_DISSECTOR_KEY_PORTS) ||
-		dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ICMP);
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+
+	return flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS) ||
+	       flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS) ||
+	       flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS) ||
+	       flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ICMP);
 }
 
 static int
-nfp_flower_calc_opt_layer(struct flow_dissector_key_enc_opts *enc_opts,
+nfp_flower_calc_opt_layer(struct flow_match_enc_opts *enc_opts,
 			  u32 *key_layer_two, int *key_size)
 {
-	if (enc_opts->len > NFP_FL_MAX_GENEVE_OPT_KEY)
+	if (enc_opts->key->len > NFP_FL_MAX_GENEVE_OPT_KEY)
 		return -EOPNOTSUPP;
 
-	if (enc_opts->len > 0) {
+	if (enc_opts->key->len > 0) {
 		*key_layer_two |= NFP_FLOWER_LAYER2_GENEVE_OP;
 		*key_size += sizeof(struct nfp_flower_geneve_options);
 	}
@@ -133,20 +132,21 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 				struct tc_cls_flower_offload *flow,
 				enum nfp_flower_tun_type *tun_type)
 {
-	struct flow_dissector_key_basic *mask_basic = NULL;
-	struct flow_dissector_key_basic *key_basic = NULL;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+	struct flow_dissector *dissector = rule->match.dissector;
+	struct flow_match_basic basic = { NULL, NULL};
 	struct nfp_flower_priv *priv = app->priv;
 	u32 key_layer_two;
 	u8 key_layer;
 	int key_size;
 	int err;
 
-	if (flow->dissector->used_keys & ~NFP_FLOWER_WHITELIST_DISSECTOR)
+	if (dissector->used_keys & ~NFP_FLOWER_WHITELIST_DISSECTOR)
 		return -EOPNOTSUPP;
 
 	/* If any tun dissector is used then the required set must be used. */
-	if (flow->dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR &&
-	    (flow->dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
+	if (dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR &&
+	    (dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
 	    != NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
 		return -EOPNOTSUPP;
 
@@ -155,76 +155,53 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 	key_size = sizeof(struct nfp_flower_meta_tci) +
 		   sizeof(struct nfp_flower_in_port);
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS) ||
-	    dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_MPLS)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS) ||
+	    flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS)) {
 		key_layer |= NFP_FLOWER_LAYER_MAC;
 		key_size += sizeof(struct nfp_flower_mac_mpls);
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
-		struct flow_dissector_key_vlan *flow_vlan;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_match_vlan vlan;
 
-		flow_vlan = skb_flow_dissector_target(flow->dissector,
-						      FLOW_DISSECTOR_KEY_VLAN,
-						      flow->mask);
+		flow_rule_match_vlan(rule, &vlan);
 		if (!(priv->flower_ext_feats & NFP_FL_FEATS_VLAN_PCP) &&
-		    flow_vlan->vlan_priority)
+		    vlan.key->vlan_priority)
 			return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(flow->dissector,
-			       FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
-		struct flow_dissector_key_ipv4_addrs *mask_ipv4 = NULL;
-		struct flow_dissector_key_ports *mask_enc_ports = NULL;
-		struct flow_dissector_key_enc_opts *enc_op = NULL;
-		struct flow_dissector_key_ports *enc_ports = NULL;
-		struct flow_dissector_key_control *mask_enc_ctl =
-			skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_CONTROL,
-						  flow->mask);
-		struct flow_dissector_key_control *enc_ctl =
-			skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_CONTROL,
-						  flow->key);
-
-		if (mask_enc_ctl->addr_type != 0xffff ||
-		    enc_ctl->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS)
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
+		struct flow_match_enc_opts enc_op = { NULL, NULL };
+		struct flow_match_ipv4_addrs ipv4_addrs;
+		struct flow_match_control enc_ctl;
+		struct flow_match_ports enc_ports;
+
+		flow_rule_match_enc_control(rule, &enc_ctl);
+
+		if (enc_ctl.mask->addr_type != 0xffff ||
+		    enc_ctl.key->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS)
 			return -EOPNOTSUPP;
 
 		/* These fields are already verified as used. */
-		mask_ipv4 =
-			skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
-						  flow->mask);
-		if (mask_ipv4->dst != cpu_to_be32(~0))
+		flow_rule_match_enc_ipv4_addrs(rule, &ipv4_addrs);
+		if (ipv4_addrs.mask->dst != cpu_to_be32(~0))
 			return -EOPNOTSUPP;
 
-		mask_enc_ports =
-			skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_PORTS,
-						  flow->mask);
-		enc_ports =
-			skb_flow_dissector_target(flow->dissector,
-						  FLOW_DISSECTOR_KEY_ENC_PORTS,
-						  flow->key);
 
-		if (mask_enc_ports->dst != cpu_to_be16(~0))
+		flow_rule_match_enc_ports(rule, &enc_ports);
+		if (enc_ports.mask->dst != cpu_to_be16(~0))
 			return -EOPNOTSUPP;
 
-		if (dissector_uses_key(flow->dissector,
-				       FLOW_DISSECTOR_KEY_ENC_OPTS)) {
-			enc_op = skb_flow_dissector_target(flow->dissector,
-							   FLOW_DISSECTOR_KEY_ENC_OPTS,
-							   flow->key);
-		}
+		if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS))
+			flow_rule_match_enc_opts(rule, &enc_op);
 
-		switch (enc_ports->dst) {
+		switch (enc_ports.key->dst) {
 		case htons(NFP_FL_VXLAN_PORT):
 			*tun_type = NFP_FL_TUNNEL_VXLAN;
 			key_layer |= NFP_FLOWER_LAYER_VXLAN;
 			key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 
-			if (enc_op)
+			if (enc_op.key)
 				return -EOPNOTSUPP;
 			break;
 		case htons(NFP_FL_GENEVE_PORT):
@@ -236,11 +213,11 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 			key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
 			key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 
-			if (!enc_op)
+			if (!enc_op.key)
 				break;
 			if (!(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE_OPT))
 				return -EOPNOTSUPP;
-			err = nfp_flower_calc_opt_layer(enc_op, &key_layer_two,
+			err = nfp_flower_calc_opt_layer(&enc_op, &key_layer_two,
 							&key_size);
 			if (err)
 				return err;
@@ -254,19 +231,12 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 			return -EOPNOTSUPP;
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		mask_basic = skb_flow_dissector_target(flow->dissector,
-						       FLOW_DISSECTOR_KEY_BASIC,
-						       flow->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC))
+		flow_rule_match_basic(rule, &basic);
 
-		key_basic = skb_flow_dissector_target(flow->dissector,
-						      FLOW_DISSECTOR_KEY_BASIC,
-						      flow->key);
-	}
-
-	if (mask_basic && mask_basic->n_proto) {
+	if (basic.mask && basic.mask->n_proto) {
 		/* Ethernet type is present in the key. */
-		switch (key_basic->n_proto) {
+		switch (basic.key->n_proto) {
 		case cpu_to_be16(ETH_P_IP):
 			key_layer |= NFP_FLOWER_LAYER_IPV4;
 			key_size += sizeof(struct nfp_flower_ipv4);
@@ -305,9 +275,9 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 		}
 	}
 
-	if (mask_basic && mask_basic->ip_proto) {
+	if (basic.mask && basic.mask->ip_proto) {
 		/* Ethernet type is present in the key. */
-		switch (key_basic->ip_proto) {
+		switch (basic.key->ip_proto) {
 		case IPPROTO_TCP:
 		case IPPROTO_UDP:
 		case IPPROTO_SCTP:
@@ -324,14 +294,12 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 		}
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_TCP)) {
-		struct flow_dissector_key_tcp *tcp;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_TCP)) {
+		struct flow_match_tcp tcp;
 		u32 tcp_flags;
 
-		tcp = skb_flow_dissector_target(flow->dissector,
-						FLOW_DISSECTOR_KEY_TCP,
-						flow->key);
-		tcp_flags = be16_to_cpu(tcp->flags);
+		flow_rule_match_tcp(rule, &tcp);
+		tcp_flags = be16_to_cpu(tcp.key->flags);
 
 		if (tcp_flags & ~NFP_FLOWER_SUPPORTED_TCPFLAGS)
 			return -EOPNOTSUPP;
@@ -353,14 +321,11 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 		}
 	}
 
-	if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
-		struct flow_dissector_key_control *key_ctl;
-
-		key_ctl = skb_flow_dissector_target(flow->dissector,
-						    FLOW_DISSECTOR_KEY_CONTROL,
-						    flow->key);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_match_control ctl;
 
-		if (key_ctl->flags & ~NFP_FLOWER_SUPPORTED_CTLFLAGS)
+		flow_rule_match_control(rule, &ctl);
+		if (ctl.key->flags & ~NFP_FLOWER_SUPPORTED_CTLFLAGS)
 			return -EOPNOTSUPP;
 	}
 
diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index b16ce7d93caf..81d5b9304229 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -2033,24 +2033,20 @@ qede_tc_parse_ports(struct qede_dev *edev,
 		    struct tc_cls_flower_offload *f,
 		    struct qede_arfs_tuple *t)
 {
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
-		struct flow_dissector_key_ports *key, *mask;
-
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_PORTS,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_PORTS,
-						 f->mask);
-
-		if ((key->src && mask->src != U16_MAX) ||
-		    (key->dst && mask->dst != U16_MAX)) {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_match_ports match;
+
+		flow_rule_match_ports(rule, &match);
+		if ((match.key->src && match.mask->src != U16_MAX) ||
+		    (match.key->dst && match.mask->dst != U16_MAX)) {
 			DP_NOTICE(edev, "Do not support ports masks\n");
 			return -EINVAL;
 		}
 
-		t->src_port = key->src;
-		t->dst_port = key->dst;
+		t->src_port = match.key->src;
+		t->dst_port = match.key->dst;
 	}
 
 	return 0;
@@ -2061,32 +2057,27 @@ qede_tc_parse_v6_common(struct qede_dev *edev,
 			struct tc_cls_flower_offload *f,
 			struct qede_arfs_tuple *t)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
 	struct in6_addr zero_addr, addr;
 
 	memset(&zero_addr, 0, sizeof(addr));
 	memset(&addr, 0xff, sizeof(addr));
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
-		struct flow_dissector_key_ipv6_addrs *key, *mask;
-
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_IPV6_ADDRS,
-						 f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS)) {
+		struct flow_match_ipv6_addrs match;
 
-		if ((memcmp(&key->src, &zero_addr, sizeof(addr)) &&
-		     memcmp(&mask->src, &addr, sizeof(addr))) ||
-		    (memcmp(&key->dst, &zero_addr, sizeof(addr)) &&
-		     memcmp(&mask->dst, &addr, sizeof(addr)))) {
+		flow_rule_match_ipv6_addrs(rule, &match);
+		if ((memcmp(&match.key->src, &zero_addr, sizeof(addr)) &&
+		     memcmp(&match.mask->src, &addr, sizeof(addr))) ||
+		    (memcmp(&match.key->dst, &zero_addr, sizeof(addr)) &&
+		     memcmp(&match.mask->dst, &addr, sizeof(addr)))) {
 			DP_NOTICE(edev,
 				  "Do not support IPv6 address prefix/mask\n");
 			return -EINVAL;
 		}
 
-		memcpy(&t->src_ipv6, &key->src, sizeof(addr));
-		memcpy(&t->dst_ipv6, &key->dst, sizeof(addr));
+		memcpy(&t->src_ipv6, &match.key->src, sizeof(addr));
+		memcpy(&t->dst_ipv6, &match.key->dst, sizeof(addr));
 	}
 
 	if (qede_tc_parse_ports(edev, f, t))
@@ -2100,24 +2091,20 @@ qede_tc_parse_v4_common(struct qede_dev *edev,
 			struct tc_cls_flower_offload *f,
 			struct qede_arfs_tuple *t)
 {
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
-		struct flow_dissector_key_ipv4_addrs *key, *mask;
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
 
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						f->key);
-		mask = skb_flow_dissector_target(f->dissector,
-						 FLOW_DISSECTOR_KEY_IPV4_ADDRS,
-						 f->mask);
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS)) {
+		struct flow_match_ipv4_addrs match;
 
-		if ((key->src && mask->src != U32_MAX) ||
-		    (key->dst && mask->dst != U32_MAX)) {
+		flow_rule_match_ipv4_addrs(rule, &match);
+		if ((match.key->src && match.mask->src != U32_MAX) ||
+		    (match.key->dst && match.mask->dst != U32_MAX)) {
 			DP_NOTICE(edev, "Do not support ipv4 prefix/masks\n");
 			return -EINVAL;
 		}
 
-		t->src_ipv4 = key->src;
-		t->dst_ipv4 = key->dst;
+		t->src_ipv4 = match.key->src;
+		t->dst_ipv4 = match.key->dst;
 	}
 
 	if (qede_tc_parse_ports(edev, f, t))
@@ -2175,19 +2162,21 @@ qede_parse_flower_attr(struct qede_dev *edev, __be16 proto,
 		       struct tc_cls_flower_offload *f,
 		       struct qede_arfs_tuple *tuple)
 {
+	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
 	int rc = -EINVAL;
 	u8 ip_proto = 0;
 
 	memset(tuple, 0, sizeof(*tuple));
 
-	if (f->dissector->used_keys &
+	if (dissector->used_keys &
 	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
 	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
 	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
 	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
 	      BIT(FLOW_DISSECTOR_KEY_PORTS))) {
 		DP_NOTICE(edev, "Unsupported key set:0x%x\n",
-			  f->dissector->used_keys);
+			  dissector->used_keys);
 		return -EOPNOTSUPP;
 	}
 
@@ -2197,13 +2186,11 @@ qede_parse_flower_attr(struct qede_dev *edev, __be16 proto,
 		return -EPROTONOSUPPORT;
 	}
 
-	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
-		struct flow_dissector_key_basic *key;
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_match_basic match;
 
-		key = skb_flow_dissector_target(f->dissector,
-						FLOW_DISSECTOR_KEY_BASIC,
-						f->key);
-		ip_proto = key->ip_proto;
+		flow_rule_match_basic(rule, &match);
+		ip_proto = match.key->ip_proto;
 	}
 
 	if (ip_proto == IPPROTO_TCP && proto == htons(ETH_P_IP))
diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
index 6a4586dcdede..965a82b8d881 100644
--- a/include/net/flow_dissector.h
+++ b/include/net/flow_dissector.h
@@ -305,4 +305,111 @@ static inline void *skb_flow_dissector_target(struct flow_dissector *flow_dissec
 	return ((char *)target_container) + flow_dissector->offset[key_id];
 }
 
+struct flow_match {
+	struct flow_dissector	*dissector;
+	void			*mask;
+	void			*key;
+};
+
+struct flow_match_basic {
+	struct flow_dissector_key_basic *key, *mask;
+};
+
+struct flow_match_control {
+	struct flow_dissector_key_control *key, *mask;
+};
+
+struct flow_match_eth_addrs {
+	struct flow_dissector_key_eth_addrs *key, *mask;
+};
+
+struct flow_match_vlan {
+	struct flow_dissector_key_vlan *key, *mask;
+};
+
+struct flow_match_ipv4_addrs {
+	struct flow_dissector_key_ipv4_addrs *key, *mask;
+};
+
+struct flow_match_ipv6_addrs {
+	struct flow_dissector_key_ipv6_addrs *key, *mask;
+};
+
+struct flow_match_ip {
+	struct flow_dissector_key_ip *key, *mask;
+};
+
+struct flow_match_ports {
+	struct flow_dissector_key_ports *key, *mask;
+};
+
+struct flow_match_icmp {
+	struct flow_dissector_key_icmp *key, *mask;
+};
+
+struct flow_match_tcp {
+	struct flow_dissector_key_tcp *key, *mask;
+};
+
+struct flow_match_mpls {
+	struct flow_dissector_key_mpls *key, *mask;
+};
+
+struct flow_match_enc_keyid {
+	struct flow_dissector_key_keyid *key, *mask;
+};
+
+struct flow_match_enc_opts {
+	struct flow_dissector_key_enc_opts *key, *mask;
+};
+
+struct flow_rule;
+
+void flow_rule_match_basic(const struct flow_rule *rule,
+			   struct flow_match_basic *out);
+void flow_rule_match_control(const struct flow_rule *rule,
+			     struct flow_match_control *out);
+void flow_rule_match_eth_addrs(const struct flow_rule *rule,
+			       struct flow_match_eth_addrs *out);
+void flow_rule_match_vlan(const struct flow_rule *rule,
+			  struct flow_match_vlan *out);
+void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
+				struct flow_match_ipv4_addrs *out);
+void flow_rule_match_ipv6_addrs(const struct flow_rule *rule,
+				struct flow_match_ipv6_addrs *out);
+void flow_rule_match_ip(const struct flow_rule *rule,
+			struct flow_match_ip *out);
+void flow_rule_match_ports(const struct flow_rule *rule,
+			   struct flow_match_ports *out);
+void flow_rule_match_tcp(const struct flow_rule *rule,
+			 struct flow_match_tcp *out);
+void flow_rule_match_icmp(const struct flow_rule *rule,
+			  struct flow_match_icmp *out);
+void flow_rule_match_mpls(const struct flow_rule *rule,
+			  struct flow_match_mpls *out);
+void flow_rule_match_enc_control(const struct flow_rule *rule,
+				 struct flow_match_control *out);
+void flow_rule_match_enc_ipv4_addrs(const struct flow_rule *rule,
+				    struct flow_match_ipv4_addrs *out);
+void flow_rule_match_enc_ipv6_addrs(const struct flow_rule *rule,
+				    struct flow_match_ipv6_addrs *out);
+void flow_rule_match_enc_ip(const struct flow_rule *rule,
+			    struct flow_match_ip *out);
+void flow_rule_match_enc_ports(const struct flow_rule *rule,
+			       struct flow_match_ports *out);
+void flow_rule_match_enc_keyid(const struct flow_rule *rule,
+			       struct flow_match_enc_keyid *out);
+void flow_rule_match_enc_opts(const struct flow_rule *rule,
+			      struct flow_match_enc_opts *out);
+
+struct flow_rule {
+	struct flow_match	match;
+};
+
+static inline bool flow_rule_match_key(const struct flow_rule *rule,
+				       enum flow_dissector_key_id key)
+{
+	return dissector_uses_key(rule->match.dissector, key);
+}
+
 #endif
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index c497ada7f591..8b79a1a3a5c7 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -759,13 +759,17 @@ struct tc_cls_flower_offload {
 	struct tc_cls_common_offload common;
 	enum tc_fl_command command;
 	unsigned long cookie;
-	struct flow_dissector *dissector;
-	struct fl_flow_key *mask;
-	struct fl_flow_key *key;
+	struct flow_rule rule;
 	struct tcf_exts *exts;
 	u32 classid;
 };
 
+static inline struct flow_rule *
+tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
+{
+	return &tc_flow_cmd->rule;
+}
+
 enum tc_matchall_command {
 	TC_CLSMATCHALL_REPLACE,
 	TC_CLSMATCHALL_DESTROY,
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 2e8d91e54179..186089b8d852 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -125,6 +125,139 @@ static __be16 skb_flow_get_be16(const struct sk_buff *skb, int poff,
 	return 0;
 }
 
+#define FLOW_DISSECTOR_MATCH(__rule, __type, __out)				\
+	const struct flow_match *__m = &(__rule)->match;			\
+	struct flow_dissector *__d = (__m)->dissector;				\
+										\
+	(__out)->key = skb_flow_dissector_target(__d, __type, (__m)->key);	\
+	(__out)->mask = skb_flow_dissector_target(__d, __type, (__m)->mask);	\
+
+void flow_rule_match_basic(const struct flow_rule *rule,
+			   struct flow_match_basic *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_BASIC, out);
+}
+EXPORT_SYMBOL(flow_rule_match_basic);
+
+void flow_rule_match_control(const struct flow_rule *rule,
+			     struct flow_match_control *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CONTROL, out);
+}
+EXPORT_SYMBOL(flow_rule_match_control);
+
+void flow_rule_match_eth_addrs(const struct flow_rule *rule,
+			       struct flow_match_eth_addrs *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_eth_addrs);
+
+void flow_rule_match_vlan(const struct flow_rule *rule,
+			  struct flow_match_vlan *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_VLAN, out);
+}
+EXPORT_SYMBOL(flow_rule_match_vlan);
+
+void flow_rule_match_ipv4_addrs(const struct flow_rule *rule,
+				struct flow_match_ipv4_addrs *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_ipv4_addrs);
+
+void flow_rule_match_ipv6_addrs(const struct flow_rule *rule,
+				struct flow_match_ipv6_addrs *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_ipv6_addrs);
+
+void flow_rule_match_ip(const struct flow_rule *rule,
+			struct flow_match_ip *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_ip);
+
+void flow_rule_match_ports(const struct flow_rule *rule,
+			   struct flow_match_ports *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_PORTS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_ports);
+
+void flow_rule_match_tcp(const struct flow_rule *rule,
+			 struct flow_match_tcp *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_TCP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_tcp);
+
+void flow_rule_match_icmp(const struct flow_rule *rule,
+			  struct flow_match_icmp *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ICMP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_icmp);
+
+void flow_rule_match_mpls(const struct flow_rule *rule,
+			  struct flow_match_mpls *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_MPLS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_mpls);
+
+void flow_rule_match_enc_control(const struct flow_rule *rule,
+				 struct flow_match_control *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_control);
+
+void flow_rule_match_enc_ipv4_addrs(const struct flow_rule *rule,
+				    struct flow_match_ipv4_addrs *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_ipv4_addrs);
+
+void flow_rule_match_enc_ipv6_addrs(const struct flow_rule *rule,
+				    struct flow_match_ipv6_addrs *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_ipv6_addrs);
+
+void flow_rule_match_enc_ip(const struct flow_rule *rule,
+			    struct flow_match_ip *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_ip);
+
+void flow_rule_match_enc_ports(const struct flow_rule *rule,
+			       struct flow_match_ports *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_PORTS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_ports);
+
+void flow_rule_match_enc_keyid(const struct flow_rule *rule,
+			       struct flow_match_enc_keyid *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_KEYID, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_keyid);
+
+void flow_rule_match_enc_opts(const struct flow_rule *rule,
+			      struct flow_match_enc_opts *out)
+{
+	FLOW_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_OPTS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_enc_opts);
+
 /**
  * __skb_flow_get_ports - extract the upper layer ports and return them
  * @skb: sk_buff to extract the ports from
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 85e9f8e1da10..26fc129ed504 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -385,9 +385,9 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, extack);
 	cls_flower.command = TC_CLSFLOWER_REPLACE;
 	cls_flower.cookie = (unsigned long) f;
-	cls_flower.dissector = &f->mask->dissector;
-	cls_flower.mask = &f->mask->key;
-	cls_flower.key = &f->mkey;
+	cls_flower.rule.match.dissector = &f->mask->dissector;
+	cls_flower.rule.match.mask = &f->mask->key;
+	cls_flower.rule.match.key = &f->mkey;
 	cls_flower.exts = &f->exts;
 	cls_flower.classid = f->res.classid;
 
@@ -1466,9 +1466,9 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 			cls_flower.command = add ?
 				TC_CLSFLOWER_REPLACE : TC_CLSFLOWER_DESTROY;
 			cls_flower.cookie = (unsigned long)f;
-			cls_flower.dissector = &mask->dissector;
-			cls_flower.mask = &mask->key;
-			cls_flower.key = &f->mkey;
+			cls_flower.rule.match.dissector = &mask->dissector;
+			cls_flower.rule.match.mask = &mask->key;
+			cls_flower.rule.match.key = &f->mkey;
 			cls_flower.exts = &f->exts;
 			cls_flower.classid = f->res.classid;
 
@@ -1497,9 +1497,9 @@ static void fl_hw_create_tmplt(struct tcf_chain *chain,
 	cls_flower.common.chain_index = chain->index;
 	cls_flower.command = TC_CLSFLOWER_TMPLT_CREATE;
 	cls_flower.cookie = (unsigned long) tmplt;
-	cls_flower.dissector = &tmplt->dissector;
-	cls_flower.mask = &tmplt->mask;
-	cls_flower.key = &tmplt->dummy_key;
+	cls_flower.rule.match.dissector = &tmplt->dissector;
+	cls_flower.rule.match.mask = &tmplt->mask;
+	cls_flower.rule.match.key = &tmplt->dummy_key;
 	cls_flower.exts = &dummy_exts;
 
 	/* We don't care if driver (any of them) fails to handle this
-- 
2.11.0


* [PATCH net-next,v2 02/12] net/mlx5e: support for two independent packet edit actions
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 01/12] flow_dissector: add flow_rule and flow_match structures and use them Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure Pablo Neira Ayuso
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch adds a pedit_headers_action structure to store the result of
parsing tc pedit actions. Then, it calls alloc_tc_pedit_action() to
populate the mlx5e hardware intermediate representation once all actions
have been parsed.

This patch comes in preparation for the new flow_action infrastructure,
where each packet mangling comes as a separate action, i.e. not packed
as in tc pedit.
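
As a sketch, the resulting call sequence in the mlx5e action parsers
looks as follows (condensed from the parse_tc_nic_actions() hunks
below; error unwinding elided):

	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX] = {};

	/* each pedit action only accumulates its keys into hdrs[cmd],
	 * bumping the per-command key counter
	 */
	if (is_tcf_pedit(a)) {
		err = parse_tc_pedit_action(priv, a, MLX5_FLOW_NAMESPACE_KERNEL,
					    parse_attr, hdrs, extack);
		if (err)
			return err;
	}

	/* once all actions have been walked, allocate and populate the
	 * mlx5e hardware representation in one go
	 */
	if (hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits ||
	    hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].pedits) {
		err = alloc_tc_pedit_action(priv, MLX5_FLOW_NAMESPACE_KERNEL,
					    parse_attr, hdrs, extack);
		if (err)
			return err;
	}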

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 81 ++++++++++++++++++-------
 1 file changed, 59 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 6a22f7f22890..2645e5d1e790 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1748,6 +1748,12 @@ struct pedit_headers {
 	struct udphdr  udp;
 };
 
+struct pedit_headers_action {
+	struct pedit_headers	vals;
+	struct pedit_headers	masks;
+	u32			pedits;
+};
+
 static int pedit_header_offsets[] = {
 	[TCA_PEDIT_KEY_EX_HDR_TYPE_ETH] = offsetof(struct pedit_headers, eth),
 	[TCA_PEDIT_KEY_EX_HDR_TYPE_IP4] = offsetof(struct pedit_headers, ip4),
@@ -1759,16 +1765,15 @@ static int pedit_header_offsets[] = {
 #define pedit_header(_ph, _htype) ((void *)(_ph) + pedit_header_offsets[_htype])
 
 static int set_pedit_val(u8 hdr_type, u32 mask, u32 val, u32 offset,
-			 struct pedit_headers *masks,
-			 struct pedit_headers *vals)
+			 struct pedit_headers_action *hdrs)
 {
 	u32 *curr_pmask, *curr_pval;
 
 	if (hdr_type >= __PEDIT_HDR_TYPE_MAX)
 		goto out_err;
 
-	curr_pmask = (u32 *)(pedit_header(masks, hdr_type) + offset);
-	curr_pval  = (u32 *)(pedit_header(vals, hdr_type) + offset);
+	curr_pmask = (u32 *)(pedit_header(&hdrs->masks, hdr_type) + offset);
+	curr_pval  = (u32 *)(pedit_header(&hdrs->vals, hdr_type) + offset);
 
 	if (*curr_pmask & mask)  /* disallow acting twice on the same location */
 		goto out_err;
@@ -1824,8 +1829,7 @@ static struct mlx5_fields fields[] = {
  * max from the SW pedit action. On success, it says how many HW actions were
  * actually parsed.
  */
-static int offload_pedit_fields(struct pedit_headers *masks,
-				struct pedit_headers *vals,
+static int offload_pedit_fields(struct pedit_headers_action *hdrs,
 				struct mlx5e_tc_flow_parse_attr *parse_attr,
 				struct netlink_ext_ack *extack)
 {
@@ -1840,10 +1844,10 @@ static int offload_pedit_fields(struct pedit_headers *masks,
 	__be16 mask_be16;
 	void *action;
 
-	set_masks = &masks[TCA_PEDIT_KEY_EX_CMD_SET];
-	add_masks = &masks[TCA_PEDIT_KEY_EX_CMD_ADD];
-	set_vals = &vals[TCA_PEDIT_KEY_EX_CMD_SET];
-	add_vals = &vals[TCA_PEDIT_KEY_EX_CMD_ADD];
+	set_masks = &hdrs[TCA_PEDIT_KEY_EX_CMD_SET].masks;
+	add_masks = &hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].masks;
+	set_vals = &hdrs[TCA_PEDIT_KEY_EX_CMD_SET].vals;
+	add_vals = &hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].vals;
 
 	action_size = MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto);
 	action = parse_attr->mod_hdr_actions;
@@ -1939,12 +1943,14 @@ static int offload_pedit_fields(struct pedit_headers *masks,
 }
 
 static int alloc_mod_hdr_actions(struct mlx5e_priv *priv,
-				 const struct tc_action *a, int namespace,
+				 struct pedit_headers_action *hdrs,
+				 int namespace,
 				 struct mlx5e_tc_flow_parse_attr *parse_attr)
 {
 	int nkeys, action_size, max_actions;
 
-	nkeys = tcf_pedit_nkeys(a);
+	nkeys = hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits +
+		hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].pedits;
 	action_size = MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto);
 
 	if (namespace == MLX5_FLOW_NAMESPACE_FDB) /* FDB offloading */
@@ -1968,18 +1974,15 @@ static const struct pedit_headers zero_masks = {};
 static int parse_tc_pedit_action(struct mlx5e_priv *priv,
 				 const struct tc_action *a, int namespace,
 				 struct mlx5e_tc_flow_parse_attr *parse_attr,
+				 struct pedit_headers_action *hdrs,
 				 struct netlink_ext_ack *extack)
 {
-	struct pedit_headers masks[__PEDIT_CMD_MAX], vals[__PEDIT_CMD_MAX], *cmd_masks;
 	int nkeys, i, err = -EOPNOTSUPP;
 	u32 mask, val, offset;
 	u8 cmd, htype;
 
 	nkeys = tcf_pedit_nkeys(a);
 
-	memset(masks, 0, sizeof(struct pedit_headers) * __PEDIT_CMD_MAX);
-	memset(vals,  0, sizeof(struct pedit_headers) * __PEDIT_CMD_MAX);
-
 	for (i = 0; i < nkeys; i++) {
 		htype = tcf_pedit_htype(a, i);
 		cmd = tcf_pedit_cmd(a, i);
@@ -2000,21 +2003,37 @@ static int parse_tc_pedit_action(struct mlx5e_priv *priv,
 		val = tcf_pedit_val(a, i);
 		offset = tcf_pedit_offset(a, i);
 
-		err = set_pedit_val(htype, ~mask, val, offset, &masks[cmd], &vals[cmd]);
+		err = set_pedit_val(htype, ~mask, val, offset, &hdrs[cmd]);
 		if (err)
 			goto out_err;
+
+		hdrs[cmd].pedits++;
 	}
 
-	err = alloc_mod_hdr_actions(priv, a, namespace, parse_attr);
+	return 0;
+out_err:
+	return err;
+}
+
+static int alloc_tc_pedit_action(struct mlx5e_priv *priv, int namespace,
+				 struct mlx5e_tc_flow_parse_attr *parse_attr,
+				 struct pedit_headers_action *hdrs,
+				 struct netlink_ext_ack *extack)
+{
+	struct pedit_headers *cmd_masks;
+	int err;
+	u8 cmd;
+
+	err = alloc_mod_hdr_actions(priv, hdrs, namespace, parse_attr);
 	if (err)
 		goto out_err;
 
-	err = offload_pedit_fields(masks, vals, parse_attr, extack);
+	err = offload_pedit_fields(hdrs, parse_attr, extack);
 	if (err < 0)
 		goto out_dealloc_parsed_actions;
 
 	for (cmd = 0; cmd < __PEDIT_CMD_MAX; cmd++) {
-		cmd_masks = &masks[cmd];
+		cmd_masks = &hdrs[cmd].masks;
 		if (memcmp(cmd_masks, &zero_masks, sizeof(zero_masks))) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "attempt to offload an unsupported field");
@@ -2156,6 +2175,7 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 				struct mlx5e_tc_flow *flow,
 				struct netlink_ext_ack *extack)
 {
+	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX] = {};
 	struct mlx5_nic_flow_attr *attr = flow->nic_attr;
 	const struct tc_action *a;
 	LIST_HEAD(actions);
@@ -2178,7 +2198,7 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 
 		if (is_tcf_pedit(a)) {
 			err = parse_tc_pedit_action(priv, a, MLX5_FLOW_NAMESPACE_KERNEL,
-						    parse_attr, extack);
+						    parse_attr, hdrs, extack);
 			if (err)
 				return err;
 
@@ -2232,6 +2252,14 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 		return -EINVAL;
 	}
 
+	if (hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits ||
+	    hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].pedits) {
+		err = alloc_tc_pedit_action(priv, MLX5_FLOW_NAMESPACE_KERNEL,
+					    parse_attr, hdrs, extack);
+		if (err)
+			return err;
+	}
+
 	attr->action = action;
 	if (!actions_match_supported(priv, exts, parse_attr, flow, extack))
 		return -EOPNOTSUPP;
@@ -2778,6 +2806,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 				struct mlx5e_tc_flow *flow,
 				struct netlink_ext_ack *extack)
 {
+	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX] = {};
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	struct mlx5_esw_flow_attr *attr = flow->esw_attr;
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
@@ -2803,7 +2832,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 
 		if (is_tcf_pedit(a)) {
 			err = parse_tc_pedit_action(priv, a, MLX5_FLOW_NAMESPACE_FDB,
-						    parse_attr, extack);
+						    parse_attr, hdrs, extack);
 			if (err)
 				return err;
 
@@ -2909,6 +2938,14 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 		return -EINVAL;
 	}
 
+	if (hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits ||
+	    hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].pedits) {
+		err = alloc_tc_pedit_action(priv, MLX5_FLOW_NAMESPACE_FDB,
+					    parse_attr, hdrs, extack);
+		if (err)
+			return err;
+	}
+
 	attr->action = action;
 	if (!actions_match_supported(priv, exts, parse_attr, flow, extack))
 		return -EOPNOTSUPP;
-- 
2.11.0


* [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 01/12] flow_dissector: add flow_rule and flow_match structures and use them Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 02/12] net/mlx5e: support for two independent packet edit actions Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 11:56   ` Jiri Pirko
  2018-11-19 14:03   ` Jiri Pirko
  2018-11-19  0:15 ` [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation Pablo Neira Ayuso
                   ` (10 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This new infrastructure defines the NIC actions that existing network
drivers can perform on packets. It allows us to avoid a direct
dependency on the native software TC action representation.
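
For illustration, a driver is expected to consume the action list
roughly as follows (a hypothetical example_parse_actions() consumer;
the actual driver conversions come later in this series):

	static int example_parse_actions(struct flow_action *flow_action)
	{
		struct flow_action_key *act;
		int i;

		if (!flow_action_has_keys(flow_action))
			return -EINVAL;

		flow_action_for_each(i, act, flow_action) {
			switch (act->id) {
			case FLOW_ACTION_KEY_DROP:
				/* program a drop entry in hardware */
				break;
			case FLOW_ACTION_KEY_REDIRECT:
				/* act->dev points at the target net_device */
				break;
			default:
				return -EOPNOTSUPP;
			}
		}
		return 0;
	}

Keying everything on the action id lets a driver reject anything it
does not support with -EOPNOTSUPP, instead of poking into the TC
action internals.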

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/flow_dissector.h | 70 ++++++++++++++++++++++++++++++++++++++++++++
 net/core/flow_dissector.c    | 18 ++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
index 965a82b8d881..925c208816f1 100644
--- a/include/net/flow_dissector.h
+++ b/include/net/flow_dissector.h
@@ -402,8 +402,78 @@ void flow_rule_match_enc_keyid(const struct flow_rule *rule,
 void flow_rule_match_enc_opts(const struct flow_rule *rule,
 			      struct flow_match_enc_opts *out);
 
+enum flow_action_key_id {
+	FLOW_ACTION_KEY_ACCEPT		= 0,
+	FLOW_ACTION_KEY_DROP,
+	FLOW_ACTION_KEY_TRAP,
+	FLOW_ACTION_KEY_GOTO,
+	FLOW_ACTION_KEY_REDIRECT,
+	FLOW_ACTION_KEY_MIRRED,
+	FLOW_ACTION_KEY_VLAN_PUSH,
+	FLOW_ACTION_KEY_VLAN_POP,
+	FLOW_ACTION_KEY_VLAN_MANGLE,
+	FLOW_ACTION_KEY_TUNNEL_ENCAP,
+	FLOW_ACTION_KEY_TUNNEL_DECAP,
+	FLOW_ACTION_KEY_MANGLE,
+	FLOW_ACTION_KEY_ADD,
+	FLOW_ACTION_KEY_CSUM,
+	FLOW_ACTION_KEY_MARK,
+};
+
+/* This mirrors the enum pedit_header_type definition for easy mapping to the
+ * tc pedit action. Legacy TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK is mapped to
+ * FLOW_ACT_MANGLE_UNSPEC, which is supported by no driver.
+ */
+enum flow_act_mangle_base {
+	FLOW_ACT_MANGLE_UNSPEC		= 0,
+	FLOW_ACT_MANGLE_HDR_TYPE_ETH,
+	FLOW_ACT_MANGLE_HDR_TYPE_IP4,
+	FLOW_ACT_MANGLE_HDR_TYPE_IP6,
+	FLOW_ACT_MANGLE_HDR_TYPE_TCP,
+	FLOW_ACT_MANGLE_HDR_TYPE_UDP,
+};
+
+struct flow_action_key {
+	enum flow_action_key_id		id;
+	union {
+		u32			chain_index;	/* FLOW_ACTION_KEY_GOTO */
+		struct net_device	*dev;		/* FLOW_ACTION_KEY_REDIRECT */
+		struct {				/* FLOW_ACTION_KEY_VLAN_PUSH/MANGLE */
+			u16		vid;
+			__be16		proto;
+			u8		prio;
+		} vlan;
+		struct {				/* FLOW_ACTION_KEY_MANGLE/ADD */
+			enum flow_act_mangle_base htype;
+			u32		offset;
+			u32		mask;
+			u32		val;
+		} mangle;
+		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
+		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
+		u32			mark;		/* FLOW_ACTION_KEY_MARK */
+	};
+};
+
+struct flow_action {
+	int			num_keys;
+	struct flow_action_key	*keys;
+};
+
+int flow_action_init(struct flow_action *flow_action, int num_acts);
+void flow_action_free(struct flow_action *flow_action);
+
+static inline bool flow_action_has_keys(const struct flow_action *action)
+{
+	return action->num_keys;
+}
+
+#define flow_action_for_each(__i, __act, __actions)			\
+        for (__i = 0, __act = &(__actions)->keys[0]; __i < (__actions)->num_keys; __act = &(__actions)->keys[++__i])
+
 struct flow_rule {
 	struct flow_match	match;
+	struct flow_action	action;
 };
 
 static inline bool flow_rule_match_key(const struct flow_rule *rule,
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 186089b8d852..b9368349f0f7 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -258,6 +258,24 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
 }
 EXPORT_SYMBOL(flow_rule_match_enc_opts);
 
+int flow_action_init(struct flow_action *flow_action, int num_acts)
+{
+	flow_action->keys = kmalloc(sizeof(struct flow_action_key) * num_acts,
+				    GFP_KERNEL);
+	if (!flow_action->keys)
+		return -ENOMEM;
+
+	flow_action->num_keys = num_acts;
+	return 0;
+}
+EXPORT_SYMBOL(flow_action_init);
+
+void flow_action_free(struct flow_action *flow_action)
+{
+	kfree(flow_action->keys);
+}
+EXPORT_SYMBOL(flow_action_free);
+
 /**
  * __skb_flow_get_ports - extract the upper layer ports and return them
  * @skb: sk_buff to extract the ports from
-- 
2.11.0


* [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (2 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 12:12   ` Jiri Pirko
  2018-11-19 12:16   ` Jiri Pirko
  2018-11-19  0:15 ` [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it Pablo Neira Ayuso
                   ` (9 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch implements a new function to translate from the native TC
action representation to the new flow_action representation. It also
updates cls_flower to use this new function.
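
From cls_flower, the usage boils down to the following sketch (error
unwinding trimmed; see the fl_hw_replace_filter() hunk below):

	if (tc_setup_flow_action(&f->action, &f->exts) < 0)
		return -ENOMEM;

	cls_flower.rule.action.keys = f->action.keys;
	cls_flower.rule.action.num_keys = f->action.num_keys;

	err = tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
			       &cls_flower, skip_sw);
	if (err < 0) {
		/* no driver accepted the rule, release the translation */
		flow_action_free(&f->action);
		return err;
	}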

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/pkt_cls.h  |   3 ++
 net/sched/cls_api.c    | 112 +++++++++++++++++++++++++++++++++++++++++++++++++
 net/sched/cls_flower.c |  15 ++++++-
 3 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 8b79a1a3a5c7..7d7aefa5fcd2 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -619,6 +619,9 @@ tcf_match_indev(struct sk_buff *skb, int ifindex)
 }
 #endif /* CONFIG_NET_CLS_IND */
 
+int tc_setup_flow_action(struct flow_action *flow_action,
+			 const struct tcf_exts *exts);
+
 int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
 		     enum tc_setup_type type, void *type_data, bool err_stop);
 
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index d92f44ac4c39..6ab44e650f43 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -31,6 +31,13 @@
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
 #include <net/pkt_cls.h>
+#include <net/tc_act/tc_mirred.h>
+#include <net/tc_act/tc_vlan.h>
+#include <net/tc_act/tc_tunnel_key.h>
+#include <net/tc_act/tc_pedit.h>
+#include <net/tc_act/tc_csum.h>
+#include <net/tc_act/tc_gact.h>
+#include <net/tc_act/tc_skbedit.h>
 
 extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
 
@@ -2567,6 +2574,111 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
 }
 EXPORT_SYMBOL(tc_setup_cb_call);
 
+int tc_setup_flow_action(struct flow_action *flow_action,
+			 const struct tcf_exts *exts)
+{
+	const struct tc_action *act;
+	int num_acts = 0, i, j, k;
+
+	if (!exts)
+		return 0;
+
+	tcf_exts_for_each_action(i, act, exts) {
+		if (is_tcf_pedit(act))
+			num_acts += tcf_pedit_nkeys(act);
+		else
+			num_acts++;
+	}
+	if (!num_acts)
+		return 0;
+
+	if (flow_action_init(flow_action, num_acts) < 0)
+		return -ENOMEM;
+
+	j = 0;
+	tcf_exts_for_each_action(i, act, exts) {
+		struct flow_action_key *key;
+
+		key = &flow_action->keys[j];
+		if (is_tcf_gact_ok(act)) {
+			key->id = FLOW_ACTION_KEY_ACCEPT;
+		} else if (is_tcf_gact_shot(act)) {
+			key->id = FLOW_ACTION_KEY_DROP;
+		} else if (is_tcf_gact_trap(act)) {
+			key->id = FLOW_ACTION_KEY_TRAP;
+		} else if (is_tcf_gact_goto_chain(act)) {
+			key->id = FLOW_ACTION_KEY_GOTO;
+			key->chain_index = tcf_gact_goto_chain_index(act);
+		} else if (is_tcf_mirred_egress_redirect(act)) {
+			key->id = FLOW_ACTION_KEY_REDIRECT;
+			key->dev = tcf_mirred_dev(act);
+		} else if (is_tcf_mirred_egress_mirror(act)) {
+			key->id = FLOW_ACTION_KEY_MIRRED;
+			key->dev = tcf_mirred_dev(act);
+		} else if (is_tcf_vlan(act)) {
+			switch (tcf_vlan_action(act)) {
+			case TCA_VLAN_ACT_PUSH:
+				key->id = FLOW_ACTION_KEY_VLAN_PUSH;
+				key->vlan.vid = tcf_vlan_push_vid(act);
+				key->vlan.proto = tcf_vlan_push_proto(act);
+				key->vlan.prio = tcf_vlan_push_prio(act);
+				break;
+			case TCA_VLAN_ACT_POP:
+				key->id = FLOW_ACTION_KEY_VLAN_POP;
+				break;
+			case TCA_VLAN_ACT_MODIFY:
+				key->id = FLOW_ACTION_KEY_VLAN_MANGLE;
+				key->vlan.vid = tcf_vlan_push_vid(act);
+				key->vlan.proto = tcf_vlan_push_proto(act);
+				key->vlan.prio = tcf_vlan_push_prio(act);
+				break;
+			default:
+				goto err_out;
+			}
+		} else if (is_tcf_tunnel_set(act)) {
+			key->id = FLOW_ACTION_KEY_TUNNEL_ENCAP;
+			key->tunnel = tcf_tunnel_info(act);
+		} else if (is_tcf_tunnel_release(act)) {
+			key->id = FLOW_ACTION_KEY_TUNNEL_DECAP;
+			key->tunnel = tcf_tunnel_info(act);
+		} else if (is_tcf_pedit(act)) {
+			for (k = 0; k < tcf_pedit_nkeys(act); k++) {
+				switch (tcf_pedit_cmd(act, k)) {
+				case TCA_PEDIT_KEY_EX_CMD_SET:
+					key->id = FLOW_ACTION_KEY_MANGLE;
+					break;
+				case TCA_PEDIT_KEY_EX_CMD_ADD:
+					key->id = FLOW_ACTION_KEY_ADD;
+					break;
+				default:
+					goto err_out;
+				}
+				key->mangle.htype = tcf_pedit_htype(act, k);
+				key->mangle.mask = tcf_pedit_mask(act, k);
+				key->mangle.val = tcf_pedit_val(act, k);
+				key->mangle.offset = tcf_pedit_offset(act, k);
+				key = &flow_action->keys[++j];
+			}
+		} else if (is_tcf_csum(act)) {
+			key->id = FLOW_ACTION_KEY_CSUM;
+			key->csum_flags = tcf_csum_update_flags(act);
+		} else if (is_tcf_skbedit_mark(act)) {
+			key->id = FLOW_ACTION_KEY_MARK;
+			key->mark = tcf_skbedit_mark(act);
+		} else {
+			goto err_out;
+		}
+
+		if (!is_tcf_pedit(act))
+			j++;
+	}
+	return 0;
+err_out:
+	flow_action_free(flow_action);
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL(tc_setup_flow_action);
+
 static __net_init int tcf_net_init(struct net *net)
 {
 	struct tcf_net *tn = net_generic(net, tcf_net_id);
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 26fc129ed504..a301fb8e68e7 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -104,6 +104,7 @@ struct cls_fl_filter {
 	u32 in_hw_count;
 	struct rcu_work rwork;
 	struct net_device *hw_dev;
+	struct flow_action action;
 };
 
 static const struct rhashtable_params mask_ht_params = {
@@ -391,18 +392,27 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 	cls_flower.exts = &f->exts;
 	cls_flower.classid = f->res.classid;
 
+	if (tc_setup_flow_action(&f->action, &f->exts) < 0)
+		return -ENOMEM;
+
+	cls_flower.rule.action.keys = f->action.keys;
+	cls_flower.rule.action.num_keys = f->action.num_keys;
+
 	err = tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			       &cls_flower, skip_sw);
 	if (err < 0) {
 		fl_hw_destroy_filter(tp, f, NULL);
+		flow_action_free(&f->action);
 		return err;
 	} else if (err > 0) {
 		f->in_hw_count = err;
 		tcf_block_offload_inc(block, &f->flags);
 	}
 
-	if (skip_sw && !(f->flags & TCA_CLS_FLAGS_IN_HW))
+	if (skip_sw && !(f->flags & TCA_CLS_FLAGS_IN_HW)) {
+		flow_action_free(&f->action);
 		return -EINVAL;
+	}
 
 	return 0;
 }
@@ -429,6 +439,7 @@ static bool __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
 	bool async = tcf_exts_get_net(&f->exts);
 	bool last;
 
+	flow_action_free(&f->action);
 	idr_remove(&head->handle_idr, f->handle);
 	list_del_rcu(&f->list);
 	last = fl_mask_put(head, f->mask, async);
@@ -1470,6 +1481,8 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 			cls_flower.rule.match.mask = &mask->key;
 			cls_flower.rule.match.key = &f->mkey;
 			cls_flower.exts = &f->exts;
+			cls_flower.rule.action.num_keys = f->action.num_keys;
+			cls_flower.rule.action.keys = f->action.keys;
 			cls_flower.classid = f->res.classid;
 
 			err = cb(TC_SETUP_CLSFLOWER, &cls_flower, cb_priv);
-- 
2.11.0


* [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (3 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 13:57   ` Jiri Pirko
  2018-11-19  0:15 ` [PATCH net-next,v2 06/12] drivers: net: use flow action infrastructure Pablo Neira Ayuso
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch adds a tc_cls_flower_stats structure, embedded in
tc_cls_flower_offload, that drivers fill in with the hardware counters.
The core then uses it to update the statistics of the existing TC
actions. Hence, tcf_exts_stats_update() is no longer called from
drivers.
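
Both halves of the resulting flow, taken from the hunks below (nfp and
cls_flower shown):

	/* driver side: report the HW counters via the offload object */
	tc_cls_flower_stats_update(flow, priv->stats[ctx_id].bytes,
				   priv->stats[ctx_id].pkts,
				   priv->stats[ctx_id].used);

	/* core side, in fl_hw_update_stats(): propagate them to the
	 * TC actions
	 */
	tcf_exts_stats_update(&f->exts, cls_flower.stats.bytes,
			      cls_flower.stats.pkts,
			      cls_flower.stats.lastused);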

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c          |  4 ++--
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c  |  6 +++---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c       |  2 +-
 drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c |  2 +-
 drivers/net/ethernet/netronome/nfp/flower/offload.c   |  6 +++---
 include/net/pkt_cls.h                                 | 15 +++++++++++++++
 net/sched/cls_flower.c                                |  4 ++++
 7 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index b82143d6cdde..3d71b2530d67 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -1366,8 +1366,8 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
 	lastused = flow->lastused;
 	spin_unlock(&flow->stats_lock);
 
-	tcf_exts_stats_update(tc_flow_cmd->exts, stats.bytes, stats.packets,
-			      lastused);
+	tc_cls_flower_stats_update(tc_flow_cmd, stats.bytes, stats.packets,
+				   lastused);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index 39c5af5dad3d..2c7d1aebe214 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -807,9 +807,9 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
 	if (ofld_stats->packet_count != packets) {
 		if (ofld_stats->prev_packet_count != packets)
 			ofld_stats->last_used = jiffies;
-		tcf_exts_stats_update(cls->exts, bytes - ofld_stats->byte_count,
-				      packets - ofld_stats->packet_count,
-				      ofld_stats->last_used);
+		tc_cls_flower_stats_update(cls, bytes - ofld_stats->byte_count,
+					   packets - ofld_stats->packet_count,
+					   ofld_stats->last_used);
 
 		ofld_stats->packet_count = packets;
 		ofld_stats->byte_count = bytes;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 2645e5d1e790..c5f0b826fa91 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3224,7 +3224,7 @@ int mlx5e_stats_flower(struct mlx5e_priv *priv,
 
 	mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
 
-	tcf_exts_stats_update(f->exts, bytes, packets, lastuse);
+	tc_cls_flower_stats_update(f, bytes, packets, lastuse);
 
 	return 0;
 }
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 193a6f9acf79..3398984ffb2a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -460,7 +460,7 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		goto err_rule_get_stats;
 
-	tcf_exts_stats_update(f->exts, bytes, packets, lastuse);
+	tc_cls_flower_stats_update(f, bytes, packets, lastuse);
 
 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
 	return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 708331234908..bec74d84756c 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -532,9 +532,9 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
 	ctx_id = be32_to_cpu(nfp_flow->meta.host_ctx_id);
 
 	spin_lock_bh(&priv->stats_lock);
-	tcf_exts_stats_update(flow->exts, priv->stats[ctx_id].bytes,
-			      priv->stats[ctx_id].pkts,
-			      priv->stats[ctx_id].used);
+	tc_cls_flower_stats_update(flow, priv->stats[ctx_id].bytes,
+				   priv->stats[ctx_id].pkts,
+				   priv->stats[ctx_id].used);
 
 	priv->stats[ctx_id].pkts = 0;
 	priv->stats[ctx_id].bytes = 0;
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 7d7aefa5fcd2..7f9a8d5ca945 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -758,6 +758,12 @@ enum tc_fl_command {
 	TC_CLSFLOWER_TMPLT_DESTROY,
 };
 
+struct tc_cls_flower_stats {
+	u64	pkts;
+	u64	bytes;
+	u64	lastused;
+};
+
 struct tc_cls_flower_offload {
 	struct tc_cls_common_offload common;
 	enum tc_fl_command command;
@@ -765,6 +771,7 @@ struct tc_cls_flower_offload {
 	struct flow_rule rule;
 	struct tcf_exts *exts;
 	u32 classid;
+	struct tc_cls_flower_stats stats;
 };
 
 static inline struct flow_rule *
@@ -773,6 +780,14 @@ tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
 	return &tc_flow_cmd->rule;
 }
 
+static inline void tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
+					      u64 bytes, u64 pkts, u64 lastused)
+{
+	cls_flower->stats.pkts		= pkts;
+	cls_flower->stats.bytes		= bytes;
+	cls_flower->stats.lastused	= lastused;
+}
+
 enum tc_matchall_command {
 	TC_CLSMATCHALL_REPLACE,
 	TC_CLSMATCHALL_DESTROY,
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index a301fb8e68e7..ee67f1ae8786 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -430,6 +430,10 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 
 	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
 			 &cls_flower, false);
+
+	tcf_exts_stats_update(&f->exts, cls_flower.stats.bytes,
+			      cls_flower.stats.pkts,
+			      cls_flower.stats.lastused);
 }
 
 static bool __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
-- 
2.11.0


* [PATCH net-next,v2 06/12] drivers: net: use flow action infrastructure
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (4 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 07/12] cls_flower: don't expose TC actions to drivers anymore Pablo Neira Ayuso
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch updates drivers to use the new flow action infrastructure.
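
The conversion is mechanical; as a before/after sketch (bnxt shown,
the other drivers follow the same pattern):

	/* before: walking the TC actions directly */
	tcf_exts_for_each_action(i, tc_act, tc_exts) {
		if (is_tcf_gact_shot(tc_act))
			actions->flags |= BNXT_TC_ACTION_FLAG_DROP;
	}

	/* after: walking the driver-independent action list */
	flow_action_for_each(i, act, flow_action) {
		switch (act->id) {
		case FLOW_ACTION_KEY_DROP:
			actions->flags |= BNXT_TC_ACTION_FLAG_DROP;
			break;
		default:
			break;
		}
	}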

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       |  74 +++---
 .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c   | 250 +++++++++----------
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 266 ++++++++++-----------
 drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c |   2 +-
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  |  55 +++--
 drivers/net/ethernet/netronome/nfp/flower/action.c | 185 +++++++-------
 drivers/net/ethernet/qlogic/qede/qede_filter.c     |  12 +-
 7 files changed, 417 insertions(+), 427 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 3d71b2530d67..11c5a0b495b6 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -61,9 +61,9 @@ static u16 bnxt_flow_get_dst_fid(struct bnxt *pf_bp, struct net_device *dev)
 
 static int bnxt_tc_parse_redir(struct bnxt *bp,
 			       struct bnxt_tc_actions *actions,
-			       const struct tc_action *tc_act)
+			       const struct flow_action_key *act)
 {
-	struct net_device *dev = tcf_mirred_dev(tc_act);
+	struct net_device *dev = act->dev;
 
 	if (!dev) {
 		netdev_info(bp->dev, "no dev in mirred action");
@@ -77,16 +77,16 @@ static int bnxt_tc_parse_redir(struct bnxt *bp,
 
 static int bnxt_tc_parse_vlan(struct bnxt *bp,
 			      struct bnxt_tc_actions *actions,
-			      const struct tc_action *tc_act)
+			      const struct flow_action_key *act)
 {
-	switch (tcf_vlan_action(tc_act)) {
-	case TCA_VLAN_ACT_POP:
+	switch (act->id) {
+	case FLOW_ACTION_KEY_VLAN_POP:
 		actions->flags |= BNXT_TC_ACTION_FLAG_POP_VLAN;
 		break;
-	case TCA_VLAN_ACT_PUSH:
+	case FLOW_ACTION_KEY_VLAN_PUSH:
 		actions->flags |= BNXT_TC_ACTION_FLAG_PUSH_VLAN;
-		actions->push_vlan_tci = htons(tcf_vlan_push_vid(tc_act));
-		actions->push_vlan_tpid = tcf_vlan_push_proto(tc_act);
+		actions->push_vlan_tci = htons(act->vlan.vid);
+		actions->push_vlan_tpid = act->vlan.proto;
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -96,10 +96,10 @@ static int bnxt_tc_parse_vlan(struct bnxt *bp,
 
 static int bnxt_tc_parse_tunnel_set(struct bnxt *bp,
 				    struct bnxt_tc_actions *actions,
-				    const struct tc_action *tc_act)
+				    const struct flow_action_key *act)
 {
-	struct ip_tunnel_info *tun_info = tcf_tunnel_info(tc_act);
-	struct ip_tunnel_key *tun_key = &tun_info->key;
+	const struct ip_tunnel_info *tun_info = act->tunnel;
+	const struct ip_tunnel_key *tun_key = &tun_info->key;
 
 	if (ip_tunnel_info_af(tun_info) != AF_INET) {
 		netdev_info(bp->dev, "only IPv4 tunnel-encap is supported");
@@ -113,51 +113,43 @@ static int bnxt_tc_parse_tunnel_set(struct bnxt *bp,
 
 static int bnxt_tc_parse_actions(struct bnxt *bp,
 				 struct bnxt_tc_actions *actions,
-				 struct tcf_exts *tc_exts)
+				 struct flow_action *flow_action)
 {
-	const struct tc_action *tc_act;
+	struct flow_action_key *act;
 	int i, rc;
 
-	if (!tcf_exts_has_actions(tc_exts)) {
+	if (!flow_action_has_keys(flow_action)) {
 		netdev_info(bp->dev, "no actions");
 		return -EINVAL;
 	}
 
-	tcf_exts_for_each_action(i, tc_act, tc_exts) {
-		/* Drop action */
-		if (is_tcf_gact_shot(tc_act)) {
+	flow_action_for_each(i, act, flow_action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_DROP:
 			actions->flags |= BNXT_TC_ACTION_FLAG_DROP;
 			return 0; /* don't bother with other actions */
-		}
-
-		/* Redirect action */
-		if (is_tcf_mirred_egress_redirect(tc_act)) {
-			rc = bnxt_tc_parse_redir(bp, actions, tc_act);
+		case FLOW_ACTION_KEY_REDIRECT:
+			rc = bnxt_tc_parse_redir(bp, actions, act);
 			if (rc)
 				return rc;
-			continue;
-		}
-
-		/* Push/pop VLAN */
-		if (is_tcf_vlan(tc_act)) {
-			rc = bnxt_tc_parse_vlan(bp, actions, tc_act);
+			break;
+		case FLOW_ACTION_KEY_VLAN_POP:
+		case FLOW_ACTION_KEY_VLAN_PUSH:
+		case FLOW_ACTION_KEY_VLAN_MANGLE:
+			rc = bnxt_tc_parse_vlan(bp, actions, act);
 			if (rc)
 				return rc;
-			continue;
-		}
-
-		/* Tunnel encap */
-		if (is_tcf_tunnel_set(tc_act)) {
-			rc = bnxt_tc_parse_tunnel_set(bp, actions, tc_act);
+			break;
+		case FLOW_ACTION_KEY_TUNNEL_ENCAP:
+			rc = bnxt_tc_parse_tunnel_set(bp, actions, act);
 			if (rc)
 				return rc;
-			continue;
-		}
-
-		/* Tunnel decap */
-		if (is_tcf_tunnel_release(tc_act)) {
+			break;
+		case FLOW_ACTION_KEY_TUNNEL_DECAP:
 			actions->flags |= BNXT_TC_ACTION_FLAG_TUNNEL_DECAP;
-			continue;
+			break;
+		default:
+			break;
 		}
 	}
 
@@ -308,7 +300,7 @@ static int bnxt_tc_parse_flow(struct bnxt *bp,
 		flow->tun_mask.tp_src = match.mask->src;
 	}
 
-	return bnxt_tc_parse_actions(bp, &flow->actions, tc_flow_cmd->exts);
+	return bnxt_tc_parse_actions(bp, &flow->actions, &rule->action);
 }
 
 static int bnxt_hwrm_cfa_flow_free(struct bnxt *bp, __le16 flow_handle)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index 2c7d1aebe214..27513f6fa093 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -292,7 +292,7 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
 				u32 mask, u32 offset, u8 htype)
 {
 	switch (htype) {
-	case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
+	case FLOW_ACT_MANGLE_HDR_TYPE_ETH:
 		switch (offset) {
 		case PEDIT_ETH_DMAC_31_0:
 			fs->newdmac = 1;
@@ -310,7 +310,7 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
 			offload_pedit(fs, val, mask, ETH_SMAC_47_16);
 		}
 		break;
-	case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
+	case FLOW_ACT_MANGLE_HDR_TYPE_IP4:
 		switch (offset) {
 		case PEDIT_IP4_SRC:
 			offload_pedit(fs, val, mask, IP4_SRC);
@@ -320,7 +320,7 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
 		}
 		fs->nat_mode = NAT_MODE_ALL;
 		break;
-	case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
+	case FLOW_ACT_MANGLE_HDR_TYPE_IP6:
 		switch (offset) {
 		case PEDIT_IP6_SRC_31_0:
 			offload_pedit(fs, val, mask, IP6_SRC_31_0);
@@ -348,7 +348,7 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
 		}
 		fs->nat_mode = NAT_MODE_ALL;
 		break;
-	case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
+	case FLOW_ACT_MANGLE_HDR_TYPE_TCP:
 		switch (offset) {
 		case PEDIT_TCP_SPORT_DPORT:
 			if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
@@ -361,7 +361,7 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
 		}
 		fs->nat_mode = NAT_MODE_ALL;
 		break;
-	case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
+	case FLOW_ACT_MANGLE_HDR_TYPE_UDP:
 		switch (offset) {
 		case PEDIT_UDP_SPORT_DPORT:
 			if (~mask & PEDIT_TCP_UDP_SPORT_MASK)
@@ -380,56 +380,63 @@ static void cxgb4_process_flow_actions(struct net_device *in,
 				       struct tc_cls_flower_offload *cls,
 				       struct ch_filter_specification *fs)
 {
-	const struct tc_action *a;
+	struct flow_rule *rule = &cls->rule;
+	struct flow_action_key *act;
 	int i;
 
-	tcf_exts_for_each_action(i, a, cls->exts) {
-		if (is_tcf_gact_ok(a)) {
+	flow_action_for_each(i, act, &rule->action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_ACCEPT:
 			fs->action = FILTER_PASS;
-		} else if (is_tcf_gact_shot(a)) {
+			break;
+		case FLOW_ACTION_KEY_DROP:
 			fs->action = FILTER_DROP;
-		} else if (is_tcf_mirred_egress_redirect(a)) {
-			struct net_device *out = tcf_mirred_dev(a);
+			break;
+		case FLOW_ACTION_KEY_REDIRECT: {
+			struct net_device *out = act->dev;
 			struct port_info *pi = netdev_priv(out);
 
 			fs->action = FILTER_SWITCH;
 			fs->eport = pi->port_id;
-		} else if (is_tcf_vlan(a)) {
-			u32 vlan_action = tcf_vlan_action(a);
-			u8 prio = tcf_vlan_push_prio(a);
-			u16 vid = tcf_vlan_push_vid(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_VLAN_POP:
+		case FLOW_ACTION_KEY_VLAN_PUSH:
+		case FLOW_ACTION_KEY_VLAN_MANGLE: {
+			u8 prio = act->vlan.prio;
+			u16 vid = act->vlan.vid;
 			u16 vlan_tci = (prio << VLAN_PRIO_SHIFT) | vid;
-
-			switch (vlan_action) {
-			case TCA_VLAN_ACT_POP:
+			switch (act->id) {
+			case FLOW_ACTION_KEY_VLAN_POP:
 				fs->newvlan |= VLAN_REMOVE;
 				break;
-			case TCA_VLAN_ACT_PUSH:
+			case FLOW_ACTION_KEY_VLAN_PUSH:
 				fs->newvlan |= VLAN_INSERT;
 				fs->vlan = vlan_tci;
 				break;
-			case TCA_VLAN_ACT_MODIFY:
+			case FLOW_ACTION_KEY_VLAN_MANGLE:
 				fs->newvlan |= VLAN_REWRITE;
 				fs->vlan = vlan_tci;
 				break;
 			default:
 				break;
 			}
-		} else if (is_tcf_pedit(a)) {
+			}
+			break;
+		case FLOW_ACTION_KEY_MANGLE: {
 			u32 mask, val, offset;
-			int nkeys, i;
 			u8 htype;
 
-			nkeys = tcf_pedit_nkeys(a);
-			for (i = 0; i < nkeys; i++) {
-				htype = tcf_pedit_htype(a, i);
-				mask = tcf_pedit_mask(a, i);
-				val = tcf_pedit_val(a, i);
-				offset = tcf_pedit_offset(a, i);
+			htype = act->mangle.htype;
+			mask = act->mangle.mask;
+			val = act->mangle.val;
+			offset = act->mangle.offset;
 
-				process_pedit_field(fs, val, mask, offset,
-						    htype);
+			process_pedit_field(fs, val, mask, offset, htype);
 			}
+			break;
+		default:
+			break;
 		}
 	}
 }
@@ -448,101 +455,89 @@ static bool valid_l4_mask(u32 mask)
 }
 
 static bool valid_pedit_action(struct net_device *dev,
-			       const struct tc_action *a)
+			       const struct flow_action_key *act)
 {
 	u32 mask, offset;
-	u8 cmd, htype;
-	int nkeys, i;
-
-	nkeys = tcf_pedit_nkeys(a);
-	for (i = 0; i < nkeys; i++) {
-		htype = tcf_pedit_htype(a, i);
-		cmd = tcf_pedit_cmd(a, i);
-		mask = tcf_pedit_mask(a, i);
-		offset = tcf_pedit_offset(a, i);
-
-		if (cmd != TCA_PEDIT_KEY_EX_CMD_SET) {
-			netdev_err(dev, "%s: Unsupported pedit cmd\n",
+	u8 htype;
+
+	htype = act->mangle.htype;
+	mask = act->mangle.mask;
+	offset = act->mangle.offset;
+
+	switch (htype) {
+	case FLOW_ACT_MANGLE_HDR_TYPE_ETH:
+		switch (offset) {
+		case PEDIT_ETH_DMAC_31_0:
+		case PEDIT_ETH_DMAC_47_32_SMAC_15_0:
+		case PEDIT_ETH_SMAC_47_16:
+			break;
+		default:
+			netdev_err(dev, "%s: Unsupported pedit field\n",
 				   __func__);
 			return false;
 		}
-
-		switch (htype) {
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
-			switch (offset) {
-			case PEDIT_ETH_DMAC_31_0:
-			case PEDIT_ETH_DMAC_47_32_SMAC_15_0:
-			case PEDIT_ETH_SMAC_47_16:
-				break;
-			default:
-				netdev_err(dev, "%s: Unsupported pedit field\n",
-					   __func__);
-				return false;
-			}
-			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
-			switch (offset) {
-			case PEDIT_IP4_SRC:
-			case PEDIT_IP4_DST:
-				break;
-			default:
-				netdev_err(dev, "%s: Unsupported pedit field\n",
-					   __func__);
-				return false;
-			}
+		break;
+	case FLOW_ACT_MANGLE_HDR_TYPE_IP4:
+		switch (offset) {
+		case PEDIT_IP4_SRC:
+		case PEDIT_IP4_DST:
 			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
-			switch (offset) {
-			case PEDIT_IP6_SRC_31_0:
-			case PEDIT_IP6_SRC_63_32:
-			case PEDIT_IP6_SRC_95_64:
-			case PEDIT_IP6_SRC_127_96:
-			case PEDIT_IP6_DST_31_0:
-			case PEDIT_IP6_DST_63_32:
-			case PEDIT_IP6_DST_95_64:
-			case PEDIT_IP6_DST_127_96:
-				break;
-			default:
-				netdev_err(dev, "%s: Unsupported pedit field\n",
-					   __func__);
-				return false;
-			}
+		default:
+			netdev_err(dev, "%s: Unsupported pedit field\n",
+				   __func__);
+			return false;
+		}
+		break;
+	case FLOW_ACT_MANGLE_HDR_TYPE_IP6:
+		switch (offset) {
+		case PEDIT_IP6_SRC_31_0:
+		case PEDIT_IP6_SRC_63_32:
+		case PEDIT_IP6_SRC_95_64:
+		case PEDIT_IP6_SRC_127_96:
+		case PEDIT_IP6_DST_31_0:
+		case PEDIT_IP6_DST_63_32:
+		case PEDIT_IP6_DST_95_64:
+		case PEDIT_IP6_DST_127_96:
 			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
-			switch (offset) {
-			case PEDIT_TCP_SPORT_DPORT:
-				if (!valid_l4_mask(~mask)) {
-					netdev_err(dev, "%s: Unsupported mask for TCP L4 ports\n",
-						   __func__);
-					return false;
-				}
-				break;
-			default:
-				netdev_err(dev, "%s: Unsupported pedit field\n",
+		default:
+			netdev_err(dev, "%s: Unsupported pedit field\n",
+				   __func__);
+			return false;
+		}
+		break;
+	case FLOW_ACT_MANGLE_HDR_TYPE_TCP:
+		switch (offset) {
+		case PEDIT_TCP_SPORT_DPORT:
+			if (!valid_l4_mask(~mask)) {
+				netdev_err(dev, "%s: Unsupported mask for TCP L4 ports\n",
 					   __func__);
 				return false;
 			}
 			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
-			switch (offset) {
-			case PEDIT_UDP_SPORT_DPORT:
-				if (!valid_l4_mask(~mask)) {
-					netdev_err(dev, "%s: Unsupported mask for UDP L4 ports\n",
-						   __func__);
-					return false;
-				}
-				break;
-			default:
-				netdev_err(dev, "%s: Unsupported pedit field\n",
+		default:
+			netdev_err(dev, "%s: Unsupported pedit field\n",
+				   __func__);
+			return false;
+		}
+		break;
+	case FLOW_ACT_MANGLE_HDR_TYPE_UDP:
+		switch (offset) {
+		case PEDIT_UDP_SPORT_DPORT:
+			if (!valid_l4_mask(~mask)) {
+				netdev_err(dev, "%s: Unsupported mask for UDP L4 ports\n",
 					   __func__);
 				return false;
 			}
 			break;
 		default:
-			netdev_err(dev, "%s: Unsupported pedit type\n",
+			netdev_err(dev, "%s: Unsupported pedit field\n",
 				   __func__);
 			return false;
 		}
+		break;
+	default:
+		netdev_err(dev, "%s: Unsupported pedit type\n", __func__);
+		return false;
 	}
 	return true;
 }
@@ -550,24 +545,26 @@ static bool valid_pedit_action(struct net_device *dev,
 static int cxgb4_validate_flow_actions(struct net_device *dev,
 				       struct tc_cls_flower_offload *cls)
 {
-	const struct tc_action *a;
+	struct flow_rule *rule = &cls->rule;
+	struct flow_action_key *act;
 	bool act_redir = false;
 	bool act_pedit = false;
 	bool act_vlan = false;
 	int i;
 
-	tcf_exts_for_each_action(i, a, cls->exts) {
-		if (is_tcf_gact_ok(a)) {
-			/* Do nothing */
-		} else if (is_tcf_gact_shot(a)) {
+	flow_action_for_each(i, act, &rule->action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_ACCEPT:
+		case FLOW_ACTION_KEY_DROP:
 			/* Do nothing */
-		} else if (is_tcf_mirred_egress_redirect(a)) {
+			break;
+		case FLOW_ACTION_KEY_REDIRECT: {
 			struct adapter *adap = netdev2adap(dev);
 			struct net_device *n_dev, *target_dev;
 			unsigned int i;
 			bool found = false;
 
-			target_dev = tcf_mirred_dev(a);
+			target_dev = act->dev;
 			for_each_port(adap, i) {
 				n_dev = adap->port[i];
 				if (target_dev == n_dev) {
@@ -585,15 +582,18 @@ static int cxgb4_validate_flow_actions(struct net_device *dev,
 				return -EINVAL;
 			}
 			act_redir = true;
-		} else if (is_tcf_vlan(a)) {
-			u16 proto = be16_to_cpu(tcf_vlan_push_proto(a));
-			u32 vlan_action = tcf_vlan_action(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_VLAN_POP:
+		case FLOW_ACTION_KEY_VLAN_PUSH:
+		case FLOW_ACTION_KEY_VLAN_MANGLE: {
+			u16 proto = be16_to_cpu(act->vlan.proto);
 
-			switch (vlan_action) {
-			case TCA_VLAN_ACT_POP:
+			switch (act->id) {
+			case FLOW_ACTION_KEY_VLAN_POP:
 				break;
-			case TCA_VLAN_ACT_PUSH:
-			case TCA_VLAN_ACT_MODIFY:
+			case FLOW_ACTION_KEY_VLAN_PUSH:
+			case FLOW_ACTION_KEY_VLAN_MANGLE:
 				if (proto != ETH_P_8021Q) {
 					netdev_err(dev, "%s: Unsupported vlan proto\n",
 						   __func__);
@@ -606,13 +606,17 @@ static int cxgb4_validate_flow_actions(struct net_device *dev,
 				return -EOPNOTSUPP;
 			}
 			act_vlan = true;
-		} else if (is_tcf_pedit(a)) {
-			bool pedit_valid = valid_pedit_action(dev, a);
+			}
+			break;
+		case FLOW_ACTION_KEY_MANGLE: {
+			bool pedit_valid = valid_pedit_action(dev, act);
 
 			if (!pedit_valid)
 				return -EOPNOTSUPP;
 			act_pedit = true;
-		} else {
+			}
+			break;
+		default:
 			netdev_err(dev, "%s: Unsupported action\n", __func__);
 			return -EOPNOTSUPP;
 		}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index c5f0b826fa91..d0f70f8af13b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1755,11 +1755,11 @@ struct pedit_headers_action {
 };
 
 static int pedit_header_offsets[] = {
-	[TCA_PEDIT_KEY_EX_HDR_TYPE_ETH] = offsetof(struct pedit_headers, eth),
-	[TCA_PEDIT_KEY_EX_HDR_TYPE_IP4] = offsetof(struct pedit_headers, ip4),
-	[TCA_PEDIT_KEY_EX_HDR_TYPE_IP6] = offsetof(struct pedit_headers, ip6),
-	[TCA_PEDIT_KEY_EX_HDR_TYPE_TCP] = offsetof(struct pedit_headers, tcp),
-	[TCA_PEDIT_KEY_EX_HDR_TYPE_UDP] = offsetof(struct pedit_headers, udp),
+	[FLOW_ACT_MANGLE_HDR_TYPE_ETH] = offsetof(struct pedit_headers, eth),
+	[FLOW_ACT_MANGLE_HDR_TYPE_IP4] = offsetof(struct pedit_headers, ip4),
+	[FLOW_ACT_MANGLE_HDR_TYPE_IP6] = offsetof(struct pedit_headers, ip6),
+	[FLOW_ACT_MANGLE_HDR_TYPE_TCP] = offsetof(struct pedit_headers, tcp),
+	[FLOW_ACT_MANGLE_HDR_TYPE_UDP] = offsetof(struct pedit_headers, udp),
 };
 
 #define pedit_header(_ph, _htype) ((void *)(_ph) + pedit_header_offsets[_htype])
@@ -1769,7 +1769,7 @@ static int set_pedit_val(u8 hdr_type, u32 mask, u32 val, u32 offset,
 {
 	u32 *curr_pmask, *curr_pval;
 
-	if (hdr_type >= __PEDIT_HDR_TYPE_MAX)
+	if (hdr_type >= 2)
 		goto out_err;
 
 	curr_pmask = (u32 *)(pedit_header(&hdrs->masks, hdr_type) + offset);
@@ -1844,10 +1844,10 @@ static int offload_pedit_fields(struct pedit_headers_action *hdrs,
 	__be16 mask_be16;
 	void *action;
 
-	set_masks = &hdrs[TCA_PEDIT_KEY_EX_CMD_SET].masks;
-	add_masks = &hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].masks;
-	set_vals = &hdrs[TCA_PEDIT_KEY_EX_CMD_SET].vals;
-	add_vals = &hdrs[TCA_PEDIT_KEY_EX_CMD_ADD].vals;
+	set_masks = &hdrs[0].masks;
+	add_masks = &hdrs[1].masks;
+	set_vals = &hdrs[0].vals;
+	add_vals = &hdrs[1].vals;
 
 	action_size = MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto);
 	action = parse_attr->mod_hdr_actions;
@@ -1972,43 +1972,33 @@ static int alloc_mod_hdr_actions(struct mlx5e_priv *priv,
 static const struct pedit_headers zero_masks = {};
 
 static int parse_tc_pedit_action(struct mlx5e_priv *priv,
-				 const struct tc_action *a, int namespace,
+				 const struct flow_action_key *act, int namespace,
 				 struct mlx5e_tc_flow_parse_attr *parse_attr,
 				 struct pedit_headers_action *hdrs,
 				 struct netlink_ext_ack *extack)
 {
-	int nkeys, i, err = -EOPNOTSUPP;
+	u8 cmd = (act->id == FLOW_ACTION_KEY_MANGLE) ? 0 : 1;
+	int err = -EOPNOTSUPP;
 	u32 mask, val, offset;
-	u8 cmd, htype;
+	u8 htype;
 
-	nkeys = tcf_pedit_nkeys(a);
+	htype = act->mangle.htype;
+	err = -EOPNOTSUPP; /* can't be all optimistic */
 
-	for (i = 0; i < nkeys; i++) {
-		htype = tcf_pedit_htype(a, i);
-		cmd = tcf_pedit_cmd(a, i);
-		err = -EOPNOTSUPP; /* can't be all optimistic */
-
-		if (htype == TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK) {
-			NL_SET_ERR_MSG_MOD(extack,
-					   "legacy pedit isn't offloaded");
-			goto out_err;
-		}
-
-		if (cmd != TCA_PEDIT_KEY_EX_CMD_SET && cmd != TCA_PEDIT_KEY_EX_CMD_ADD) {
-			NL_SET_ERR_MSG_MOD(extack, "pedit cmd isn't offloaded");
-			goto out_err;
-		}
+	if (htype == FLOW_ACT_MANGLE_UNSPEC) {
+		NL_SET_ERR_MSG_MOD(extack, "legacy pedit isn't offloaded");
+		goto out_err;
+	}
 
-		mask = tcf_pedit_mask(a, i);
-		val = tcf_pedit_val(a, i);
-		offset = tcf_pedit_offset(a, i);
+	mask = act->mangle.mask;
+	val = act->mangle.val;
+	offset = act->mangle.offset;
 
-		err = set_pedit_val(htype, ~mask, val, offset, &hdrs[cmd]);
-		if (err)
-			goto out_err;
+	err = set_pedit_val(htype, ~mask, val, offset, &hdrs[cmd]);
+	if (err)
+		goto out_err;
 
-		hdrs[cmd].pedits++;
-	}
+	hdrs[cmd].pedits++;
 
 	return 0;
 out_err:
@@ -2083,16 +2073,16 @@ static bool csum_offload_supported(struct mlx5e_priv *priv,
 }
 
 static bool modify_header_match_supported(struct mlx5_flow_spec *spec,
-					  struct tcf_exts *exts,
+					  struct flow_action *flow_action,
 					  struct netlink_ext_ack *extack)
 {
-	const struct tc_action *a;
+	const struct flow_action_key *act;
 	bool modify_ip_header;
 	LIST_HEAD(actions);
 	u8 htype, ip_proto;
 	void *headers_v;
 	u16 ethertype;
-	int nkeys, i;
+	int i;
 
 	headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, outer_headers);
 	ethertype = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ethertype);
@@ -2102,20 +2092,16 @@ static bool modify_header_match_supported(struct mlx5_flow_spec *spec,
 		goto out_ok;
 
 	modify_ip_header = false;
-	tcf_exts_for_each_action(i, a, exts) {
-		int k;
-
-		if (!is_tcf_pedit(a))
+	flow_action_for_each(i, act, flow_action) {
+		if (act->id != FLOW_ACTION_KEY_MANGLE &&
+		    act->id != FLOW_ACTION_KEY_ADD)
 			continue;
 
-		nkeys = tcf_pedit_nkeys(a);
-		for (k = 0; k < nkeys; k++) {
-			htype = tcf_pedit_htype(a, k);
-			if (htype == TCA_PEDIT_KEY_EX_HDR_TYPE_IP4 ||
-			    htype == TCA_PEDIT_KEY_EX_HDR_TYPE_IP6) {
-				modify_ip_header = true;
-				break;
-			}
+		htype = act->mangle.htype;
+		if (htype == FLOW_ACT_MANGLE_HDR_TYPE_IP4 ||
+		    htype == FLOW_ACT_MANGLE_HDR_TYPE_IP6) {
+			modify_ip_header = true;
+			break;
 		}
 	}
 
@@ -2133,7 +2119,7 @@ static bool modify_header_match_supported(struct mlx5_flow_spec *spec,
 }
 
 static bool actions_match_supported(struct mlx5e_priv *priv,
-				    struct tcf_exts *exts,
+				    struct flow_action *flow_action,
 				    struct mlx5e_tc_flow_parse_attr *parse_attr,
 				    struct mlx5e_tc_flow *flow,
 				    struct netlink_ext_ack *extack)
@@ -2150,7 +2136,8 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
 		return false;
 
 	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
-		return modify_header_match_supported(&parse_attr->spec, exts,
+		return modify_header_match_supported(&parse_attr->spec,
+						     flow_action,
 						     extack);
 
 	return true;
@@ -2170,54 +2157,51 @@ static bool same_hw_devs(struct mlx5e_priv *priv, struct mlx5e_priv *peer_priv)
 	return (fsystem_guid == psystem_guid);
 }
 
-static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+static int parse_tc_nic_actions(struct mlx5e_priv *priv,
+				struct flow_action *flow_action,
 				struct mlx5e_tc_flow_parse_attr *parse_attr,
 				struct mlx5e_tc_flow *flow,
 				struct netlink_ext_ack *extack)
 {
-	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX] = {};
 	struct mlx5_nic_flow_attr *attr = flow->nic_attr;
-	const struct tc_action *a;
+	struct pedit_headers_action hdrs[2] = {};
+	const struct flow_action_key *act;
 	LIST_HEAD(actions);
 	u32 action = 0;
 	int err, i;
 
-	if (!tcf_exts_has_actions(exts))
+	if (!flow_action_has_keys(flow_action))
 		return -EINVAL;
 
 	attr->flow_tag = MLX5_FS_DEFAULT_FLOW_TAG;
 
-	tcf_exts_for_each_action(i, a, exts) {
-		if (is_tcf_gact_shot(a)) {
+	flow_action_for_each(i, act, flow_action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_DROP:
 			action |= MLX5_FLOW_CONTEXT_ACTION_DROP;
 			if (MLX5_CAP_FLOWTABLE(priv->mdev,
 					       flow_table_properties_nic_receive.flow_counter))
 				action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
-			continue;
-		}
-
-		if (is_tcf_pedit(a)) {
-			err = parse_tc_pedit_action(priv, a, MLX5_FLOW_NAMESPACE_KERNEL,
+			break;
+		case FLOW_ACTION_KEY_MANGLE:
+		case FLOW_ACTION_KEY_ADD:
+			err = parse_tc_pedit_action(priv, act, MLX5_FLOW_NAMESPACE_KERNEL,
 						    parse_attr, hdrs, extack);
 			if (err)
 				return err;
 
 			action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR |
 				  MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
-			continue;
-		}
-
-		if (is_tcf_csum(a)) {
+			break;
+		case FLOW_ACTION_KEY_CSUM:
 			if (csum_offload_supported(priv, action,
-						   tcf_csum_update_flags(a),
+						   act->csum_flags,
 						   extack))
-				continue;
+				break;
 
 			return -EOPNOTSUPP;
-		}
-
-		if (is_tcf_mirred_egress_redirect(a)) {
-			struct net_device *peer_dev = tcf_mirred_dev(a);
+		case FLOW_ACTION_KEY_REDIRECT: {
+			struct net_device *peer_dev = act->dev;
 
 			if (priv->netdev->netdev_ops == peer_dev->netdev_ops &&
 			    same_hw_devs(priv, netdev_priv(peer_dev))) {
@@ -2232,11 +2216,10 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 					    peer_dev->name);
 				return -EINVAL;
 			}
-			continue;
-		}
-
-		if (is_tcf_skbedit_mark(a)) {
-			u32 mark = tcf_skbedit_mark(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_MARK: {
+			u32 mark = act->mark;
 
 			if (mark & ~MLX5E_TC_FLOW_ID_MASK) {
 				NL_SET_ERR_MSG_MOD(extack,
@@ -2246,10 +2229,11 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 
 			attr->flow_tag = mark;
 			action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
-			continue;
+			}
+			break;
+		default:
+			return -EINVAL;
 		}
-
-		return -EINVAL;
 	}
 
 	if (hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits ||
@@ -2261,7 +2245,7 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 	}
 
 	attr->action = action;
-	if (!actions_match_supported(priv, exts, parse_attr, flow, extack))
+	if (!actions_match_supported(priv, flow_action, parse_attr, flow, extack))
 		return -EOPNOTSUPP;
 
 	return 0;
@@ -2752,7 +2736,7 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
 }
 
 static int parse_tc_vlan_action(struct mlx5e_priv *priv,
-				const struct tc_action *a,
+				const struct flow_action_key *act,
 				struct mlx5_esw_flow_attr *attr,
 				u32 *action)
 {
@@ -2761,7 +2745,8 @@ static int parse_tc_vlan_action(struct mlx5e_priv *priv,
 	if (vlan_idx >= MLX5_FS_VLAN_DEPTH)
 		return -EOPNOTSUPP;
 
-	if (tcf_vlan_action(a) == TCA_VLAN_ACT_POP) {
+	switch (act->id) {
+	case FLOW_ACTION_KEY_VLAN_POP:
 		if (vlan_idx) {
 			if (!mlx5_eswitch_vlan_actions_supported(priv->mdev,
 								 MLX5_FS_VLAN_DEPTH))
@@ -2771,10 +2756,11 @@ static int parse_tc_vlan_action(struct mlx5e_priv *priv,
 		} else {
 			*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP;
 		}
-	} else if (tcf_vlan_action(a) == TCA_VLAN_ACT_PUSH) {
-		attr->vlan_vid[vlan_idx] = tcf_vlan_push_vid(a);
-		attr->vlan_prio[vlan_idx] = tcf_vlan_push_prio(a);
-		attr->vlan_proto[vlan_idx] = tcf_vlan_push_proto(a);
+		break;
+	case FLOW_ACTION_KEY_VLAN_PUSH:
+		attr->vlan_vid[vlan_idx] = act->vlan.vid;
+		attr->vlan_prio[vlan_idx] = act->vlan.prio;
+		attr->vlan_proto[vlan_idx] = act->vlan.proto;
 		if (!attr->vlan_proto[vlan_idx])
 			attr->vlan_proto[vlan_idx] = htons(ETH_P_8021Q);
 
@@ -2786,13 +2772,15 @@ static int parse_tc_vlan_action(struct mlx5e_priv *priv,
 			*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2;
 		} else {
 			if (!mlx5_eswitch_vlan_actions_supported(priv->mdev, 1) &&
-			    (tcf_vlan_push_proto(a) != htons(ETH_P_8021Q) ||
-			     tcf_vlan_push_prio(a)))
+			    (act->vlan.proto != htons(ETH_P_8021Q) ||
+			     act->vlan.prio))
 				return -EOPNOTSUPP;
 
 			*action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
 		}
-	} else { /* action is TCA_VLAN_ACT_MODIFY */
+		break;
+	default:
+		/* action is FLOW_ACTION_KEY_VLAN_MANGLE */
 		return -EOPNOTSUPP;
 	}
 
@@ -2801,60 +2789,57 @@ static int parse_tc_vlan_action(struct mlx5e_priv *priv,
 	return 0;
 }
 
-static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
+				struct flow_action *flow_action,
 				struct mlx5e_tc_flow_parse_attr *parse_attr,
 				struct mlx5e_tc_flow *flow,
 				struct netlink_ext_ack *extack)
 {
-	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX] = {};
+	struct pedit_headers_action hdrs[2] = {};
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	struct mlx5_esw_flow_attr *attr = flow->esw_attr;
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
-	struct ip_tunnel_info *info = NULL;
-	const struct tc_action *a;
+	const struct ip_tunnel_info *info = NULL;
+	const struct flow_action_key *act;
 	LIST_HEAD(actions);
 	bool encap = false;
 	u32 action = 0;
 	int err, i;
 
-	if (!tcf_exts_has_actions(exts))
+	if (!flow_action_has_keys(flow_action))
 		return -EINVAL;
 
 	attr->in_rep = rpriv->rep;
 	attr->in_mdev = priv->mdev;
 
-	tcf_exts_for_each_action(i, a, exts) {
-		if (is_tcf_gact_shot(a)) {
+	flow_action_for_each(i, act, flow_action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_DROP:
 			action |= MLX5_FLOW_CONTEXT_ACTION_DROP |
 				  MLX5_FLOW_CONTEXT_ACTION_COUNT;
-			continue;
-		}
-
-		if (is_tcf_pedit(a)) {
-			err = parse_tc_pedit_action(priv, a, MLX5_FLOW_NAMESPACE_FDB,
+			break;
+		case FLOW_ACTION_KEY_MANGLE:
+		case FLOW_ACTION_KEY_ADD:
+			err = parse_tc_pedit_action(priv, act, MLX5_FLOW_NAMESPACE_FDB,
 						    parse_attr, hdrs, extack);
 			if (err)
 				return err;
 
 			action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 			attr->mirror_count = attr->out_count;
-			continue;
-		}
-
-		if (is_tcf_csum(a)) {
+			break;
+		case FLOW_ACTION_KEY_CSUM:
 			if (csum_offload_supported(priv, action,
-						   tcf_csum_update_flags(a),
-						   extack))
-				continue;
+						   act->csum_flags, extack))
+				break;
 
 			return -EOPNOTSUPP;
-		}
-
-		if (is_tcf_mirred_egress_redirect(a) || is_tcf_mirred_egress_mirror(a)) {
+		case FLOW_ACTION_KEY_REDIRECT:
+		case FLOW_ACTION_KEY_MIRRED: {
 			struct mlx5e_priv *out_priv;
 			struct net_device *out_dev;
 
-			out_dev = tcf_mirred_dev(a);
+			out_dev = act->dev;
 
 			if (attr->out_count >= MLX5_MAX_FLOW_FWD_VPORTS) {
 				NL_SET_ERR_MSG_MOD(extack,
@@ -2888,36 +2873,29 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 				       priv->netdev->name, out_dev->name);
 				return -EINVAL;
 			}
-			continue;
-		}
-
-		if (is_tcf_tunnel_set(a)) {
-			info = tcf_tunnel_info(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_TUNNEL_ENCAP:
+			info = act->tunnel;
 			if (info)
 				encap = true;
 			else
 				return -EOPNOTSUPP;
 			attr->mirror_count = attr->out_count;
-			continue;
-		}
-
-		if (is_tcf_vlan(a)) {
-			err = parse_tc_vlan_action(priv, a, attr, &action);
-
+			break;
+		case FLOW_ACTION_KEY_VLAN_PUSH:
+		case FLOW_ACTION_KEY_VLAN_POP:
+			err = parse_tc_vlan_action(priv, act, attr, &action);
 			if (err)
 				return err;
 
 			attr->mirror_count = attr->out_count;
-			continue;
-		}
-
-		if (is_tcf_tunnel_release(a)) {
+			break;
+		case FLOW_ACTION_KEY_TUNNEL_DECAP:
 			action |= MLX5_FLOW_CONTEXT_ACTION_DECAP;
-			continue;
-		}
-
-		if (is_tcf_gact_goto_chain(a)) {
-			u32 dest_chain = tcf_gact_goto_chain_index(a);
+			break;
+		case FLOW_ACTION_KEY_GOTO: {
+			u32 dest_chain = act->chain_index;
 			u32 max_chain = mlx5_eswitch_get_chain_range(esw);
 
 			if (dest_chain <= attr->chain) {
@@ -2931,11 +2909,11 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 			action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
 				  MLX5_FLOW_CONTEXT_ACTION_COUNT;
 			attr->dest_chain = dest_chain;
-
-			continue;
+			break;
+			}
+		default:
+			return -EINVAL;
 		}
-
-		return -EINVAL;
 	}
 
 	if (hdrs[TCA_PEDIT_KEY_EX_CMD_SET].pedits ||
@@ -2947,7 +2925,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 	}
 
 	attr->action = action;
-	if (!actions_match_supported(priv, exts, parse_attr, flow, extack))
+	if (!actions_match_supported(priv, flow_action, parse_attr, flow, extack))
 		return -EOPNOTSUPP;
 
 	if (attr->out_count > 1 && !mlx5_esw_has_fwd_fdb(priv->mdev)) {
@@ -3035,6 +3013,7 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
 {
 	struct netlink_ext_ack *extack = f->common.extack;
 	struct mlx5e_tc_flow_parse_attr *parse_attr;
+	struct flow_rule *rule = &f->rule;
 	struct mlx5e_tc_flow *flow;
 	int attr_size, err;
 
@@ -3047,7 +3026,7 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
 
 	flow->esw_attr->chain = f->common.chain_index;
 	flow->esw_attr->prio = TC_H_MAJ(f->common.prio) >> 16;
-	err = parse_tc_fdb_actions(priv, f->exts, parse_attr, flow, extack);
+	err = parse_tc_fdb_actions(priv, &rule->action, parse_attr, flow, extack);
 	if (err)
 		goto err_free;
 
@@ -3078,6 +3057,7 @@ mlx5e_add_nic_flow(struct mlx5e_priv *priv,
 {
 	struct netlink_ext_ack *extack = f->common.extack;
 	struct mlx5e_tc_flow_parse_attr *parse_attr;
+	struct flow_rule *rule = &f->rule;
 	struct mlx5e_tc_flow *flow;
 	int attr_size, err;
 
@@ -3092,7 +3072,7 @@ mlx5e_add_nic_flow(struct mlx5e_priv *priv,
 	if (err)
 		goto out;
 
-	err = parse_tc_nic_actions(priv, f->exts, parse_attr, flow, extack);
+	err = parse_tc_nic_actions(priv, &rule->action, parse_attr, flow, extack);
 	if (err)
 		goto err_free;
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
index c4f9238591e6..d214d1a58e07 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
@@ -579,7 +579,7 @@ int mlxsw_sp_acl_rulei_act_vlan(struct mlxsw_sp *mlxsw_sp,
 {
 	u8 ethertype;
 
-	if (action == TCA_VLAN_ACT_MODIFY) {
+	if (action == FLOW_ACTION_KEY_VLAN_MANGLE) {
 		switch (proto) {
 		case ETH_P_8021Q:
 			ethertype = 0;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 3398984ffb2a..821459594a43 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -17,13 +17,13 @@
 static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 					 struct mlxsw_sp_acl_block *block,
 					 struct mlxsw_sp_acl_rule_info *rulei,
-					 struct tcf_exts *exts,
+					 struct flow_action *flow_action,
 					 struct netlink_ext_ack *extack)
 {
-	const struct tc_action *a;
+	const struct flow_action_key *act;
 	int err, i;
 
-	if (!tcf_exts_has_actions(exts))
+	if (!flow_action_has_keys(flow_action))
 		return 0;
 
 	/* Count action is inserted first */
@@ -31,27 +31,31 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		return err;
 
-	tcf_exts_for_each_action(i, a, exts) {
-		if (is_tcf_gact_ok(a)) {
+	flow_action_for_each(i, act, flow_action) {
+		switch (act->id) {
+		case FLOW_ACTION_KEY_ACCEPT:
 			err = mlxsw_sp_acl_rulei_act_terminate(rulei);
 			if (err) {
 				NL_SET_ERR_MSG_MOD(extack, "Cannot append terminate action");
 				return err;
 			}
-		} else if (is_tcf_gact_shot(a)) {
+			break;
+		case FLOW_ACTION_KEY_DROP:
 			err = mlxsw_sp_acl_rulei_act_drop(rulei);
 			if (err) {
 				NL_SET_ERR_MSG_MOD(extack, "Cannot append drop action");
 				return err;
 			}
-		} else if (is_tcf_gact_trap(a)) {
+			break;
+		case FLOW_ACTION_KEY_TRAP:
 			err = mlxsw_sp_acl_rulei_act_trap(rulei);
 			if (err) {
 				NL_SET_ERR_MSG_MOD(extack, "Cannot append trap action");
 				return err;
 			}
-		} else if (is_tcf_gact_goto_chain(a)) {
-			u32 chain_index = tcf_gact_goto_chain_index(a);
+			break;
+		case FLOW_ACTION_KEY_GOTO: {
+			u32 chain_index = act->chain_index;
 			struct mlxsw_sp_acl_ruleset *ruleset;
 			u16 group_id;
 
@@ -67,7 +71,9 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 				NL_SET_ERR_MSG_MOD(extack, "Cannot append jump action");
 				return err;
 			}
-		} else if (is_tcf_mirred_egress_redirect(a)) {
+			}
+			break;
+		case FLOW_ACTION_KEY_REDIRECT: {
 			struct net_device *out_dev;
 			struct mlxsw_sp_fid *fid;
 			u16 fid_index;
@@ -79,29 +85,34 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 			if (err)
 				return err;
 
-			out_dev = tcf_mirred_dev(a);
+			out_dev = act->dev;
 			err = mlxsw_sp_acl_rulei_act_fwd(mlxsw_sp, rulei,
 							 out_dev, extack);
 			if (err)
 				return err;
-		} else if (is_tcf_mirred_egress_mirror(a)) {
-			struct net_device *out_dev = tcf_mirred_dev(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_MIRRED: {
+			struct net_device *out_dev = act->dev;
 
 			err = mlxsw_sp_acl_rulei_act_mirror(mlxsw_sp, rulei,
 							    block, out_dev,
 							    extack);
 			if (err)
 				return err;
-		} else if (is_tcf_vlan(a)) {
-			u16 proto = be16_to_cpu(tcf_vlan_push_proto(a));
-			u32 action = tcf_vlan_action(a);
-			u8 prio = tcf_vlan_push_prio(a);
-			u16 vid = tcf_vlan_push_vid(a);
+			}
+			break;
+		case FLOW_ACTION_KEY_VLAN_PUSH:
+		case FLOW_ACTION_KEY_VLAN_POP: {
+			u16 proto = be16_to_cpu(act->vlan.proto);
+			u8 prio = act->vlan.prio;
+			u16 vid = act->vlan.vid;
 
 			return mlxsw_sp_acl_rulei_act_vlan(mlxsw_sp, rulei,
-							   action, vid,
+							   act->id, vid,
 							   proto, prio, extack);
-		} else {
+			}
+		default:
 			NL_SET_ERR_MSG_MOD(extack, "Unsupported action");
 			dev_err(mlxsw_sp->bus_info->dev, "Unsupported action\n");
 			return -EOPNOTSUPP;
@@ -361,8 +372,8 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
 	if (err)
 		return err;
 
-	return mlxsw_sp_flower_parse_actions(mlxsw_sp, block, rulei, f->exts,
-					     f->common.extack);
+	return mlxsw_sp_flower_parse_actions(mlxsw_sp, block, rulei,
+					     &f->rule.action, f->common.extack);
 }
 
 int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
index 43192640bdd1..2c754b372cc7 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
@@ -37,7 +37,7 @@ static void nfp_fl_pop_vlan(struct nfp_fl_pop_vlan *pop_vlan)
 
 static void
 nfp_fl_push_vlan(struct nfp_fl_push_vlan *push_vlan,
-		 const struct tc_action *action)
+		 const struct flow_action_key *act)
 {
 	size_t act_size = sizeof(struct nfp_fl_push_vlan);
 	u16 tmp_push_vlan_tci;
@@ -45,17 +45,17 @@ nfp_fl_push_vlan(struct nfp_fl_push_vlan *push_vlan,
 	push_vlan->head.jump_id = NFP_FL_ACTION_OPCODE_PUSH_VLAN;
 	push_vlan->head.len_lw = act_size >> NFP_FL_LW_SIZ;
 	push_vlan->reserved = 0;
-	push_vlan->vlan_tpid = tcf_vlan_push_proto(action);
+	push_vlan->vlan_tpid = act->vlan.proto;
 
 	tmp_push_vlan_tci =
-		FIELD_PREP(NFP_FL_PUSH_VLAN_PRIO, tcf_vlan_push_prio(action)) |
-		FIELD_PREP(NFP_FL_PUSH_VLAN_VID, tcf_vlan_push_vid(action)) |
+		FIELD_PREP(NFP_FL_PUSH_VLAN_PRIO, act->vlan.prio) |
+		FIELD_PREP(NFP_FL_PUSH_VLAN_VID, act->vlan.vid) |
 		NFP_FL_PUSH_VLAN_CFI;
 	push_vlan->vlan_tci = cpu_to_be16(tmp_push_vlan_tci);
 }
 
 static int
-nfp_fl_pre_lag(struct nfp_app *app, const struct tc_action *action,
+nfp_fl_pre_lag(struct nfp_app *app, const struct flow_action_key *act,
 	       struct nfp_fl_payload *nfp_flow, int act_len)
 {
 	size_t act_size = sizeof(struct nfp_fl_pre_lag);
@@ -63,7 +63,7 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct tc_action *action,
 	struct net_device *out_dev;
 	int err;
 
-	out_dev = tcf_mirred_dev(action);
+	out_dev = act->dev;
 	if (!out_dev || !netif_is_lag_master(out_dev))
 		return 0;
 
@@ -92,7 +92,8 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct tc_action *action,
 
 static int
 nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
-	      const struct tc_action *action, struct nfp_fl_payload *nfp_flow,
+	      const struct flow_action_key *act,
+	      struct nfp_fl_payload *nfp_flow,
 	      bool last, struct net_device *in_dev,
 	      enum nfp_flower_tun_type tun_type, int *tun_out_cnt)
 {
@@ -104,7 +105,7 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
 	output->head.jump_id = NFP_FL_ACTION_OPCODE_OUTPUT;
 	output->head.len_lw = act_size >> NFP_FL_LW_SIZ;
 
-	out_dev = tcf_mirred_dev(action);
+	out_dev = act->dev;
 	if (!out_dev)
 		return -EOPNOTSUPP;
 
@@ -155,9 +156,9 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
 
 static enum nfp_flower_tun_type
 nfp_fl_get_tun_from_act_l4_port(struct nfp_app *app,
-				const struct tc_action *action)
+				const struct flow_action_key *act)
 {
-	struct ip_tunnel_info *tun = tcf_tunnel_info(action);
+	const struct ip_tunnel_info *tun = act->tunnel;
 	struct nfp_flower_priv *priv = app->priv;
 
 	switch (tun->key.tp_dst) {
@@ -195,9 +196,9 @@ static struct nfp_fl_pre_tunnel *nfp_fl_pre_tunnel(char *act_data, int act_len)
 
 static int
 nfp_fl_push_geneve_options(struct nfp_fl_payload *nfp_fl, int *list_len,
-			   const struct tc_action *action)
+			   const struct flow_action_key *act)
 {
-	struct ip_tunnel_info *ip_tun = tcf_tunnel_info(action);
+	struct ip_tunnel_info *ip_tun = (struct ip_tunnel_info *)act->tunnel;
 	int opt_len, opt_cnt, act_start, tot_push_len;
 	u8 *src = ip_tunnel_info_opts(ip_tun);
 
@@ -259,13 +260,13 @@ nfp_fl_push_geneve_options(struct nfp_fl_payload *nfp_fl, int *list_len,
 static int
 nfp_fl_set_ipv4_udp_tun(struct nfp_app *app,
 			struct nfp_fl_set_ipv4_udp_tun *set_tun,
-			const struct tc_action *action,
+			const struct flow_action_key *act,
 			struct nfp_fl_pre_tunnel *pre_tun,
 			enum nfp_flower_tun_type tun_type,
 			struct net_device *netdev)
 {
 	size_t act_size = sizeof(struct nfp_fl_set_ipv4_udp_tun);
-	struct ip_tunnel_info *ip_tun = tcf_tunnel_info(action);
+	const struct ip_tunnel_info *ip_tun = act->tunnel;
 	struct nfp_flower_priv *priv = app->priv;
 	u32 tmp_set_ip_tun_type_index = 0;
 	/* Currently support one pre-tunnel so index is always 0. */
@@ -345,7 +346,7 @@ static void nfp_fl_set_helper32(u32 value, u32 mask, u8 *p_exact, u8 *p_mask)
 }
 
 static int
-nfp_fl_set_eth(const struct tc_action *action, int idx, u32 off,
+nfp_fl_set_eth(const struct flow_action_key *act, int idx, u32 off,
 	       struct nfp_fl_set_eth *set_eth)
 {
 	u32 exact, mask;
@@ -353,8 +354,8 @@ nfp_fl_set_eth(const struct tc_action *action, int idx, u32 off,
 	if (off + 4 > ETH_ALEN * 2)
 		return -EOPNOTSUPP;
 
-	mask = ~tcf_pedit_mask(action, idx);
-	exact = tcf_pedit_val(action, idx);
+	mask = ~act->mangle.mask;
+	exact = act->mangle.val;
 
 	if (exact & ~mask)
 		return -EOPNOTSUPP;
@@ -376,7 +377,7 @@ struct ipv4_ttl_word {
 };
 
 static int
-nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+nfp_fl_set_ip4(const struct flow_action_key *act, int idx, u32 off,
 	       struct nfp_fl_set_ip4_addrs *set_ip_addr,
 	       struct nfp_fl_set_ip4_ttl_tos *set_ip_ttl_tos)
 {
@@ -387,8 +388,8 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
 	__be32 exact, mask;
 
 	/* We are expecting tcf_pedit to return a big endian value */
-	mask = (__force __be32)~tcf_pedit_mask(action, idx);
-	exact = (__force __be32)tcf_pedit_val(action, idx);
+	mask = (__force __be32)~act->mangle.mask;
+	exact = (__force __be32)act->mangle.val;
 
 	if (exact & ~mask)
 		return -EOPNOTSUPP;
@@ -505,7 +506,7 @@ nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask,
 }
 
 static int
-nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+nfp_fl_set_ip6(const struct flow_action_key *act, int idx, u32 off,
 	       struct nfp_fl_set_ipv6_addr *ip_dst,
 	       struct nfp_fl_set_ipv6_addr *ip_src,
 	       struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl)
@@ -515,8 +516,8 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
 	u8 word;
 
 	/* We are expecting tcf_pedit to return a big endian value */
-	mask = (__force __be32)~tcf_pedit_mask(action, idx);
-	exact = (__force __be32)tcf_pedit_val(action, idx);
+	mask = (__force __be32)~act->mangle.mask;
+	exact = (__force __be32)act->mangle.val;
 
 	if (exact & ~mask)
 		return -EOPNOTSUPP;
@@ -541,7 +542,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
 }
 
 static int
-nfp_fl_set_tport(const struct tc_action *action, int idx, u32 off,
+nfp_fl_set_tport(const struct flow_action_key *act, int idx, u32 off,
 		 struct nfp_fl_set_tport *set_tport, int opcode)
 {
 	u32 exact, mask;
@@ -549,8 +550,8 @@ nfp_fl_set_tport(const struct tc_action *action, int idx, u32 off,
 	if (off)
 		return -EOPNOTSUPP;
 
-	mask = ~tcf_pedit_mask(action, idx);
-	exact = tcf_pedit_val(action, idx);
+	mask = ~act->mangle.mask;
+	exact = act->mangle.val;
 
 	if (exact & ~mask)
 		return -EOPNOTSUPP;
@@ -584,7 +585,8 @@ static u32 nfp_fl_csum_l4_to_flag(u8 ip_proto)
 }
 
 static int
-nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
+nfp_fl_pedit(const struct flow_action_key *act,
+	     struct tc_cls_flower_offload *flow,
 	     char *nfp_action, int *a_len, u32 *csum_updated)
 {
 	struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
@@ -595,10 +597,10 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 	struct nfp_fl_set_tport set_tport;
 	struct nfp_fl_set_eth set_eth;
 	enum pedit_header_type htype;
-	int idx, nkeys, err;
 	size_t act_size = 0;
-	u32 offset, cmd;
 	u8 ip_proto = 0;
+	int idx, err;
+	u32 offset;
 
 	memset(&set_ip6_tc_hl_fl, 0, sizeof(set_ip6_tc_hl_fl));
 	memset(&set_ip_ttl_tos, 0, sizeof(set_ip_ttl_tos));
@@ -607,42 +609,35 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 	memset(&set_ip_addr, 0, sizeof(set_ip_addr));
 	memset(&set_tport, 0, sizeof(set_tport));
 	memset(&set_eth, 0, sizeof(set_eth));
-	nkeys = tcf_pedit_nkeys(action);
 
-	for (idx = 0; idx < nkeys; idx++) {
-		cmd = tcf_pedit_cmd(action, idx);
-		htype = tcf_pedit_htype(action, idx);
-		offset = tcf_pedit_offset(action, idx);
+	htype = act->mangle.htype;
+	offset = act->mangle.offset;
 
-		if (cmd != TCA_PEDIT_KEY_EX_CMD_SET)
-			return -EOPNOTSUPP;
-
-		switch (htype) {
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
-			err = nfp_fl_set_eth(action, idx, offset, &set_eth);
-			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
-			err = nfp_fl_set_ip4(action, idx, offset, &set_ip_addr,
-					     &set_ip_ttl_tos);
-			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
-			err = nfp_fl_set_ip6(action, idx, offset, &set_ip6_dst,
-					     &set_ip6_src, &set_ip6_tc_hl_fl);
-			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
-			err = nfp_fl_set_tport(action, idx, offset, &set_tport,
-					       NFP_FL_ACTION_OPCODE_SET_TCP);
-			break;
-		case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
-			err = nfp_fl_set_tport(action, idx, offset, &set_tport,
-					       NFP_FL_ACTION_OPCODE_SET_UDP);
-			break;
-		default:
-			return -EOPNOTSUPP;
-		}
-		if (err)
-			return err;
+	switch (htype) {
+	case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
+		err = nfp_fl_set_eth(act, idx, offset, &set_eth);
+		break;
+	case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
+		err = nfp_fl_set_ip4(act, idx, offset, &set_ip_addr,
+				     &set_ip_ttl_tos);
+		break;
+	case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
+		err = nfp_fl_set_ip6(act, idx, offset, &set_ip6_dst,
+				     &set_ip6_src, &set_ip6_tc_hl_fl);
+		break;
+	case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
+		err = nfp_fl_set_tport(act, idx, offset, &set_tport,
+				       NFP_FL_ACTION_OPCODE_SET_TCP);
+		break;
+	case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
+		err = nfp_fl_set_tport(act, idx, offset, &set_tport,
+				       NFP_FL_ACTION_OPCODE_SET_UDP);
+		break;
+	default:
+		return -EOPNOTSUPP;
 	}
+	if (err)
+		return err;
 
 	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
 		struct flow_match_basic match;
@@ -732,7 +727,7 @@ nfp_fl_pedit(const struct tc_action *action, struct tc_cls_flower_offload *flow,
 }
 
 static int
-nfp_flower_output_action(struct nfp_app *app, const struct tc_action *a,
+nfp_flower_output_action(struct nfp_app *app, const struct flow_action_key *act,
 			 struct nfp_fl_payload *nfp_fl, int *a_len,
 			 struct net_device *netdev, bool last,
 			 enum nfp_flower_tun_type *tun_type, int *tun_out_cnt,
@@ -752,7 +747,7 @@ nfp_flower_output_action(struct nfp_app *app, const struct tc_action *a,
 		return -EOPNOTSUPP;
 
 	output = (struct nfp_fl_output *)&nfp_fl->action_data[*a_len];
-	err = nfp_fl_output(app, output, a, nfp_fl, last, netdev, *tun_type,
+	err = nfp_fl_output(app, output, act, nfp_fl, last, netdev, *tun_type,
 			    tun_out_cnt);
 	if (err)
 		return err;
@@ -763,7 +758,7 @@ nfp_flower_output_action(struct nfp_app *app, const struct tc_action *a,
 		/* nfp_fl_pre_lag returns -err or size of prelag action added.
 		 * This will be 0 if it is not egressing to a lag dev.
 		 */
-		prelag_size = nfp_fl_pre_lag(app, a, nfp_fl, *a_len);
+		prelag_size = nfp_fl_pre_lag(app, act, nfp_fl, *a_len);
 		if (prelag_size < 0)
 			return prelag_size;
 		else if (prelag_size > 0 && (!last || *out_cnt))
@@ -777,7 +772,7 @@ nfp_flower_output_action(struct nfp_app *app, const struct tc_action *a,
 }
 
 static int
-nfp_flower_loop_action(struct nfp_app *app, const struct tc_action *a,
+nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_key *act,
 		       struct tc_cls_flower_offload *flow,
 		       struct nfp_fl_payload *nfp_fl, int *a_len,
 		       struct net_device *netdev,
@@ -790,23 +785,25 @@ nfp_flower_loop_action(struct nfp_app *app, const struct tc_action *a,
 	struct nfp_fl_pop_vlan *pop_v;
 	int err;
 
-	if (is_tcf_gact_shot(a)) {
+	switch (act->id) {
+	case FLOW_ACTION_KEY_DROP:
 		nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_DROP);
-	} else if (is_tcf_mirred_egress_redirect(a)) {
-		err = nfp_flower_output_action(app, a, nfp_fl, a_len, netdev,
+		break;
+	case FLOW_ACTION_KEY_REDIRECT:
+		err = nfp_flower_output_action(app, act, nfp_fl, a_len, netdev,
 					       true, tun_type, tun_out_cnt,
 					       out_cnt, csum_updated);
 		if (err)
 			return err;
-
-	} else if (is_tcf_mirred_egress_mirror(a)) {
-		err = nfp_flower_output_action(app, a, nfp_fl, a_len, netdev,
+		break;
+	case FLOW_ACTION_KEY_MIRRED:
+		err = nfp_flower_output_action(app, act, nfp_fl, a_len, netdev,
 					       false, tun_type, tun_out_cnt,
 					       out_cnt, csum_updated);
 		if (err)
 			return err;
-
-	} else if (is_tcf_vlan(a) && tcf_vlan_action(a) == TCA_VLAN_ACT_POP) {
+		break;
+	case FLOW_ACTION_KEY_VLAN_POP:
 		if (*a_len + sizeof(struct nfp_fl_pop_vlan) > NFP_FL_MAX_A_SIZ)
 			return -EOPNOTSUPP;
 
@@ -815,19 +812,21 @@ nfp_flower_loop_action(struct nfp_app *app, const struct tc_action *a,
 
 		nfp_fl_pop_vlan(pop_v);
 		*a_len += sizeof(struct nfp_fl_pop_vlan);
-	} else if (is_tcf_vlan(a) && tcf_vlan_action(a) == TCA_VLAN_ACT_PUSH) {
+		break;
+	case FLOW_ACTION_KEY_VLAN_PUSH:
 		if (*a_len + sizeof(struct nfp_fl_push_vlan) > NFP_FL_MAX_A_SIZ)
 			return -EOPNOTSUPP;
 
 		psh_v = (struct nfp_fl_push_vlan *)&nfp_fl->action_data[*a_len];
 		nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_NULL);
 
-		nfp_fl_push_vlan(psh_v, a);
+		nfp_fl_push_vlan(psh_v, act);
 		*a_len += sizeof(struct nfp_fl_push_vlan);
-	} else if (is_tcf_tunnel_set(a)) {
-		struct ip_tunnel_info *ip_tun = tcf_tunnel_info(a);
+		break;
+	case FLOW_ACTION_KEY_TUNNEL_ENCAP: {
+		const struct ip_tunnel_info *ip_tun = act->tunnel;
 
-		*tun_type = nfp_fl_get_tun_from_act_l4_port(app, a);
+		*tun_type = nfp_fl_get_tun_from_act_l4_port(app, act);
 		if (*tun_type == NFP_FL_TUNNEL_NONE)
 			return -EOPNOTSUPP;
 
@@ -846,32 +845,36 @@ nfp_flower_loop_action(struct nfp_app *app, const struct tc_action *a,
 		nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_NULL);
 		*a_len += sizeof(struct nfp_fl_pre_tunnel);
 
-		err = nfp_fl_push_geneve_options(nfp_fl, a_len, a);
+		err = nfp_fl_push_geneve_options(nfp_fl, a_len, act);
 		if (err)
 			return err;
 
 		set_tun = (void *)&nfp_fl->action_data[*a_len];
-		err = nfp_fl_set_ipv4_udp_tun(app, set_tun, a, pre_tun,
+		err = nfp_fl_set_ipv4_udp_tun(app, set_tun, act, pre_tun,
 					      *tun_type, netdev);
 		if (err)
 			return err;
 		*a_len += sizeof(struct nfp_fl_set_ipv4_udp_tun);
-	} else if (is_tcf_tunnel_release(a)) {
+		}
+		break;
+	case FLOW_ACTION_KEY_TUNNEL_DECAP:
 		/* Tunnel decap is handled by default so accept action. */
 		return 0;
-	} else if (is_tcf_pedit(a)) {
-		if (nfp_fl_pedit(a, flow, &nfp_fl->action_data[*a_len],
+	case FLOW_ACTION_KEY_MANGLE:
+		if (nfp_fl_pedit(act, flow, &nfp_fl->action_data[*a_len],
 				 a_len, csum_updated))
 			return -EOPNOTSUPP;
-	} else if (is_tcf_csum(a)) {
+		break;
+	case FLOW_ACTION_KEY_CSUM:
 		/* csum action requests recalc of something we have not fixed */
-		if (tcf_csum_update_flags(a) & ~*csum_updated)
+		if (act->csum_flags & ~*csum_updated)
 			return -EOPNOTSUPP;
 		/* If we will correctly fix the csum we can remove it from the
 		 * csum update list. Which will later be used to check support.
 		 */
-		*csum_updated &= ~tcf_csum_update_flags(a);
-	} else {
+		*csum_updated &= ~act->csum_flags;
+		break;
+	default:
 		/* Currently we do not handle any other actions. */
 		return -EOPNOTSUPP;
 	}
@@ -886,7 +889,7 @@ int nfp_flower_compile_action(struct nfp_app *app,
 {
 	int act_len, act_cnt, err, tun_out_cnt, out_cnt, i;
 	enum nfp_flower_tun_type tun_type;
-	const struct tc_action *a;
+	struct flow_action_key *act;
 	u32 csum_updated = 0;
 
 	memset(nfp_flow->action_data, 0, NFP_FL_MAX_A_SIZ);
@@ -897,8 +900,8 @@ int nfp_flower_compile_action(struct nfp_app *app,
 	tun_out_cnt = 0;
 	out_cnt = 0;
 
-	tcf_exts_for_each_action(i, a, flow->exts) {
-		err = nfp_flower_loop_action(app, a, flow, nfp_flow, &act_len,
+	flow_action_for_each(i, act, &flow->rule.action) {
+		err = nfp_flower_loop_action(app, act, flow, nfp_flow, &act_len,
 					     netdev, &tun_type, &tun_out_cnt,
 					     &out_cnt, &csum_updated);
 		if (err)
diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index 81d5b9304229..e71e0ff13452 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -2004,21 +2004,21 @@ int qede_get_arfs_filter_count(struct qede_dev *edev)
 }
 
 static int qede_parse_actions(struct qede_dev *edev,
-			      struct tcf_exts *exts)
+			      struct flow_action *flow_action)
 {
+	const struct flow_action_key *act;
 	int rc = -EINVAL, num_act = 0, i;
-	const struct tc_action *a;
 	bool is_drop = false;
 
-	if (!tcf_exts_has_actions(exts)) {
+	if (!flow_action_has_keys(flow_action)) {
 		DP_NOTICE(edev, "No tc actions received\n");
 		return rc;
 	}
 
-	tcf_exts_for_each_action(i, a, exts) {
+	flow_action_for_each(i, act, flow_action) {
 		num_act++;
 
-		if (is_tcf_gact_shot(a))
+		if (act->id == FLOW_ACTION_KEY_DROP)
 			is_drop = true;
 	}
 
@@ -2235,7 +2235,7 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
 	}
 
 	/* parse tc actions and get the vf_id */
-	if (qede_parse_actions(edev, f->exts))
+	if (qede_parse_actions(edev, &f->rule.action))
 		goto unlock;
 
 	if (qede_flow_find_fltr(edev, &t)) {
-- 
2.11.0


* [PATCH net-next,v2 07/12] cls_flower: don't expose TC actions to drivers anymore
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (5 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 06/12] drivers: net: use flow action infrastructure Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action Pablo Neira Ayuso
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Now that drivers have been converted to use the flow action
infrastructure, remove the exts field, which exposed TC actions to
drivers, from the tc_cls_flower_offload structure.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.
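
With the exts field gone, drivers can only look at the flow_rule
embedded in the offload structure. A minimal sketch, where
foo_setup_flower() and foo_parse_actions() are placeholder names for
a driver's entry point and action parser:

static int foo_setup_flower(struct tc_cls_flower_offload *cls)
{
	struct flow_rule *rule = &cls->rule;

	/* Match side: rule->match.dissector, rule->match.mask and
	 * rule->match.key. Action side: rule->action, walked with
	 * flow_action_for_each(). cls->exts no longer exists.
	 */
	return foo_parse_actions(&rule->action);
}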

 include/net/pkt_cls.h  | 1 -
 net/sched/cls_flower.c | 5 -----
 2 files changed, 6 deletions(-)

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index 7f9a8d5ca945..fe64638034f8 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -769,7 +769,6 @@ struct tc_cls_flower_offload {
 	enum tc_fl_command command;
 	unsigned long cookie;
 	struct flow_rule rule;
-	struct tcf_exts *exts;
 	u32 classid;
 	struct tc_cls_flower_stats stats;
 };
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index ee67f1ae8786..440d475c55d0 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -389,7 +389,6 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 	cls_flower.rule.match.dissector = &f->mask->dissector;
 	cls_flower.rule.match.mask = &f->mask->key;
 	cls_flower.rule.match.key = &f->mkey;
-	cls_flower.exts = &f->exts;
 	cls_flower.classid = f->res.classid;
 
 	if (tc_setup_flow_action(&f->action, &f->exts) < 0)
@@ -425,7 +424,6 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
 	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, NULL);
 	cls_flower.command = TC_CLSFLOWER_STATS;
 	cls_flower.cookie = (unsigned long) f;
-	cls_flower.exts = &f->exts;
 	cls_flower.classid = f->res.classid;
 
 	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
@@ -1484,7 +1482,6 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
 			cls_flower.rule.match.dissector = &mask->dissector;
 			cls_flower.rule.match.mask = &mask->key;
 			cls_flower.rule.match.key = &f->mkey;
-			cls_flower.exts = &f->exts;
 			cls_flower.rule.action.num_keys = f->action.num_keys;
 			cls_flower.rule.action.keys = f->action.keys;
 			cls_flower.classid = f->res.classid;
@@ -1509,7 +1506,6 @@ static void fl_hw_create_tmplt(struct tcf_chain *chain,
 {
 	struct tc_cls_flower_offload cls_flower = {};
 	struct tcf_block *block = chain->block;
-	struct tcf_exts dummy_exts = { 0, };
 
 	cls_flower.common.chain_index = chain->index;
 	cls_flower.command = TC_CLSFLOWER_TMPLT_CREATE;
@@ -1517,7 +1513,6 @@ static void fl_hw_create_tmplt(struct tcf_chain *chain,
 	cls_flower.rule.match.dissector = &tmplt->dissector;
 	cls_flower.rule.match.mask = &tmplt->mask;
 	cls_flower.rule.match.key = &tmplt->dummy_key;
-	cls_flower.exts = &dummy_exts;
 
 	/* We don't care if driver (any of them) fails to handle this
 	 * call. It serves just as a hint for it.
-- 
2.11.0


* [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (6 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 07/12] cls_flower: don't expose TC actions to drivers anymore Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 13:59   ` Jiri Pirko
  2018-11-19  0:15 ` [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator Pablo Neira Ayuso
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

These two actions are needed to support the bcm_sf2 features that are
available through the ethtool_rx_flow interface.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.
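
A minimal sketch of how a driver is expected to consume these two new
actions; foo_parse_wake_queue() is a placeholder name, and what the
driver does with the wake flag and the queue index is hardware
specific:

static void foo_parse_wake_queue(struct flow_action *flow_action,
				 bool *wake, u32 *rx_queue)
{
	const struct flow_action_key *act;
	int i;

	flow_action_for_each(i, act, flow_action) {
		switch (act->id) {
		case FLOW_ACTION_KEY_WAKE:
			/* flag this rule as a wake-up-on-lan filter */
			*wake = true;
			break;
		case FLOW_ACTION_KEY_QUEUE:
			/* act->queue_index selects the RX queue */
			*rx_queue = act->queue_index;
			break;
		default:
			break;
		}
	}
}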

 include/net/flow_dissector.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
index 925c208816f1..7a4683646d5a 100644
--- a/include/net/flow_dissector.h
+++ b/include/net/flow_dissector.h
@@ -418,6 +418,8 @@ enum flow_action_key_id {
 	FLOW_ACTION_KEY_ADD,
 	FLOW_ACTION_KEY_CSUM,
 	FLOW_ACTION_KEY_MARK,
+	FLOW_ACTION_KEY_WAKE,
+	FLOW_ACTION_KEY_QUEUE,
 };
 
 /* This is mirroring enum pedit_header_type definition for easy mapping between
@@ -452,6 +454,7 @@ struct flow_action_key {
 		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
 		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
 		u32			mark;		/* FLOW_ACTION_KEY_MARK */
+		u32			queue_index;	/* FLOW_ACTION_KEY_QUEUE */
 	};
 };
 
-- 
2.11.0


* [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (7 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 14:17   ` Jiri Pirko
  2018-11-19 14:49   ` Jiri Pirko
  2018-11-19  0:15 ` [PATCH net-next,v2 10/12] dsa: bcm_sf2: use flow_rule infrastructure Pablo Neira Ayuso
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This patch adds a function to translate the ethtool_rx_flow_spec
structure to the flow_rule representation.

This allows us to reuse code on the driver side, given that both the
flower and ethtool_rx_flow interfaces use the same representation.
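
From the driver side, the expected usage pattern is (sketch based on
the API added below, where fs is the ethtool_rx_flow_spec to offload):

	struct flow_rule *rule;

	rule = ethtool_rx_flow_rule(fs);
	if (!rule)
		return -ENOMEM;

	/* ... inspect the match via the flow_rule_match_*() helpers and
	 * the single entry in rule->action.keys[] ...
	 */

	ethtool_rx_flow_rule_free(rule);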

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: no changes.

 include/net/flow_dissector.h |   5 ++
 net/core/flow_dissector.c    | 190 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 195 insertions(+)

diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
index 7a4683646d5a..ec9036232538 100644
--- a/include/net/flow_dissector.h
+++ b/include/net/flow_dissector.h
@@ -485,4 +485,9 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
 	return dissector_uses_key(rule->match.dissector, key);
 }
 
+struct ethtool_rx_flow_spec;
+
+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs);
+void ethtool_rx_flow_rule_free(struct flow_rule *rule);
+
 #endif
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index b9368349f0f7..ef5bdb62620c 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -17,6 +17,7 @@
 #include <linux/dccp.h>
 #include <linux/if_tunnel.h>
 #include <linux/if_pppox.h>
+#include <uapi/linux/ethtool.h>
 #include <linux/ppp_defs.h>
 #include <linux/stddef.h>
 #include <linux/if_ether.h>
@@ -276,6 +277,195 @@ void flow_action_free(struct flow_action *flow_action)
 }
 EXPORT_SYMBOL(flow_action_free);
 
+struct ethtool_rx_flow_key {
+	struct flow_dissector_key_basic			basic;
+	union {
+		struct flow_dissector_key_ipv4_addrs	ipv4;
+		struct flow_dissector_key_ipv6_addrs	ipv6;
+	};
+	struct flow_dissector_key_ports			tp;
+	struct flow_dissector_key_ip			ip;
+} __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
+
+struct ethtool_rx_flow_match {
+	struct flow_dissector		dissector;
+	struct ethtool_rx_flow_key	key;
+	struct ethtool_rx_flow_key	mask;
+};
+
+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs)
+{
+	static struct in6_addr zero_addr = {};
+	struct ethtool_rx_flow_match *match;
+	struct flow_action_key *act;
+	struct flow_rule *rule;
+
+	rule = kmalloc(sizeof(struct flow_rule), GFP_KERNEL);
+	if (!rule)
+		return NULL;
+
+	match = kzalloc(sizeof(struct ethtool_rx_flow_match), GFP_KERNEL);
+	if (!match)
+		goto err_match;
+
+	rule->match.dissector	= &match->dissector;
+	rule->match.mask	= &match->mask;
+	rule->match.key		= &match->key;
+
+	match->mask.basic.n_proto = 0xffff;
+
+	switch (fs->flow_type & ~FLOW_EXT) {
+	case TCP_V4_FLOW:
+	case UDP_V4_FLOW: {
+		const struct ethtool_tcpip4_spec *v4_spec, *v4_m_spec;
+
+		match->key.basic.n_proto = htons(ETH_P_IP);
+
+		v4_spec = &fs->h_u.tcp_ip4_spec;
+		v4_m_spec = &fs->m_u.tcp_ip4_spec;
+
+		if (v4_m_spec->ip4src) {
+			match->key.ipv4.src = v4_spec->ip4src;
+			match->mask.ipv4.src = v4_m_spec->ip4src;
+		}
+		if (v4_m_spec->ip4dst) {
+			match->key.ipv4.dst = v4_spec->ip4dst;
+			match->mask.ipv4.dst = v4_m_spec->ip4dst;
+		}
+		if (v4_m_spec->ip4src ||
+		    v4_m_spec->ip4dst) {
+			match->dissector.used_keys |=
+				FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV4_ADDRS] =
+				offsetof(struct ethtool_rx_flow_key, ipv4);
+		}
+		if (v4_m_spec->psrc) {
+			match->key.tp.src = v4_spec->psrc;
+			match->mask.tp.src = v4_m_spec->psrc;
+		}
+		if (v4_m_spec->pdst) {
+			match->key.tp.dst = v4_spec->pdst;
+			match->mask.tp.dst = v4_m_spec->pdst;
+		}
+		if (v4_m_spec->psrc ||
+		    v4_m_spec->pdst) {
+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
+				offsetof(struct ethtool_rx_flow_key, tp);
+		}
+		if (v4_m_spec->tos) {
+			match->key.ip.tos = v4_spec->tos;
+			match->mask.ip.tos = v4_m_spec->tos;
+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
+				offsetof(struct ethtool_rx_flow_key, ip);
+		}
+		}
+		break;
+	case TCP_V6_FLOW:
+	case UDP_V6_FLOW: {
+		const struct ethtool_tcpip6_spec *v6_spec, *v6_m_spec;
+
+		match->key.basic.n_proto = htons(ETH_P_IPV6);
+
+		v6_spec = &fs->h_u.tcp_ip6_spec;
+		v6_m_spec = &fs->m_u.tcp_ip6_spec;
+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
+			memcpy(&match->key.ipv6.src, v6_spec->ip6src,
+			       sizeof(match->key.ipv6.src));
+			memcpy(&match->mask.ipv6.src, v6_m_spec->ip6src,
+			       sizeof(match->mask.ipv6.src));
+		}
+		if (memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
+			memcpy(&match->key.ipv6.dst, v6_spec->ip6dst,
+			       sizeof(match->key.ipv6.dst));
+			memcpy(&match->mask.ipv6.dst, v6_m_spec->ip6dst,
+			       sizeof(match->mask.ipv6.dst));
+		}
+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
+		    memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
+			match->dissector.used_keys |=
+				FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
+				offsetof(struct ethtool_rx_flow_key, ipv6);
+		}
+		if (v6_m_spec->psrc) {
+			match->key.tp.src = v6_spec->psrc;
+			match->mask.tp.src = v6_m_spec->psrc;
+		}
+		if (v6_m_spec->pdst) {
+			match->key.tp.dst = v6_spec->pdst;
+			match->mask.tp.dst = v6_m_spec->pdst;
+		}
+		if (v6_m_spec->psrc ||
+		    v6_m_spec->pdst) {
+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
+				offsetof(struct ethtool_rx_flow_key, tp);
+		}
+		if (v6_m_spec->tclass) {
+			match->key.ip.tos = v6_spec->tclass;
+			match->mask.ip.tos = v6_m_spec->tclass;
+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
+				offsetof(struct ethtool_rx_flow_key, ip);
+		}
+		}
+		break;
+	}
+
+	switch (fs->flow_type & ~FLOW_EXT) {
+	case TCP_V4_FLOW:
+	case TCP_V6_FLOW:
+		match->key.basic.ip_proto = IPPROTO_TCP;
+		break;
+	case UDP_V4_FLOW:
+	case UDP_V6_FLOW:
+		match->key.basic.ip_proto = IPPROTO_UDP;
+		break;
+	}
+	match->mask.basic.ip_proto = 0xff;
+
+	match->dissector.used_keys |= FLOW_DISSECTOR_KEY_BASIC;
+	match->dissector.offset[FLOW_DISSECTOR_KEY_BASIC] =
+		offsetof(struct ethtool_rx_flow_key, basic);
+
+	/* ethtool_rx supports only one single action per rule. */
+	if (flow_action_init(&rule->action, 1) < 0)
+		goto err_action;
+
+	act = &rule->action.keys[0];
+	switch (fs->ring_cookie) {
+	case RX_CLS_FLOW_DISC:
+		act->id = FLOW_ACTION_KEY_DROP;
+		break;
+	case RX_CLS_FLOW_WAKE:
+		act->id = FLOW_ACTION_KEY_WAKE;
+		break;
+	default:
+		act->id = FLOW_ACTION_KEY_QUEUE;
+		act->queue_index = fs->ring_cookie;
+		break;
+	}
+
+	return rule;
+
+err_action:
+	kfree(match);
+err_match:
+	kfree(rule);
+	return NULL;
+}
+EXPORT_SYMBOL(ethtool_rx_flow_rule);
+
+void ethtool_rx_flow_rule_free(struct flow_rule *rule)
+{
+	kfree((struct flow_match *)rule->match.dissector);
+	flow_action_free(&rule->action);
+	kfree(rule);
+}
+EXPORT_SYMBOL(ethtool_rx_flow_rule_free);
+
 /**
  * __skb_flow_get_ports - extract the upper layer ports and return them
  * @skb: sk_buff to extract the ports from
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next,v2 10/12] dsa: bcm_sf2: use flow_rule infrastructure
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (8 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next 11/12] qede: place ethtool_rx_flow_spec code after TC flower codebase Pablo Neira Ayuso
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Update this driver to use the flow_rule infrastructure, so that the same
code can be used to populate the hardware IR from both the ethtool_rx_flow
and cls_flower interfaces.
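
The heart of the conversion, excerpted from the IPv4 path in the diff
below: build a flow_rule from the ethtool flow spec, then feed key and
mask to the same slice-programming helper that a cls_flower parser
would use:

	flow_rule = ethtool_rx_flow_rule(fs);
	if (!flow_rule)
		return -ENOMEM;

	flow_rule_match_ipv4_addrs(flow_rule, &ipv4);
	flow_rule_match_ports(flow_rule, &ports);

	bcm_sf2_cfp_slice_ipv4(priv, ipv4.key, ports.key, slice_num, false);
	bcm_sf2_cfp_slice_ipv4(priv, ipv4.mask, ports.mask, SLICE_NUM_MASK, true);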

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
v2: remove unused variables, requested by David S. Miller.

 drivers/net/dsa/bcm_sf2_cfp.c | 108 +++++++++++++++++++++++++++---------------
 1 file changed, 70 insertions(+), 38 deletions(-)

diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
index e14663ab6dbc..8d8f00c7d43f 100644
--- a/drivers/net/dsa/bcm_sf2_cfp.c
+++ b/drivers/net/dsa/bcm_sf2_cfp.c
@@ -257,7 +257,8 @@ static int bcm_sf2_cfp_act_pol_set(struct bcm_sf2_priv *priv,
 }
 
 static void bcm_sf2_cfp_slice_ipv4(struct bcm_sf2_priv *priv,
-				   struct ethtool_tcpip4_spec *v4_spec,
+				   struct flow_dissector_key_ipv4_addrs *addrs,
+				   struct flow_dissector_key_ports *ports,
 				   unsigned int slice_num,
 				   bool mask)
 {
@@ -278,7 +279,7 @@ static void bcm_sf2_cfp_slice_ipv4(struct bcm_sf2_priv *priv,
 	 * UDF_n_A6		[23:8]
 	 * UDF_n_A5		[7:0]
 	 */
-	reg = be16_to_cpu(v4_spec->pdst) >> 8;
+	reg = be16_to_cpu(ports->dst) >> 8;
 	if (mask)
 		offset = CORE_CFP_MASK_PORT(3);
 	else
@@ -289,9 +290,9 @@ static void bcm_sf2_cfp_slice_ipv4(struct bcm_sf2_priv *priv,
 	 * UDF_n_A4		[23:8]
 	 * UDF_n_A3		[7:0]
 	 */
-	reg = (be16_to_cpu(v4_spec->pdst) & 0xff) << 24 |
-	      (u32)be16_to_cpu(v4_spec->psrc) << 8 |
-	      (be32_to_cpu(v4_spec->ip4dst) & 0x0000ff00) >> 8;
+	reg = (be16_to_cpu(ports->dst) & 0xff) << 24 |
+	      (u32)be16_to_cpu(ports->src) << 8 |
+	      (be32_to_cpu(addrs->dst) & 0x0000ff00) >> 8;
 	if (mask)
 		offset = CORE_CFP_MASK_PORT(2);
 	else
@@ -302,9 +303,9 @@ static void bcm_sf2_cfp_slice_ipv4(struct bcm_sf2_priv *priv,
 	 * UDF_n_A2		[23:8]
 	 * UDF_n_A1		[7:0]
 	 */
-	reg = (u32)(be32_to_cpu(v4_spec->ip4dst) & 0xff) << 24 |
-	      (u32)(be32_to_cpu(v4_spec->ip4dst) >> 16) << 8 |
-	      (be32_to_cpu(v4_spec->ip4src) & 0x0000ff00) >> 8;
+	reg = (u32)(be32_to_cpu(addrs->dst) & 0xff) << 24 |
+	      (u32)(be32_to_cpu(addrs->dst) >> 16) << 8 |
+	      (be32_to_cpu(addrs->src) & 0x0000ff00) >> 8;
 	if (mask)
 		offset = CORE_CFP_MASK_PORT(1);
 	else
@@ -317,8 +318,8 @@ static void bcm_sf2_cfp_slice_ipv4(struct bcm_sf2_priv *priv,
 	 * Slice ID		[3:2]
 	 * Slice valid		[1:0]
 	 */
-	reg = (u32)(be32_to_cpu(v4_spec->ip4src) & 0xff) << 24 |
-	      (u32)(be32_to_cpu(v4_spec->ip4src) >> 16) << 8 |
+	reg = (u32)(be32_to_cpu(addrs->src) & 0xff) << 24 |
+	      (u32)(be32_to_cpu(addrs->src) >> 16) << 8 |
 	      SLICE_NUM(slice_num) | SLICE_VALID;
 	if (mask)
 		offset = CORE_CFP_MASK_PORT(0);
@@ -332,9 +333,13 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 				     unsigned int queue_num,
 				     struct ethtool_rx_flow_spec *fs)
 {
-	struct ethtool_tcpip4_spec *v4_spec, *v4_m_spec;
 	const struct cfp_udf_layout *layout;
 	unsigned int slice_num, rule_index;
+	struct flow_match_ipv4_addrs ipv4;
+	struct flow_match_ports ports;
+	struct flow_match_basic basic;
+	struct flow_rule *flow_rule;
+	struct flow_match_ip ip;
 	u8 ip_proto, ip_frag;
 	u8 num_udf;
 	u32 reg;
@@ -343,13 +348,9 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	switch (fs->flow_type & ~FLOW_EXT) {
 	case TCP_V4_FLOW:
 		ip_proto = IPPROTO_TCP;
-		v4_spec = &fs->h_u.tcp_ip4_spec;
-		v4_m_spec = &fs->m_u.tcp_ip4_spec;
 		break;
 	case UDP_V4_FLOW:
 		ip_proto = IPPROTO_UDP;
-		v4_spec = &fs->h_u.udp_ip4_spec;
-		v4_m_spec = &fs->m_u.udp_ip4_spec;
 		break;
 	default:
 		return -EINVAL;
@@ -367,11 +368,22 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	if (rule_index > bcm_sf2_cfp_rule_size(priv))
 		return -ENOSPC;
 
+	flow_rule = ethtool_rx_flow_rule(fs);
+	if (!flow_rule)
+		return -ENOMEM;
+
+	flow_rule_match_ipv4_addrs(flow_rule, &ipv4);
+	flow_rule_match_ports(flow_rule, &ports);
+	flow_rule_match_basic(flow_rule, &basic);
+	flow_rule_match_ip(flow_rule, &ip);
+
 	layout = &udf_tcpip4_layout;
 	/* We only use one UDF slice for now */
 	slice_num = bcm_sf2_get_slice_number(layout, 0);
-	if (slice_num == UDF_NUM_SLICES)
-		return -EINVAL;
+	if (slice_num == UDF_NUM_SLICES) {
+		ret = -EINVAL;
+		goto out_err_flow_rule;
+	}
 
 	num_udf = bcm_sf2_get_num_udf_slices(layout->udfs[slice_num].slices);
 
@@ -398,9 +410,10 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	 * Reserved		[1]
 	 * UDF_Valid[8]		[0]
 	 */
-	core_writel(priv, v4_spec->tos << IPTOS_SHIFT |
-		    ip_proto << IPPROTO_SHIFT | ip_frag << IP_FRAG_SHIFT |
-		    udf_upper_bits(num_udf),
+	core_writel(priv, ip.key->tos << IPTOS_SHIFT |
+			  basic.key->ip_proto << IPPROTO_SHIFT |
+			  ip_frag << IP_FRAG_SHIFT |
+			  udf_upper_bits(num_udf),
 		    CORE_CFP_DATA_PORT(6));
 
 	/* Mask with the specific layout for IPv4 packets */
@@ -417,8 +430,8 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	core_writel(priv, udf_lower_bits(num_udf) << 24, CORE_CFP_MASK_PORT(5));
 
 	/* Program the match and the mask */
-	bcm_sf2_cfp_slice_ipv4(priv, v4_spec, slice_num, false);
-	bcm_sf2_cfp_slice_ipv4(priv, v4_m_spec, SLICE_NUM_MASK, true);
+	bcm_sf2_cfp_slice_ipv4(priv, ipv4.key, ports.key, slice_num, false);
+	bcm_sf2_cfp_slice_ipv4(priv, ipv4.mask, ports.mask, SLICE_NUM_MASK, true);
 
 	/* Insert into TCAM now */
 	bcm_sf2_cfp_rule_addr_set(priv, rule_index);
@@ -426,14 +439,14 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	ret = bcm_sf2_cfp_op(priv, OP_SEL_WRITE | TCAM_SEL);
 	if (ret) {
 		pr_err("TCAM entry at addr %d failed\n", rule_index);
-		return ret;
+		goto out_err_flow_rule;
 	}
 
 	/* Insert into Action and policer RAMs now */
 	ret = bcm_sf2_cfp_act_pol_set(priv, rule_index, port_num,
 				      queue_num, true);
 	if (ret)
-		return ret;
+		goto out_err_flow_rule;
 
 	/* Turn on CFP for this rule now */
 	reg = core_readl(priv, CORE_CFP_CTL_REG);
@@ -446,6 +459,10 @@ static int bcm_sf2_cfp_ipv4_rule_set(struct bcm_sf2_priv *priv, int port,
 	fs->location = rule_index;
 
 	return 0;
+
+out_err_flow_rule:
+	ethtool_rx_flow_rule_free(flow_rule);
+	return ret;
 }
 
 static void bcm_sf2_cfp_slice_ipv6(struct bcm_sf2_priv *priv,
@@ -584,6 +601,10 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	struct ethtool_tcpip6_spec *v6_spec, *v6_m_spec;
 	unsigned int slice_num, rule_index[2];
 	const struct cfp_udf_layout *layout;
+	struct flow_match_ipv6_addrs ipv6;
+	struct flow_match_ports ports;
+	struct flow_match_basic basic;
+	struct flow_rule *flow_rule;
 	u8 ip_proto, ip_frag;
 	int ret = 0;
 	u8 num_udf;
@@ -645,6 +666,15 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 		goto out_err;
 	}
 
+	flow_rule = ethtool_rx_flow_rule(fs);
+	if (!flow_rule) {
+		ret = -ENOMEM;
+		goto out_err;
+	}
+	flow_rule_match_ipv6_addrs(flow_rule, &ipv6);
+	flow_rule_match_ports(flow_rule, &ports);
+	flow_rule_match_basic(flow_rule, &basic);
+
 	/* Apply the UDF layout for this filter */
 	bcm_sf2_cfp_udf_set(priv, layout, slice_num);
 
@@ -668,7 +698,7 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	 * Reserved		[1]
 	 * UDF_Valid[8]		[0]
 	 */
-	reg = 1 << L3_FRAMING_SHIFT | ip_proto << IPPROTO_SHIFT |
+	reg = 1 << L3_FRAMING_SHIFT | basic.key->ip_proto << IPPROTO_SHIFT |
 		ip_frag << IP_FRAG_SHIFT | udf_upper_bits(num_udf);
 	core_writel(priv, reg, CORE_CFP_DATA_PORT(6));
 
@@ -688,10 +718,10 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	core_writel(priv, udf_lower_bits(num_udf) << 24, CORE_CFP_MASK_PORT(5));
 
 	/* Slice the IPv6 source address and port */
-	bcm_sf2_cfp_slice_ipv6(priv, v6_spec->ip6src, v6_spec->psrc,
-				slice_num, false);
-	bcm_sf2_cfp_slice_ipv6(priv, v6_m_spec->ip6src, v6_m_spec->psrc,
-				SLICE_NUM_MASK, true);
+	bcm_sf2_cfp_slice_ipv6(priv, ipv6.key->src.in6_u.u6_addr32,
+			       ports.key->src, slice_num, false);
+	bcm_sf2_cfp_slice_ipv6(priv, ipv6.mask->src.in6_u.u6_addr32,
+			       ports.mask->src, SLICE_NUM_MASK, true);
 
 	/* Insert into TCAM now because we need to insert a second rule */
 	bcm_sf2_cfp_rule_addr_set(priv, rule_index[0]);
@@ -699,20 +729,20 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	ret = bcm_sf2_cfp_op(priv, OP_SEL_WRITE | TCAM_SEL);
 	if (ret) {
 		pr_err("TCAM entry at addr %d failed\n", rule_index[0]);
-		goto out_err;
+		goto out_err_flow_rule;
 	}
 
 	/* Insert into Action and policer RAMs now */
 	ret = bcm_sf2_cfp_act_pol_set(priv, rule_index[0], port_num,
 				      queue_num, false);
 	if (ret)
-		goto out_err;
+		goto out_err_flow_rule;
 
 	/* Now deal with the second slice to chain this rule */
 	slice_num = bcm_sf2_get_slice_number(layout, slice_num + 1);
 	if (slice_num == UDF_NUM_SLICES) {
 		ret = -EINVAL;
-		goto out_err;
+		goto out_err_flow_rule;
 	}
 
 	num_udf = bcm_sf2_get_num_udf_slices(layout->udfs[slice_num].slices);
@@ -748,10 +778,10 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	/* Mask all */
 	core_writel(priv, 0, CORE_CFP_MASK_PORT(5));
 
-	bcm_sf2_cfp_slice_ipv6(priv, v6_spec->ip6dst, v6_spec->pdst, slice_num,
-			       false);
-	bcm_sf2_cfp_slice_ipv6(priv, v6_m_spec->ip6dst, v6_m_spec->pdst,
-			       SLICE_NUM_MASK, true);
+	bcm_sf2_cfp_slice_ipv6(priv, ipv6.key->dst.in6_u.u6_addr32,
+			       ports.key->dst, slice_num, false);
+	bcm_sf2_cfp_slice_ipv6(priv, ipv6.mask->dst.in6_u.u6_addr32,
+			       ports.mask->dst, SLICE_NUM_MASK, true);
 
 	/* Insert into TCAM now */
 	bcm_sf2_cfp_rule_addr_set(priv, rule_index[1]);
@@ -759,7 +789,7 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	ret = bcm_sf2_cfp_op(priv, OP_SEL_WRITE | TCAM_SEL);
 	if (ret) {
 		pr_err("TCAM entry at addr %d failed\n", rule_index[1]);
-		goto out_err;
+		goto out_err_flow_rule;
 	}
 
 	/* Insert into Action and policer RAMs now, set chain ID to
@@ -768,7 +798,7 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 	ret = bcm_sf2_cfp_act_pol_set(priv, rule_index[1], port_num,
 				      queue_num, true);
 	if (ret)
-		goto out_err;
+		goto out_err_flow_rule;
 
 	/* Turn on CFP for this rule now */
 	reg = core_readl(priv, CORE_CFP_CTL_REG);
@@ -784,6 +814,8 @@ static int bcm_sf2_cfp_ipv6_rule_set(struct bcm_sf2_priv *priv, int port,
 
 	return ret;
 
+out_err_flow_rule:
+	ethtool_rx_flow_rule_free(flow_rule);
 out_err:
 	clear_bit(rule_index[1], priv->cfp.used);
 	return ret;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next 11/12] qede: place ethtool_rx_flow_spec code after TC flower codebase
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (9 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next,v2 10/12] dsa: bcm_sf2: use flow_rule infrastructure Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19  0:15 ` [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code Pablo Neira Ayuso
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

This is a preparation patch to reuse the existing TC flower codebase
from ethtool_rx_flow_spec.

This patch merely moves the core ethtool_rx_flow_spec parser after the
tc flower offload driver code, so we can avoid a few forward function
declarations in the follow-up patch.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
 drivers/net/ethernet/qlogic/qede/qede_filter.c | 264 ++++++++++++-------------
 1 file changed, 132 insertions(+), 132 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index e71e0ff13452..aca302c3261b 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -1791,72 +1791,6 @@ static int qede_flow_spec_to_tuple_udpv6(struct qede_dev *edev,
 	return 0;
 }
 
-static int qede_flow_spec_to_tuple(struct qede_dev *edev,
-				   struct qede_arfs_tuple *t,
-				   struct ethtool_rx_flow_spec *fs)
-{
-	memset(t, 0, sizeof(*t));
-
-	if (qede_flow_spec_validate_unused(edev, fs))
-		return -EOPNOTSUPP;
-
-	switch ((fs->flow_type & ~FLOW_EXT)) {
-	case TCP_V4_FLOW:
-		return qede_flow_spec_to_tuple_tcpv4(edev, t, fs);
-	case UDP_V4_FLOW:
-		return qede_flow_spec_to_tuple_udpv4(edev, t, fs);
-	case TCP_V6_FLOW:
-		return qede_flow_spec_to_tuple_tcpv6(edev, t, fs);
-	case UDP_V6_FLOW:
-		return qede_flow_spec_to_tuple_udpv6(edev, t, fs);
-	default:
-		DP_VERBOSE(edev, NETIF_MSG_IFUP,
-			   "Can't support flow of type %08x\n", fs->flow_type);
-		return -EOPNOTSUPP;
-	}
-
-	return 0;
-}
-
-static int qede_flow_spec_validate(struct qede_dev *edev,
-				   struct ethtool_rx_flow_spec *fs,
-				   struct qede_arfs_tuple *t)
-{
-	if (fs->location >= QEDE_RFS_MAX_FLTR) {
-		DP_INFO(edev, "Location out-of-bounds\n");
-		return -EINVAL;
-	}
-
-	/* Check location isn't already in use */
-	if (test_bit(fs->location, edev->arfs->arfs_fltr_bmap)) {
-		DP_INFO(edev, "Location already in use\n");
-		return -EINVAL;
-	}
-
-	/* Check if the filtering-mode could support the filter */
-	if (edev->arfs->filter_count &&
-	    edev->arfs->mode != t->mode) {
-		DP_INFO(edev,
-			"flow_spec would require filtering mode %08x, but %08x is configured\n",
-			t->mode, edev->arfs->filter_count);
-		return -EINVAL;
-	}
-
-	/* If drop requested then no need to validate other data */
-	if (fs->ring_cookie == RX_CLS_FLOW_DISC)
-		return 0;
-
-	if (ethtool_get_flow_spec_ring_vf(fs->ring_cookie))
-		return 0;
-
-	if (fs->ring_cookie >= QEDE_RSS_COUNT(edev)) {
-		DP_INFO(edev, "Queue out-of-bounds\n");
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 /* Must be called while qede lock is held */
 static struct qede_arfs_fltr_node *
 qede_flow_find_fltr(struct qede_dev *edev, struct qede_arfs_tuple *t)
@@ -1896,72 +1830,6 @@ static void qede_flow_set_destination(struct qede_dev *edev,
 			   "Configuring N-tuple for VF 0x%02x\n", n->vfid - 1);
 }
 
-int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
-{
-	struct ethtool_rx_flow_spec *fsp = &info->fs;
-	struct qede_arfs_fltr_node *n;
-	struct qede_arfs_tuple t;
-	int min_hlen, rc;
-
-	__qede_lock(edev);
-
-	if (!edev->arfs) {
-		rc = -EPERM;
-		goto unlock;
-	}
-
-	/* Translate the flow specification into something fitting our DB */
-	rc = qede_flow_spec_to_tuple(edev, &t, fsp);
-	if (rc)
-		goto unlock;
-
-	/* Make sure location is valid and filter isn't already set */
-	rc = qede_flow_spec_validate(edev, fsp, &t);
-	if (rc)
-		goto unlock;
-
-	if (qede_flow_find_fltr(edev, &t)) {
-		rc = -EINVAL;
-		goto unlock;
-	}
-
-	n = kzalloc(sizeof(*n), GFP_KERNEL);
-	if (!n) {
-		rc = -ENOMEM;
-		goto unlock;
-	}
-
-	min_hlen = qede_flow_get_min_header_size(&t);
-	n->data = kzalloc(min_hlen, GFP_KERNEL);
-	if (!n->data) {
-		kfree(n);
-		rc = -ENOMEM;
-		goto unlock;
-	}
-
-	n->sw_id = fsp->location;
-	set_bit(n->sw_id, edev->arfs->arfs_fltr_bmap);
-	n->buf_len = min_hlen;
-
-	memcpy(&n->tuple, &t, sizeof(n->tuple));
-
-	qede_flow_set_destination(edev, n, fsp);
-
-	/* Build a minimal header according to the flow */
-	n->tuple.build_hdr(&n->tuple, n->data);
-
-	rc = qede_enqueue_fltr_and_config_searcher(edev, n, 0);
-	if (rc)
-		goto unlock;
-
-	qede_configure_arfs_fltr(edev, n, n->rxq_id, true);
-	rc = qede_poll_arfs_filter_config(edev, n);
-unlock:
-	__qede_unlock(edev);
-
-	return rc;
-}
-
 int qede_delete_flow_filter(struct qede_dev *edev, u64 cookie)
 {
 	struct qede_arfs_fltr_node *fltr = NULL;
@@ -2277,3 +2145,135 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
 	__qede_unlock(edev);
 	return rc;
 }
+
+static int qede_flow_spec_validate(struct qede_dev *edev,
+				   struct ethtool_rx_flow_spec *fs,
+				   struct qede_arfs_tuple *t)
+{
+	if (fs->location >= QEDE_RFS_MAX_FLTR) {
+		DP_INFO(edev, "Location out-of-bounds\n");
+		return -EINVAL;
+	}
+
+	/* Check location isn't already in use */
+	if (test_bit(fs->location, edev->arfs->arfs_fltr_bmap)) {
+		DP_INFO(edev, "Location already in use\n");
+		return -EINVAL;
+	}
+
+	/* Check if the filtering-mode could support the filter */
+	if (edev->arfs->filter_count &&
+	    edev->arfs->mode != t->mode) {
+		DP_INFO(edev,
+			"flow_spec would require filtering mode %08x, but %08x is configured\n",
+			t->mode, edev->arfs->filter_count);
+		return -EINVAL;
+	}
+
+	/* If drop requested then no need to validate other data */
+	if (fs->ring_cookie == RX_CLS_FLOW_DISC)
+		return 0;
+
+	if (ethtool_get_flow_spec_ring_vf(fs->ring_cookie))
+		return 0;
+
+	if (fs->ring_cookie >= QEDE_RSS_COUNT(edev)) {
+		DP_INFO(edev, "Queue out-of-bounds\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int qede_flow_spec_to_tuple(struct qede_dev *edev,
+				   struct qede_arfs_tuple *t,
+				   struct ethtool_rx_flow_spec *fs)
+{
+	memset(t, 0, sizeof(*t));
+
+	if (qede_flow_spec_validate_unused(edev, fs))
+		return -EOPNOTSUPP;
+
+	switch ((fs->flow_type & ~FLOW_EXT)) {
+	case TCP_V4_FLOW:
+		return qede_flow_spec_to_tuple_tcpv4(edev, t, fs);
+	case UDP_V4_FLOW:
+		return qede_flow_spec_to_tuple_udpv4(edev, t, fs);
+	case TCP_V6_FLOW:
+		return qede_flow_spec_to_tuple_tcpv6(edev, t, fs);
+	case UDP_V6_FLOW:
+		return qede_flow_spec_to_tuple_udpv6(edev, t, fs);
+	default:
+		DP_VERBOSE(edev, NETIF_MSG_IFUP,
+			   "Can't support flow of type %08x\n", fs->flow_type);
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
+{
+	struct ethtool_rx_flow_spec *fsp = &info->fs;
+	struct qede_arfs_fltr_node *n;
+	struct qede_arfs_tuple t;
+	int min_hlen, rc;
+
+	__qede_lock(edev);
+
+	if (!edev->arfs) {
+		rc = -EPERM;
+		goto unlock;
+	}
+
+	/* Translate the flow specification into something fitting our DB */
+	rc = qede_flow_spec_to_tuple(edev, &t, fsp);
+	if (rc)
+		goto unlock;
+
+	/* Make sure location is valid and filter isn't already set */
+	rc = qede_flow_spec_validate(edev, fsp, &t);
+	if (rc)
+		goto unlock;
+
+	if (qede_flow_find_fltr(edev, &t)) {
+		rc = -EINVAL;
+		goto unlock;
+	}
+
+	n = kzalloc(sizeof(*n), GFP_KERNEL);
+	if (!n) {
+		rc = -ENOMEM;
+		goto unlock;
+	}
+
+	min_hlen = qede_flow_get_min_header_size(&t);
+	n->data = kzalloc(min_hlen, GFP_KERNEL);
+	if (!n->data) {
+		kfree(n);
+		rc = -ENOMEM;
+		goto unlock;
+	}
+
+	n->sw_id = fsp->location;
+	set_bit(n->sw_id, edev->arfs->arfs_fltr_bmap);
+	n->buf_len = min_hlen;
+
+	memcpy(&n->tuple, &t, sizeof(n->tuple));
+
+	qede_flow_set_destination(edev, n, fsp);
+
+	/* Build a minimal header according to the flow */
+	n->tuple.build_hdr(&n->tuple, n->data);
+
+	rc = qede_enqueue_fltr_and_config_searcher(edev, n, 0);
+	if (rc)
+		goto unlock;
+
+	qede_configure_arfs_fltr(edev, n, n->rxq_id, true);
+	rc = qede_poll_arfs_filter_config(edev, n);
+unlock:
+	__qede_unlock(edev);
+
+	return rc;
+}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (10 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next 11/12] qede: place ethtool_rx_flow_spec code after TC flower codebase Pablo Neira Ayuso
@ 2018-11-19  0:15 ` Pablo Neira Ayuso
  2018-11-19 16:00   ` Jiri Pirko
  2018-11-19 21:44   ` Chopra, Manish
  2018-11-19  9:20 ` [PATCH 00/12 net-next,v2] add flow_rule infrastructure Jose Abreu
  2018-11-19 20:12 ` David Miller
  13 siblings, 2 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19  0:15 UTC (permalink / raw)
  To: netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

The qede driver supports both ethtool_rx_flow_spec and flower; the two
codebases look very similar.

This patch uses the ethtool_rx_flow_rule() infrastructure to remove the
duplicated ethtool_rx_flow_spec parser and consolidate ACL offload
support around the flow_rule infrastructure.

Furthermore, more code can be consolidated by merging
qede_add_cls_rule() and qede_add_tc_flower_fltr(), as these two
functions also look very similar.

This driver currently provides simple ACL support, such as 5-tuple
matching, drop policy and queue to CPU.

Drivers that support more features can benefit from this
infrastructure to remove even more redundant code.
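
The resulting ethtool path, condensed from the diff below: build a
flow_rule from the flow spec, run it through the existing flower
parser, then validate the actions:

	flow_rule = ethtool_rx_flow_rule(fs);
	if (!flow_rule)
		return -ENOMEM;

	f.rule = *flow_rule;
	if (qede_parse_flower_attr(edev, proto, &f, t))
		err = -EINVAL;
	else
		err = qede_flow_spec_validate(edev, &f.rule.action, t,
					      fs->location);

	ethtool_rx_flow_rule_free(flow_rule);
	return err;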

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
Note that, after this patch, qede_add_cls_rule() and
qede_add_tc_flower_fltr() can also be consolidated, since their code is
largely redundant.

 drivers/net/ethernet/qlogic/qede/qede_filter.c | 246 ++++++-------------------
 1 file changed, 53 insertions(+), 193 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index aca302c3261b..f82b26ba8f80 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -1578,30 +1578,6 @@ static void qede_flow_build_ipv6_hdr(struct qede_arfs_tuple *t,
 	ports[1] = t->dst_port;
 }
 
-/* Validate fields which are set and not accepted by the driver */
-static int qede_flow_spec_validate_unused(struct qede_dev *edev,
-					  struct ethtool_rx_flow_spec *fs)
-{
-	if (fs->flow_type & FLOW_MAC_EXT) {
-		DP_INFO(edev, "Don't support MAC extensions\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->flow_type & FLOW_EXT) &&
-	    (fs->h_ext.vlan_etype || fs->h_ext.vlan_tci)) {
-		DP_INFO(edev, "Don't support vlan-based classification\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->flow_type & FLOW_EXT) &&
-	    (fs->h_ext.data[0] || fs->h_ext.data[1])) {
-		DP_INFO(edev, "Don't support user defined data\n");
-		return -EOPNOTSUPP;
-	}
-
-	return 0;
-}
-
 static int qede_set_v4_tuple_to_profile(struct qede_dev *edev,
 					struct qede_arfs_tuple *t)
 {
@@ -1665,132 +1641,6 @@ static int qede_set_v6_tuple_to_profile(struct qede_dev *edev,
 	return 0;
 }
 
-static int qede_flow_spec_to_tuple_ipv4_common(struct qede_dev *edev,
-					       struct qede_arfs_tuple *t,
-					       struct ethtool_rx_flow_spec *fs)
-{
-	if ((fs->h_u.tcp_ip4_spec.ip4src &
-	     fs->m_u.tcp_ip4_spec.ip4src) != fs->h_u.tcp_ip4_spec.ip4src) {
-		DP_INFO(edev, "Don't support IP-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->h_u.tcp_ip4_spec.ip4dst &
-	     fs->m_u.tcp_ip4_spec.ip4dst) != fs->h_u.tcp_ip4_spec.ip4dst) {
-		DP_INFO(edev, "Don't support IP-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->h_u.tcp_ip4_spec.psrc &
-	     fs->m_u.tcp_ip4_spec.psrc) != fs->h_u.tcp_ip4_spec.psrc) {
-		DP_INFO(edev, "Don't support port-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->h_u.tcp_ip4_spec.pdst &
-	     fs->m_u.tcp_ip4_spec.pdst) != fs->h_u.tcp_ip4_spec.pdst) {
-		DP_INFO(edev, "Don't support port-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if (fs->h_u.tcp_ip4_spec.tos) {
-		DP_INFO(edev, "Don't support tos\n");
-		return -EOPNOTSUPP;
-	}
-
-	t->eth_proto = htons(ETH_P_IP);
-	t->src_ipv4 = fs->h_u.tcp_ip4_spec.ip4src;
-	t->dst_ipv4 = fs->h_u.tcp_ip4_spec.ip4dst;
-	t->src_port = fs->h_u.tcp_ip4_spec.psrc;
-	t->dst_port = fs->h_u.tcp_ip4_spec.pdst;
-
-	return qede_set_v4_tuple_to_profile(edev, t);
-}
-
-static int qede_flow_spec_to_tuple_tcpv4(struct qede_dev *edev,
-					 struct qede_arfs_tuple *t,
-					 struct ethtool_rx_flow_spec *fs)
-{
-	t->ip_proto = IPPROTO_TCP;
-
-	if (qede_flow_spec_to_tuple_ipv4_common(edev, t, fs))
-		return -EINVAL;
-
-	return 0;
-}
-
-static int qede_flow_spec_to_tuple_udpv4(struct qede_dev *edev,
-					 struct qede_arfs_tuple *t,
-					 struct ethtool_rx_flow_spec *fs)
-{
-	t->ip_proto = IPPROTO_UDP;
-
-	if (qede_flow_spec_to_tuple_ipv4_common(edev, t, fs))
-		return -EINVAL;
-
-	return 0;
-}
-
-static int qede_flow_spec_to_tuple_ipv6_common(struct qede_dev *edev,
-					       struct qede_arfs_tuple *t,
-					       struct ethtool_rx_flow_spec *fs)
-{
-	struct in6_addr zero_addr;
-
-	memset(&zero_addr, 0, sizeof(zero_addr));
-
-	if ((fs->h_u.tcp_ip6_spec.psrc &
-	     fs->m_u.tcp_ip6_spec.psrc) != fs->h_u.tcp_ip6_spec.psrc) {
-		DP_INFO(edev, "Don't support port-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if ((fs->h_u.tcp_ip6_spec.pdst &
-	     fs->m_u.tcp_ip6_spec.pdst) != fs->h_u.tcp_ip6_spec.pdst) {
-		DP_INFO(edev, "Don't support port-masks\n");
-		return -EOPNOTSUPP;
-	}
-
-	if (fs->h_u.tcp_ip6_spec.tclass) {
-		DP_INFO(edev, "Don't support tclass\n");
-		return -EOPNOTSUPP;
-	}
-
-	t->eth_proto = htons(ETH_P_IPV6);
-	memcpy(&t->src_ipv6, &fs->h_u.tcp_ip6_spec.ip6src,
-	       sizeof(struct in6_addr));
-	memcpy(&t->dst_ipv6, &fs->h_u.tcp_ip6_spec.ip6dst,
-	       sizeof(struct in6_addr));
-	t->src_port = fs->h_u.tcp_ip6_spec.psrc;
-	t->dst_port = fs->h_u.tcp_ip6_spec.pdst;
-
-	return qede_set_v6_tuple_to_profile(edev, t, &zero_addr);
-}
-
-static int qede_flow_spec_to_tuple_tcpv6(struct qede_dev *edev,
-					 struct qede_arfs_tuple *t,
-					 struct ethtool_rx_flow_spec *fs)
-{
-	t->ip_proto = IPPROTO_TCP;
-
-	if (qede_flow_spec_to_tuple_ipv6_common(edev, t, fs))
-		return -EINVAL;
-
-	return 0;
-}
-
-static int qede_flow_spec_to_tuple_udpv6(struct qede_dev *edev,
-					 struct qede_arfs_tuple *t,
-					 struct ethtool_rx_flow_spec *fs)
-{
-	t->ip_proto = IPPROTO_UDP;
-
-	if (qede_flow_spec_to_tuple_ipv6_common(edev, t, fs))
-		return -EINVAL;
-
-	return 0;
-}
-
 /* Must be called while qede lock is held */
 static struct qede_arfs_fltr_node *
 qede_flow_find_fltr(struct qede_dev *edev, struct qede_arfs_tuple *t)
@@ -1875,25 +1725,32 @@ static int qede_parse_actions(struct qede_dev *edev,
 			      struct flow_action *flow_action)
 {
 	const struct flow_action_key *act;
-	int rc = -EINVAL, num_act = 0, i;
-	bool is_drop = false;
+	int i;
 
 	if (!flow_action_has_keys(flow_action)) {
-		DP_NOTICE(edev, "No tc actions received\n");
-		return rc;
+		DP_NOTICE(edev, "No actions received\n");
+		return -EINVAL;
 	}
 
 	flow_action_for_each(i, act, flow_action) {
-		num_act++;
+		switch (act->id) {
+		case FLOW_ACTION_KEY_DROP:
+			break;
+		case FLOW_ACTION_KEY_QUEUE:
+			if (ethtool_get_flow_spec_ring_vf(act->queue_index))
+				break;
 
-		if (act->id == FLOW_ACTION_KEY_DROP)
-			is_drop = true;
+			if (act->queue_index >= QEDE_RSS_COUNT(edev)) {
+				DP_INFO(edev, "Queue out-of-bounds\n");
+				return -EINVAL;
+			}
+			break;
+		default:
+			return -EINVAL;
+		}
 	}
 
-	if (num_act == 1 && is_drop)
-		return 0;
-
-	return rc;
+	return 0;
 }
 
 static int
@@ -2147,16 +2004,17 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
 }
 
 static int qede_flow_spec_validate(struct qede_dev *edev,
-				   struct ethtool_rx_flow_spec *fs,
-				   struct qede_arfs_tuple *t)
+				   struct flow_action *flow_action,
+				   struct qede_arfs_tuple *t,
+				   __u32 location)
 {
-	if (fs->location >= QEDE_RFS_MAX_FLTR) {
+	if (location >= QEDE_RFS_MAX_FLTR) {
 		DP_INFO(edev, "Location out-of-bounds\n");
 		return -EINVAL;
 	}
 
 	/* Check location isn't already in use */
-	if (test_bit(fs->location, edev->arfs->arfs_fltr_bmap)) {
+	if (test_bit(location, edev->arfs->arfs_fltr_bmap)) {
 		DP_INFO(edev, "Location already in use\n");
 		return -EINVAL;
 	}
@@ -2170,46 +2028,53 @@ static int qede_flow_spec_validate(struct qede_dev *edev,
 		return -EINVAL;
 	}
 
-	/* If drop requested then no need to validate other data */
-	if (fs->ring_cookie == RX_CLS_FLOW_DISC)
-		return 0;
-
-	if (ethtool_get_flow_spec_ring_vf(fs->ring_cookie))
-		return 0;
-
-	if (fs->ring_cookie >= QEDE_RSS_COUNT(edev)) {
-		DP_INFO(edev, "Queue out-of-bounds\n");
+	if (qede_parse_actions(edev, flow_action))
 		return -EINVAL;
-	}
 
 	return 0;
 }
 
-static int qede_flow_spec_to_tuple(struct qede_dev *edev,
-				   struct qede_arfs_tuple *t,
-				   struct ethtool_rx_flow_spec *fs)
+static int qede_flow_spec_to_rule(struct qede_dev *edev,
+				  struct qede_arfs_tuple *t,
+				  struct ethtool_rx_flow_spec *fs)
 {
-	memset(t, 0, sizeof(*t));
-
-	if (qede_flow_spec_validate_unused(edev, fs))
-		return -EOPNOTSUPP;
+	struct tc_cls_flower_offload f = {};
+	struct flow_rule *flow_rule;
+	__be16 proto;
+	int err = 0;
 
 	switch ((fs->flow_type & ~FLOW_EXT)) {
 	case TCP_V4_FLOW:
-		return qede_flow_spec_to_tuple_tcpv4(edev, t, fs);
 	case UDP_V4_FLOW:
-		return qede_flow_spec_to_tuple_udpv4(edev, t, fs);
+		proto = htons(ETH_P_IP);
+		break;
 	case TCP_V6_FLOW:
-		return qede_flow_spec_to_tuple_tcpv6(edev, t, fs);
 	case UDP_V6_FLOW:
-		return qede_flow_spec_to_tuple_udpv6(edev, t, fs);
+		proto = htons(ETH_P_IPV6);
+		break;
 	default:
 		DP_VERBOSE(edev, NETIF_MSG_IFUP,
 			   "Can't support flow of type %08x\n", fs->flow_type);
 		return -EOPNOTSUPP;
 	}
 
-	return 0;
+	flow_rule = ethtool_rx_flow_rule(fs);
+	if (!flow_rule)
+		return -ENOMEM;
+
+	f.rule = *flow_rule;
+
+	if (qede_parse_flower_attr(edev, proto, &f, t)) {
+		err = -EINVAL;
+		goto err_out;
+	}
+
+	/* Make sure location is valid and filter isn't already set */
+	err = qede_flow_spec_validate(edev, &f.rule.action, t, fs->location);
+err_out:
+	ethtool_rx_flow_rule_free(flow_rule);
+	return err;
+
 }
 
 int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
@@ -2227,12 +2092,7 @@ int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
 	}
 
 	/* Translate the flow specification into something fitting our DB */
-	rc = qede_flow_spec_to_tuple(edev, &t, fsp);
-	if (rc)
-		goto unlock;
-
-	/* Make sure location is valid and filter isn't already set */
-	rc = qede_flow_spec_validate(edev, fsp, &t);
+	rc = qede_flow_spec_to_rule(edev, &t, fsp);
 	if (rc)
 		goto unlock;
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (11 preceding siblings ...)
  2018-11-19  0:15 ` [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code Pablo Neira Ayuso
@ 2018-11-19  9:20 ` Jose Abreu
  2018-11-19 10:19   ` Pablo Neira Ayuso
  2018-11-19 20:12 ` David Miller
  13 siblings, 1 reply; 44+ messages in thread
From: Jose Abreu @ 2018-11-19  9:20 UTC (permalink / raw)
  To: Pablo Neira Ayuso, netdev
  Cc: davem, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, jose.abreu, linux-net-drivers, ganeshgr,
	ogerlitz

Hello Pablo,

On 19-11-2018 00:15, Pablo Neira Ayuso wrote:
> Hi,
>
> This patchset introduces a kernel intermediate representation (IR) to
> express ACL hardware offloads, as already described in previous RFC and
> v1 patchset [1] [2]. The idea is to normalize the frontend U/APIs to use
> the flow dissectors and the flow actions so drivers can reuse the
> existing TC offload driver codebase - that has been converted to use the
> flow_rule infrastructure.
>
> After this patch, as Or previously described, there is one extra layer:
>
> kernel frontend U/API X --> kernel parser Y --> IR --> driver --> HW API
> kernel frontend U/API Z --> kernel parser W --> IR --> driver --> HW API
>
> However, cost of this layer is very small, adding 1 million rules via
> tc -batch, perf shows:
>
>      0.06%  tc               [kernel.vmlinux]    [k] tc_setup_flow_action
>
> at position 187 in the call graph, far from the top ten. The flow_match
> representation uses the flow dissector infrastructure, just like
> cls_flower, therefore, there is no need for conversion of the rule match
> side.
>
> The flow_action representation is very similar to the TC action plus
> this includes wake-up-on-lan and queue to CPU actions that are needed
> for the ethtool_rx_flow_spec interface in the bcm_sf2 driver, that is
> converted in this patchset to use it. It is now possible to add tc
> cls_flower support for bcm_sf2 and reuse the existing parser that was
> originally designed for the ethtool_rx_flow_spec interface.
>
> As requested, this new patchset also converts qlogic/qede to use this
> new infrastructure (see patch 12/12). This driver currently has two
> parsers, one for ethtool_rx_flow_spec and another for tc cls_flower.
> This driver supports for simple 5-tuple matching and available actions
> are packet drop and queue. This patch updates the driver code to use one
> single parser to populate HW IR.
>
> Thanks.
>
> [1] https://lwn.net/Articles/766695/
> [2] https://marc.info/?l=linux-netdev&m=154233253114506&w=2
>
> Pablo Neira Ayuso (12):
>   flow_dissector: add flow_rule and flow_match structures and use them
>   net/mlx5e: support for two independent packet edit actions
>   flow_dissector: add flow action infrastructure
>   cls_api: add translator to flow_action representation
>   cls_flower: add statistics retrieval infrastructure and use it
>   drivers: net: use flow action infrastructure
>   cls_flower: don't expose TC actions to drivers anymore
>   flow_dissector: add wake-up-on-lan and queue to flow_action
>   flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure
>     translator
>   dsa: bcm_sf2: use flow_rule infrastructure
>   qede: place ethtool_rx_flow_spec code after TC flower codebase
>   qede: use ethtool_rx_flow_rule() to remove duplicated parser code
>
>  drivers/net/dsa/bcm_sf2_cfp.c                      | 108 +--
>  drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c       | 252 +++----
>  .../net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c   | 450 ++++++-------
>  drivers/net/ethernet/intel/i40e/i40e_main.c        | 178 ++---
>  drivers/net/ethernet/intel/iavf/iavf_main.c        | 195 +++---
>  drivers/net/ethernet/intel/igb/igb_main.c          |  64 +-
>  drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 743 ++++++++++-----------
>  drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c |   2 +-
>  .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  | 259 ++++---
>  drivers/net/ethernet/netronome/nfp/flower/action.c | 196 +++---
>  drivers/net/ethernet/netronome/nfp/flower/match.c  | 417 ++++++------
>  .../net/ethernet/netronome/nfp/flower/offload.c    | 151 ++---
>  drivers/net/ethernet/qlogic/qede/qede_filter.c     | 537 ++++++---------
>  include/net/flow_dissector.h                       | 185 +++++
>  include/net/pkt_cls.h                              |  29 +-
>  net/core/flow_dissector.c                          | 341 ++++++++++
>  net/sched/cls_api.c                                | 113 ++++
>  net/sched/cls_flower.c                             |  42 +-
>  18 files changed, 2279 insertions(+), 1983 deletions(-)
>

Although I was cc'ed in the thread, I'm not seeing the stmmac driver
in this conversion. Can you please add it?

Thanks and Best Regards,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-19  9:20 ` [PATCH 00/12 net-next,v2] add flow_rule infrastructure Jose Abreu
@ 2018-11-19 10:19   ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 10:19 UTC (permalink / raw)
  To: Jose Abreu
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 09:20:43AM +0000, Jose Abreu wrote:
[...]
> Although I was cc'ed in the thread I'm not seeing stmmac driver
> in this conversion.

My intention was to attract the attention of driver maintainers who
are using tc offloads in some way in their infrastructure. That's why
you've been Cc'ed.

> Can you please add it ?

stmmac is using cls_u32, and this patchset only converts cls_flower at
this stage.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19  0:15 ` [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure Pablo Neira Ayuso
@ 2018-11-19 11:56   ` Jiri Pirko
  2018-11-19 12:35     ` Pablo Neira Ayuso
  2018-11-19 14:03   ` Jiri Pirko
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 11:56 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:10AM CET, pablo@netfilter.org wrote:
>This new infrastructure defines the nic actions that you can perform
>from existing network drivers. This infrastructure allows us to avoid a
>direct dependency with the native software TC action representation.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/flow_dissector.h | 70 ++++++++++++++++++++++++++++++++++++++++++++
> net/core/flow_dissector.c    | 18 ++++++++++++
> 2 files changed, 88 insertions(+)
>
>diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>index 965a82b8d881..925c208816f1 100644
>--- a/include/net/flow_dissector.h
>+++ b/include/net/flow_dissector.h
>@@ -402,8 +402,78 @@ void flow_rule_match_enc_keyid(const struct flow_rule *rule,
> void flow_rule_match_enc_opts(const struct flow_rule *rule,
> 			      struct flow_match_enc_opts *out);
> 
>+enum flow_action_key_id {

Why "key"? Why not just "flow_action_id"


>+	FLOW_ACTION_KEY_ACCEPT		= 0,
>+	FLOW_ACTION_KEY_DROP,
>+	FLOW_ACTION_KEY_TRAP,
>+	FLOW_ACTION_KEY_GOTO,
>+	FLOW_ACTION_KEY_REDIRECT,
>+	FLOW_ACTION_KEY_MIRRED,
>+	FLOW_ACTION_KEY_VLAN_PUSH,
>+	FLOW_ACTION_KEY_VLAN_POP,
>+	FLOW_ACTION_KEY_VLAN_MANGLE,
>+	FLOW_ACTION_KEY_TUNNEL_ENCAP,
>+	FLOW_ACTION_KEY_TUNNEL_DECAP,
>+	FLOW_ACTION_KEY_MANGLE,
>+	FLOW_ACTION_KEY_ADD,
>+	FLOW_ACTION_KEY_CSUM,
>+	FLOW_ACTION_KEY_MARK,
>+};
>+
>+/* This is mirroring enum pedit_header_type definition for easy mapping between
>+ * tc pedit action. Legacy TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK is mapped to
>+ * FLOW_ACT_MANGLE_UNSPEC, which is supported by no driver.
>+ */
>+enum flow_act_mangle_base {

Please be consistent in naming: "act" vs "action"


>+	FLOW_ACT_MANGLE_UNSPEC		= 0,
>+	FLOW_ACT_MANGLE_HDR_TYPE_ETH,
>+	FLOW_ACT_MANGLE_HDR_TYPE_IP4,
>+	FLOW_ACT_MANGLE_HDR_TYPE_IP6,
>+	FLOW_ACT_MANGLE_HDR_TYPE_TCP,
>+	FLOW_ACT_MANGLE_HDR_TYPE_UDP,
>+};
>+
>+struct flow_action_key {

And here "struct flow_action"


>+	enum flow_action_key_id		id;
>+	union {
>+		u32			chain_index;	/* FLOW_ACTION_KEY_GOTO */
>+		struct net_device	*dev;		/* FLOW_ACTION_KEY_REDIRECT */
>+		struct {				/* FLOW_ACTION_KEY_VLAN */
>+			u16		vid;
>+			__be16		proto;
>+			u8		prio;
>+		} vlan;
>+		struct {				/* FLOW_ACTION_KEY_PACKET_EDIT */
>+			enum flow_act_mangle_base htype;
>+			u32		offset;
>+			u32		mask;
>+			u32		val;
>+		} mangle;
>+		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
>+		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
>+		u32			mark;		/* FLOW_ACTION_KEY_MARK */
>+	};
>+};
>+
>+struct flow_action {

And here "struct flow_actions"


>+	int			num_keys;

unsigned int;


>+	struct flow_action_key	*keys;
>+};
>+
>+int flow_action_init(struct flow_action *flow_action, int num_acts);
>+void flow_action_free(struct flow_action *flow_action);
>+
>+static inline bool flow_action_has_keys(const struct flow_action *action)
>+{
>+	return action->num_keys;
>+}
>+
>+#define flow_action_for_each(__i, __act, __actions)			\
>+        for (__i = 0, __act = &(__actions)->keys[0]; __i < (__actions)->num_keys; __act = &(__actions)->keys[++__i])
>+
> struct flow_rule {
> 	struct flow_match	match;
>+	struct flow_action	action;
> };
> 
> static inline bool flow_rule_match_key(const struct flow_rule *rule,
>diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
>index 186089b8d852..b9368349f0f7 100644
>--- a/net/core/flow_dissector.c
>+++ b/net/core/flow_dissector.c
>@@ -258,6 +258,24 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
> }
> EXPORT_SYMBOL(flow_rule_match_enc_opts);
> 
>+int flow_action_init(struct flow_action *flow_action, int num_acts)
>+{
>+	flow_action->keys = kmalloc(sizeof(struct flow_action_key) * num_acts,
>+				    GFP_KERNEL);
>+	if (!flow_action->keys)
>+		return -ENOMEM;
>+
>+	flow_action->num_keys = num_acts;
>+	return 0;
>+}
>+EXPORT_SYMBOL(flow_action_init);
>+
>+void flow_action_free(struct flow_action *flow_action)
>+{
>+	kfree(flow_action->keys);
>+}
>+EXPORT_SYMBOL(flow_action_free);
>+
> /**
>  * __skb_flow_get_ports - extract the upper layer ports and return them
>  * @skb: sk_buff to extract the ports from
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19  0:15 ` [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation Pablo Neira Ayuso
@ 2018-11-19 12:12   ` Jiri Pirko
  2018-11-19 13:21     ` Pablo Neira Ayuso
  2018-11-19 12:16   ` Jiri Pirko
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 12:12 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:11AM CET, pablo@netfilter.org wrote:
>This patch implements a new function to translate from native TC action
>to the new flow_action representation. Moreover, this patch also updates
>cls_flower to use this new function.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/pkt_cls.h  |   3 ++
> net/sched/cls_api.c    | 113 +++++++++++++++++++++++++++++++++++++++++++++++++
> net/sched/cls_flower.c |  15 ++++++-
> 3 files changed, 130 insertions(+), 1 deletion(-)
>
>diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
>index 8b79a1a3a5c7..7d7aefa5fcd2 100644
>--- a/include/net/pkt_cls.h
>+++ b/include/net/pkt_cls.h
>@@ -619,6 +619,9 @@ tcf_match_indev(struct sk_buff *skb, int ifindex)
> }
> #endif /* CONFIG_NET_CLS_IND */
> 
>+int tc_setup_flow_action(struct flow_action *flow_action,
>+			 const struct tcf_exts *exts);
>+
> int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
> 		     enum tc_setup_type type, void *type_data, bool err_stop);
> 
>diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>index d92f44ac4c39..6ab44e650f43 100644
>--- a/net/sched/cls_api.c
>+++ b/net/sched/cls_api.c
>@@ -31,6 +31,14 @@
> #include <net/netlink.h>
> #include <net/pkt_sched.h>
> #include <net/pkt_cls.h>
>+#include <net/tc_act/tc_mirred.h>
>+#include <net/tc_act/tc_vlan.h>
>+#include <net/tc_act/tc_tunnel_key.h>
>+#include <net/tc_act/tc_pedit.h>
>+#include <net/tc_act/tc_csum.h>
>+#include <net/tc_act/tc_gact.h>
>+#include <net/tc_act/tc_skbedit.h>
>+#include <net/tc_act/tc_mirred.h>
> 
> extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
> 
>@@ -2567,6 +2575,111 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
> }
> EXPORT_SYMBOL(tc_setup_cb_call);
> 
>+int tc_setup_flow_action(struct flow_action *flow_action,
>+			 const struct tcf_exts *exts)
>+{
>+	const struct tc_action *act;
>+	int num_acts = 0, i, j, k;
>+
>+	if (!exts)
>+		return 0;
>+
>+	tcf_exts_for_each_action(i, act, exts) {
>+		if (is_tcf_pedit(act))
>+			num_acts += tcf_pedit_nkeys(act);
>+		else
>+			num_acts++;
>+	}
>+	if (!num_acts)
>+		return 0;
>+
>+	if (flow_action_init(flow_action, num_acts) < 0)

This is actually a "alloc" function. And the counterpart is "free".
How about to allocate the container struct which would have the [0]
trick for the array of action?
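
Something like this (untested sketch):

	struct flow_action {
		unsigned int		num_keys;
		struct flow_action_key	keys[0];
	};

	struct flow_action *flow_action_alloc(unsigned int num_keys)
	{
		struct flow_action *action;

		action = kzalloc(struct_size(action, keys, num_keys),
				 GFP_KERNEL);
		if (action)
			action->num_keys = num_keys;
		return action;
	}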

[...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19  0:15 ` [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation Pablo Neira Ayuso
  2018-11-19 12:12   ` Jiri Pirko
@ 2018-11-19 12:16   ` Jiri Pirko
  2018-11-19 12:37     ` Pablo Neira Ayuso
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 12:16 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:11AM CET, pablo@netfilter.org wrote:
>This patch implements a new function to translate from native TC action
>to the new flow_action representation. Moreover, this patch also updates
>cls_flower to use this new function.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/pkt_cls.h  |   3 ++
> net/sched/cls_api.c    | 113 +++++++++++++++++++++++++++++++++++++++++++++++++
> net/sched/cls_flower.c |  15 ++++++-
> 3 files changed, 130 insertions(+), 1 deletion(-)
>
>diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
>index 8b79a1a3a5c7..7d7aefa5fcd2 100644
>--- a/include/net/pkt_cls.h
>+++ b/include/net/pkt_cls.h
>@@ -619,6 +619,9 @@ tcf_match_indev(struct sk_buff *skb, int ifindex)
> }
> #endif /* CONFIG_NET_CLS_IND */
> 
>+int tc_setup_flow_action(struct flow_action *flow_action,
>+			 const struct tcf_exts *exts);
>+
> int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
> 		     enum tc_setup_type type, void *type_data, bool err_stop);
> 
>diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>index d92f44ac4c39..6ab44e650f43 100644
>--- a/net/sched/cls_api.c
>+++ b/net/sched/cls_api.c
>@@ -31,6 +31,14 @@
> #include <net/netlink.h>
> #include <net/pkt_sched.h>
> #include <net/pkt_cls.h>
>+#include <net/tc_act/tc_mirred.h>
>+#include <net/tc_act/tc_vlan.h>
>+#include <net/tc_act/tc_tunnel_key.h>
>+#include <net/tc_act/tc_pedit.h>
>+#include <net/tc_act/tc_csum.h>
>+#include <net/tc_act/tc_gact.h>
>+#include <net/tc_act/tc_skbedit.h>
>+#include <net/tc_act/tc_mirred.h>
> 
> extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
> 
>@@ -2567,6 +2575,111 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
> }
> EXPORT_SYMBOL(tc_setup_cb_call);
> 
>+int tc_setup_flow_action(struct flow_action *flow_action,
>+			 const struct tcf_exts *exts)
>+{
>+	const struct tc_action *act;
>+	int num_acts = 0, i, j, k;
>+
>+	if (!exts)
>+		return 0;
>+
>+	tcf_exts_for_each_action(i, act, exts) {
>+		if (is_tcf_pedit(act))
>+			num_acts += tcf_pedit_nkeys(act);
>+		else
>+			num_acts++;
>+	}
>+	if (!num_acts)
>+		return 0;
>+
>+	if (flow_action_init(flow_action, num_acts) < 0)
>+		return -ENOMEM;
>+
>+	j = 0;
>+	tcf_exts_for_each_action(i, act, exts) {
>+		struct flow_action_key *key;
>+
>+		key = &flow_action->keys[j];
>+		if (is_tcf_gact_ok(act)) {
>+			key->id = FLOW_ACTION_KEY_ACCEPT;
>+		} else if (is_tcf_gact_shot(act)) {
>+			key->id = FLOW_ACTION_KEY_DROP;
>+		} else if (is_tcf_gact_trap(act)) {
>+			key->id = FLOW_ACTION_KEY_TRAP;
>+		} else if (is_tcf_gact_goto_chain(act)) {
>+			key->id = FLOW_ACTION_KEY_GOTO;
>+			key->chain_index = tcf_gact_goto_chain_index(act);
>+		} else if (is_tcf_mirred_egress_redirect(act)) {
>+			key->id = FLOW_ACTION_KEY_REDIRECT;
>+			key->dev = tcf_mirred_dev(act);
>+		} else if (is_tcf_mirred_egress_mirror(act)) {
>+			key->id = FLOW_ACTION_KEY_MIRRED;
>+			key->dev = tcf_mirred_dev(act);
>+		} else if (is_tcf_vlan(act)) {
>+			switch (tcf_vlan_action(act)) {
>+			case TCA_VLAN_ACT_PUSH:
>+				key->id = FLOW_ACTION_KEY_VLAN_PUSH;
>+				key->vlan.vid = tcf_vlan_push_vid(act);
>+				key->vlan.proto = tcf_vlan_push_proto(act);
>+				key->vlan.prio = tcf_vlan_push_prio(act);
>+				break;
>+			case TCA_VLAN_ACT_POP:
>+				key->id = FLOW_ACTION_KEY_VLAN_POP;
>+				break;
>+			case TCA_VLAN_ACT_MODIFY:
>+				key->id = FLOW_ACTION_KEY_VLAN_MANGLE;
>+				key->vlan.vid = tcf_vlan_push_vid(act);
>+				key->vlan.proto = tcf_vlan_push_proto(act);
>+				key->vlan.prio = tcf_vlan_push_prio(act);
>+				break;
>+			default:
>+				goto err_out;
>+			}
>+		} else if (is_tcf_tunnel_set(act)) {
>+			key->id = FLOW_ACTION_KEY_TUNNEL_ENCAP;
>+			key->tunnel = tcf_tunnel_info(act);
>+		} else if (is_tcf_tunnel_release(act)) {
>+			key->id = FLOW_ACTION_KEY_TUNNEL_DECAP;
>+			key->tunnel = tcf_tunnel_info(act);
>+		} else if (is_tcf_pedit(act)) {
>+			for (k = 0; k < tcf_pedit_nkeys(act); k++) {
>+				switch (tcf_pedit_cmd(act, k)) {
>+				case TCA_PEDIT_KEY_EX_CMD_SET:
>+					key->id = FLOW_ACTION_KEY_MANGLE;
>+					break;
>+				case TCA_PEDIT_KEY_EX_CMD_ADD:
>+					key->id = FLOW_ACTION_KEY_ADD;
>+					break;
>+				default:
>+					goto err_out;
>+				}
>+				key->mangle.htype = tcf_pedit_htype(act, k);
>+				key->mangle.mask = tcf_pedit_mask(act, k);
>+				key->mangle.val = tcf_pedit_val(act, k);
>+				key->mangle.offset = tcf_pedit_offset(act, k);
>+				key = &flow_action->keys[++j];
>+			}
>+		} else if (is_tcf_csum(act)) {
>+			key->id = FLOW_ACTION_KEY_CSUM;
>+			key->csum_flags = tcf_csum_update_flags(act);
>+		} else if (is_tcf_skbedit_mark(act)) {
>+			key->id = FLOW_ACTION_KEY_MARK;
>+			key->mark = tcf_skbedit_mark(act);
>+		} else {
>+			goto err_out;
>+		}
>+
>+		if (!is_tcf_pedit(act))
>+			j++;
>+	}
>+	return 0;
>+err_out:
>+	flow_action_free(flow_action);
>+	return -EOPNOTSUPP;
>+}
>+EXPORT_SYMBOL(tc_setup_flow_action);
>+
> static __net_init int tcf_net_init(struct net *net)
> {
> 	struct tcf_net *tn = net_generic(net, tcf_net_id);
>diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>index 26fc129ed504..a301fb8e68e7 100644
>--- a/net/sched/cls_flower.c
>+++ b/net/sched/cls_flower.c
>@@ -104,6 +104,7 @@ struct cls_fl_filter {
> 	u32 in_hw_count;
> 	struct rcu_work rwork;
> 	struct net_device *hw_dev;
>+	struct flow_action action;
> };
> 
> static const struct rhashtable_params mask_ht_params = {
>@@ -391,18 +392,27 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
> 	cls_flower.exts = &f->exts;
> 	cls_flower.classid = f->res.classid;
> 
>+	if (tc_setup_flow_action(&f->action, &f->exts) < 0)
>+		return -ENOMEM;
>+
>+	cls_flower.rule.action.keys = f->action.keys;
>+	cls_flower.rule.action.num_keys = f->action.num_keys;

Hmm, I think flow actions should be the only field in the rule. Flower does
not use it internally, so it does not really make sense to have f->action


>+
> 	err = tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
> 			       &cls_flower, skip_sw);
> 	if (err < 0) {
> 		fl_hw_destroy_filter(tp, f, NULL);
>+		flow_action_free(&f->action);
> 		return err;
> 	} else if (err > 0) {
> 		f->in_hw_count = err;
> 		tcf_block_offload_inc(block, &f->flags);
> 	}
> 
>-	if (skip_sw && !(f->flags & TCA_CLS_FLAGS_IN_HW))
>+	if (skip_sw && !(f->flags & TCA_CLS_FLAGS_IN_HW)) {
>+		flow_action_free(&f->action);
> 		return -EINVAL;
>+	}
> 
> 	return 0;
> }
>@@ -429,6 +439,7 @@ static bool __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
> 	bool async = tcf_exts_get_net(&f->exts);
> 	bool last;
> 
>+	flow_action_free(&f->action);
> 	idr_remove(&head->handle_idr, f->handle);
> 	list_del_rcu(&f->list);
> 	last = fl_mask_put(head, f->mask, async);
>@@ -1470,6 +1481,8 @@ static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
> 			cls_flower.rule.match.mask = &mask->key;
> 			cls_flower.rule.match.key = &f->mkey;
> 			cls_flower.exts = &f->exts;
>+			cls_flower.rule.action.num_keys = f->action.num_keys;
>+			cls_flower.rule.action.keys = f->action.keys;
> 			cls_flower.classid = f->res.classid;
> 
> 			err = cb(TC_SETUP_CLSFLOWER, &cls_flower, cb_priv);
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19 11:56   ` Jiri Pirko
@ 2018-11-19 12:35     ` Pablo Neira Ayuso
  2018-11-19 13:05       ` Jiri Pirko
  0 siblings, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 12:35 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 12:56:23PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:10AM CET, pablo@netfilter.org wrote:
> >This new infrastructure defines the nic actions that you can perform
> >from existing network drivers. This infrastructure allows us to avoid a
> >direct dependency on the native software TC action representation.
> >
> >Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> >---
> >v2: no changes.
> >
> > include/net/flow_dissector.h | 70 ++++++++++++++++++++++++++++++++++++++++++++
> > net/core/flow_dissector.c    | 18 ++++++++++++
> > 2 files changed, 88 insertions(+)
> >
> >diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
> >index 965a82b8d881..925c208816f1 100644
> >--- a/include/net/flow_dissector.h
> >+++ b/include/net/flow_dissector.h
> >@@ -402,8 +402,78 @@ void flow_rule_match_enc_keyid(const struct flow_rule *rule,
> > void flow_rule_match_enc_opts(const struct flow_rule *rule,
> > 			      struct flow_match_enc_opts *out);
> > 
> >+enum flow_action_key_id {
> 
> Why "key"? Why not just "flow_action_id"

Sure, will rename this.

> >+	FLOW_ACTION_KEY_ACCEPT		= 0,
> >+	FLOW_ACTION_KEY_DROP,
> >+	FLOW_ACTION_KEY_TRAP,
> >+	FLOW_ACTION_KEY_GOTO,
> >+	FLOW_ACTION_KEY_REDIRECT,
> >+	FLOW_ACTION_KEY_MIRRED,
> >+	FLOW_ACTION_KEY_VLAN_PUSH,
> >+	FLOW_ACTION_KEY_VLAN_POP,
> >+	FLOW_ACTION_KEY_VLAN_MANGLE,
> >+	FLOW_ACTION_KEY_TUNNEL_ENCAP,
> >+	FLOW_ACTION_KEY_TUNNEL_DECAP,
> >+	FLOW_ACTION_KEY_MANGLE,
> >+	FLOW_ACTION_KEY_ADD,
> >+	FLOW_ACTION_KEY_CSUM,
> >+	FLOW_ACTION_KEY_MARK,

I assume I should remove _KEY_ from these enum definitions too.

> >+};
> >+
> >+/* This is mirroring enum pedit_header_type definition for easy mapping between
> >+ * tc pedit action. Legacy TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK is mapped to
> >+ * FLOW_ACT_MANGLE_UNSPEC, which is supported by no driver.
> >+ */
> >+enum flow_act_mangle_base {
> 
> Please be consistent in naming: "act" vs "action"

OK.

> >+	FLOW_ACT_MANGLE_UNSPEC		= 0,
> >+	FLOW_ACT_MANGLE_HDR_TYPE_ETH,
> >+	FLOW_ACT_MANGLE_HDR_TYPE_IP4,
> >+	FLOW_ACT_MANGLE_HDR_TYPE_IP6,
> >+	FLOW_ACT_MANGLE_HDR_TYPE_TCP,
> >+	FLOW_ACT_MANGLE_HDR_TYPE_UDP,
> >+};
> >+
> >+struct flow_action_key {
> 
> And here "struct flow_action"

OK.

> >+	enum flow_action_key_id		id;
> >+	union {
> >+		u32			chain_index;	/* FLOW_ACTION_KEY_GOTO */
> >+		struct net_device	*dev;		/* FLOW_ACTION_KEY_REDIRECT */
> >+		struct {				/* FLOW_ACTION_KEY_VLAN */
> >+			u16		vid;
> >+			__be16		proto;
> >+			u8		prio;
> >+		} vlan;
> >+		struct {				/* FLOW_ACTION_KEY_PACKET_EDIT */
> >+			enum flow_act_mangle_base htype;
> >+			u32		offset;
> >+			u32		mask;
> >+			u32		val;
> >+		} mangle;
> >+		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
> >+		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
> >+		u32			mark;		/* FLOW_ACTION_KEY_MARK */
> >+	};
> >+};
> >+
> >+struct flow_action {
> 
> And here "struct flow_actions"
> 
> 
> >+	int			num_keys;
> 
> unsigned int;

OK.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19 12:16   ` Jiri Pirko
@ 2018-11-19 12:37     ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 12:37 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 01:16:30PM +0100, Jiri Pirko wrote:
> >@@ -391,18 +392,27 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
> > 	cls_flower.exts = &f->exts;
> > 	cls_flower.classid = f->res.classid;
> > 
> >+	if (tc_setup_flow_action(&f->action, &f->exts) < 0)
> >+		return -ENOMEM;
> >+
> >+	cls_flower.rule.action.keys = f->action.keys;
> >+	cls_flower.rule.action.num_keys = f->action.num_keys;
> 
> Hmm, I think flow actions should be the only field in the rule. Flower does
> not use it internally, so it does not really make sense to have f->action

OK, will remove this new field from flower.

Thanks!

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19 12:35     ` Pablo Neira Ayuso
@ 2018-11-19 13:05       ` Jiri Pirko
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 13:05 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:35:48PM CET, pablo@netfilter.org wrote:
>On Mon, Nov 19, 2018 at 12:56:23PM +0100, Jiri Pirko wrote:
>> Mon, Nov 19, 2018 at 01:15:10AM CET, pablo@netfilter.org wrote:
>> >This new infrastructure defines the nic actions that you can perform
>> >from existing network drivers. This infrastructure allows us to avoid a
>> >direct dependency on the native software TC action representation.
>> >
>> >Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>> >---
>> >v2: no changes.
>> >
>> > include/net/flow_dissector.h | 70 ++++++++++++++++++++++++++++++++++++++++++++
>> > net/core/flow_dissector.c    | 18 ++++++++++++
>> > 2 files changed, 88 insertions(+)
>> >
>> >diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>> >index 965a82b8d881..925c208816f1 100644
>> >--- a/include/net/flow_dissector.h
>> >+++ b/include/net/flow_dissector.h
>> >@@ -402,8 +402,78 @@ void flow_rule_match_enc_keyid(const struct flow_rule *rule,
>> > void flow_rule_match_enc_opts(const struct flow_rule *rule,
>> > 			      struct flow_match_enc_opts *out);
>> > 
>> >+enum flow_action_key_id {
>> 
>> Why "key"? Why not just "flow_action_id"
>
>Sure, will rename this.
>
>> >+	FLOW_ACTION_KEY_ACCEPT		= 0,
>> >+	FLOW_ACTION_KEY_DROP,
>> >+	FLOW_ACTION_KEY_TRAP,
>> >+	FLOW_ACTION_KEY_GOTO,
>> >+	FLOW_ACTION_KEY_REDIRECT,
>> >+	FLOW_ACTION_KEY_MIRRED,
>> >+	FLOW_ACTION_KEY_VLAN_PUSH,
>> >+	FLOW_ACTION_KEY_VLAN_POP,
>> >+	FLOW_ACTION_KEY_VLAN_MANGLE,
>> >+	FLOW_ACTION_KEY_TUNNEL_ENCAP,
>> >+	FLOW_ACTION_KEY_TUNNEL_DECAP,
>> >+	FLOW_ACTION_KEY_MANGLE,
>> >+	FLOW_ACTION_KEY_ADD,
>> >+	FLOW_ACTION_KEY_CSUM,
>> >+	FLOW_ACTION_KEY_MARK,
>
>I assume I should remove _KEY_ from these enum definitions too.

Sure.

>
>> >+};
>> >+
>> >+/* This is mirroring enum pedit_header_type definition for easy mapping between
>> >+ * tc pedit action. Legacy TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK is mapped to
>> >+ * FLOW_ACT_MANGLE_UNSPEC, which is supported by no driver.
>> >+ */
>> >+enum flow_act_mangle_base {
>> 
>> Please be consistent in naming: "act" vs "action"
>
>OK.
>
>> >+	FLOW_ACT_MANGLE_UNSPEC		= 0,
>> >+	FLOW_ACT_MANGLE_HDR_TYPE_ETH,
>> >+	FLOW_ACT_MANGLE_HDR_TYPE_IP4,
>> >+	FLOW_ACT_MANGLE_HDR_TYPE_IP6,
>> >+	FLOW_ACT_MANGLE_HDR_TYPE_TCP,
>> >+	FLOW_ACT_MANGLE_HDR_TYPE_UDP,
>> >+};
>> >+
>> >+struct flow_action_key {
>> 
>> And here "struct flow_action"
>
>OK.
>
>> >+	enum flow_action_key_id		id;
>> >+	union {
>> >+		u32			chain_index;	/* FLOW_ACTION_KEY_GOTO */
>> >+		struct net_device	*dev;		/* FLOW_ACTION_KEY_REDIRECT */
>> >+		struct {				/* FLOW_ACTION_KEY_VLAN */
>> >+			u16		vid;
>> >+			__be16		proto;
>> >+			u8		prio;
>> >+		} vlan;
>> >+		struct {				/* FLOW_ACTION_KEY_PACKET_EDIT */
>> >+			enum flow_act_mangle_base htype;
>> >+			u32		offset;
>> >+			u32		mask;
>> >+			u32		val;
>> >+		} mangle;
>> >+		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
>> >+		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
>> >+		u32			mark;		/* FLOW_ACTION_KEY_MARK */
>> >+	};
>> >+};
>> >+
>> >+struct flow_action {
>> 
>> And here "struct flow_actions"
>> 
>> 
>> >+	int			num_keys;
>> 
>> unsigned int;
>
>OK.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19 12:12   ` Jiri Pirko
@ 2018-11-19 13:21     ` Pablo Neira Ayuso
  2018-11-19 13:22       ` Jiri Pirko
  0 siblings, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 13:21 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 01:12:51PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:11AM CET, pablo@netfilter.org wrote:
> >@@ -2567,6 +2575,111 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
> > }
> > EXPORT_SYMBOL(tc_setup_cb_call);
> > 
> >+int tc_setup_flow_action(struct flow_action *flow_action,
> >+			 const struct tcf_exts *exts)
> >+{
> >+	const struct tc_action *act;
> >+	int num_acts = 0, i, j, k;
> >+
> >+	if (!exts)
> >+		return 0;
> >+
> >+	tcf_exts_for_each_action(i, act, exts) {
> >+		if (is_tcf_pedit(act))
> >+			num_acts += tcf_pedit_nkeys(act);
> >+		else
> >+			num_acts++;
> >+	}
> >+	if (!num_acts)
> >+		return 0;
> >+
> >+	if (flow_action_init(flow_action, num_acts) < 0)
> 
> This is actually an "alloc" function.

I can rename it to _alloc() if you prefer.

> How about allocating the container struct which would have the [0]
> trick for the array of actions?

You mean turn *keys into a keys[0] stub in struct flow_action? This is
embedded into struct tc_cls_flower_offload; I may need to take a
second look, but I think it won't fly.

Side note: I will rename keys to "array", given keys is not
semantically appropriate, as you mentioned.
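
To illustrate with the structures quoted elsewhere in this thread
(abridged sketch): a zero-length array must be the last member of its
struct, but struct flow_action sits in the middle of
tc_cls_flower_offload via the embedded struct flow_rule:

struct flow_rule {
	struct flow_match	match;
	struct flow_action	action;		/* embedded by value */
};

struct tc_cls_flower_offload {
	struct tc_cls_common_offload common;
	enum tc_fl_command command;
	struct flow_rule rule;	/* a trailing action[0] cannot grow here */
	struct tcf_exts *exts;
	u32 classid;
};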

Thanks!

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation
  2018-11-19 13:21     ` Pablo Neira Ayuso
@ 2018-11-19 13:22       ` Jiri Pirko
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 13:22 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 02:21:41PM CET, pablo@netfilter.org wrote:
>On Mon, Nov 19, 2018 at 01:12:51PM +0100, Jiri Pirko wrote:
>> Mon, Nov 19, 2018 at 01:15:11AM CET, pablo@netfilter.org wrote:
>> >@@ -2567,6 +2575,111 @@ int tc_setup_cb_call(struct tcf_block *block, struct tcf_exts *exts,
>> > }
>> > EXPORT_SYMBOL(tc_setup_cb_call);
>> > 
>> >+int tc_setup_flow_action(struct flow_action *flow_action,
>> >+			 const struct tcf_exts *exts)
>> >+{
>> >+	const struct tc_action *act;
>> >+	int num_acts = 0, i, j, k;
>> >+
>> >+	if (!exts)
>> >+		return 0;
>> >+
>> >+	tcf_exts_for_each_action(i, act, exts) {
>> >+		if (is_tcf_pedit(act))
>> >+			num_acts += tcf_pedit_nkeys(act);
>> >+		else
>> >+			num_acts++;
>> >+	}
>> >+	if (!num_acts)
>> >+		return 0;
>> >+
>> >+	if (flow_action_init(flow_action, num_acts) < 0)
>> 
>> This is actually an "alloc" function, and the counterpart is "free".
>
>I can rename it to _alloc() if you prefer.
>
>> How about allocating the container struct which would have the [0]
>> trick for the array of actions?
>
>You mean turn *keys into a keys[0] stub in struct flow_action? This is
>embedded into struct tc_cls_flower_offload; I may need to take a
>second look, but I think it won't fly.
>
>Side note: I will rename keys to "array", given keys is not
>semantically appropriate, as you mentioned.

What I suggest is this:

struct flow_actions {
       unsigned int action_count;
       struct flow_action action[0];
};


And then to have 
struct flow_actions *flow_actions_alloc(unsigned int action_count)
{
	return kzalloc(sizeof(struct flow_actions) + sizeof(struct flow_action) * action_count, ..);
}

Something like this.
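
And the caller side in tc_setup_flow_action() would then be roughly
(sketch, assuming the helper above):

	struct flow_actions *actions;

	actions = flow_actions_alloc(num_acts);
	if (!actions)
		return -ENOMEM;
	/* then fill actions->action[0..num_acts - 1], the same way the
	 * existing loop fills keys[]
	 */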


>
>Thanks!

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it
  2018-11-19  0:15 ` [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it Pablo Neira Ayuso
@ 2018-11-19 13:57   ` Jiri Pirko
  2018-11-19 14:48     ` Pablo Neira Ayuso
  0 siblings, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 13:57 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:12AM CET, pablo@netfilter.org wrote:
>This patch provides a tc_cls_flower_stats structure that acts as a
>stats container inside tc_cls_flower_offload, which we can then use to
>restore the statistics on the existing TC actions. Hence,
>tcf_exts_stats_update() is no longer called from drivers.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c          |  4 ++--
> drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c  |  6 +++---
> drivers/net/ethernet/mellanox/mlx5/core/en_tc.c       |  2 +-
> drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c |  2 +-
> drivers/net/ethernet/netronome/nfp/flower/offload.c   |  6 +++---
> include/net/pkt_cls.h                                 | 15 +++++++++++++++
> net/sched/cls_flower.c                                |  4 ++++
> 7 files changed, 29 insertions(+), 10 deletions(-)
>
>diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
>index b82143d6cdde..3d71b2530d67 100644
>--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
>+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
>@@ -1366,8 +1366,8 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
> 	lastused = flow->lastused;
> 	spin_unlock(&flow->stats_lock);
> 
>-	tcf_exts_stats_update(tc_flow_cmd->exts, stats.bytes, stats.packets,
>-			      lastused);
>+	tc_cls_flower_stats_update(tc_flow_cmd, stats.bytes, stats.packets,
>+				   lastused);
> 	return 0;
> }
> 
>diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
>index 39c5af5dad3d..2c7d1aebe214 100644
>--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
>+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
>@@ -807,9 +807,9 @@ int cxgb4_tc_flower_stats(struct net_device *dev,
> 	if (ofld_stats->packet_count != packets) {
> 		if (ofld_stats->prev_packet_count != packets)
> 			ofld_stats->last_used = jiffies;
>-		tcf_exts_stats_update(cls->exts, bytes - ofld_stats->byte_count,
>-				      packets - ofld_stats->packet_count,
>-				      ofld_stats->last_used);
>+		tc_cls_flower_stats_update(cls, bytes - ofld_stats->byte_count,
>+					   packets - ofld_stats->packet_count,
>+					   ofld_stats->last_used);
> 
> 		ofld_stats->packet_count = packets;
> 		ofld_stats->byte_count = bytes;
>diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
>index 2645e5d1e790..c5f0b826fa91 100644
>--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
>+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
>@@ -3224,7 +3224,7 @@ int mlx5e_stats_flower(struct mlx5e_priv *priv,
> 
> 	mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
> 
>-	tcf_exts_stats_update(f->exts, bytes, packets, lastuse);
>+	tc_cls_flower_stats_update(f, bytes, packets, lastuse);
> 
> 	return 0;
> }
>diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
>index 193a6f9acf79..3398984ffb2a 100644
>--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
>+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
>@@ -460,7 +460,7 @@ int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
> 	if (err)
> 		goto err_rule_get_stats;
> 
>-	tcf_exts_stats_update(f->exts, bytes, packets, lastuse);
>+	tc_cls_flower_stats_update(f, bytes, packets, lastuse);
> 
> 	mlxsw_sp_acl_ruleset_put(mlxsw_sp, ruleset);
> 	return 0;
>diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>index 708331234908..bec74d84756c 100644
>--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
>+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>@@ -532,9 +532,9 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
> 	ctx_id = be32_to_cpu(nfp_flow->meta.host_ctx_id);
> 
> 	spin_lock_bh(&priv->stats_lock);
>-	tcf_exts_stats_update(flow->exts, priv->stats[ctx_id].bytes,
>-			      priv->stats[ctx_id].pkts,
>-			      priv->stats[ctx_id].used);
>+	tc_cls_flower_stats_update(flow, priv->stats[ctx_id].bytes,
>+				   priv->stats[ctx_id].pkts,
>+				   priv->stats[ctx_id].used);
> 
> 	priv->stats[ctx_id].pkts = 0;
> 	priv->stats[ctx_id].bytes = 0;
>diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
>index 7d7aefa5fcd2..7f9a8d5ca945 100644
>--- a/include/net/pkt_cls.h
>+++ b/include/net/pkt_cls.h
>@@ -758,6 +758,12 @@ enum tc_fl_command {
> 	TC_CLSFLOWER_TMPLT_DESTROY,
> };
> 
>+struct tc_cls_flower_stats {
>+	u64	pkts;
>+	u64	bytes;
>+	u64	lastused;
>+};
>+
> struct tc_cls_flower_offload {
> 	struct tc_cls_common_offload common;
> 	enum tc_fl_command command;
>@@ -765,6 +771,7 @@ struct tc_cls_flower_offload {
> 	struct flow_rule rule;
> 	struct tcf_exts *exts;
> 	u32 classid;
>+	struct tc_cls_flower_stats stats;
> };
> 
> static inline struct flow_rule *
>@@ -773,6 +780,14 @@ tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
> 	return &tc_flow_cmd->rule;
> }
> 
>+static inline void tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
>+					      u64 pkts, u64 bytes, u64 lastused)
>+{
>+	cls_flower->stats.pkts		= pkts;
>+	cls_flower->stats.bytes		= bytes;
>+	cls_flower->stats.lastused	= lastused;

Why do you need to store the values here in struct tc_cls_flower_offload?
Why don't you just call tcf_exts_stats_update()? Basically,
tc_cls_flower_stats_update() should just be a wrapper around
tcf_exts_stats_update() so that drivers wouldn't use ->exts directly, as
you will remove them in follow-up patches, no?
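
I.e., roughly (sketch; note that tcf_exts_stats_update() takes bytes
before packets, as in the driver call sites above):

static inline void
tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
			   u64 pkts, u64 bytes, u64 lastused)
{
	tcf_exts_stats_update(cls_flower->exts, bytes, pkts, lastused);
}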



>+}
>+
> enum tc_matchall_command {
> 	TC_CLSMATCHALL_REPLACE,
> 	TC_CLSMATCHALL_DESTROY,
>diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>index a301fb8e68e7..ee67f1ae8786 100644
>--- a/net/sched/cls_flower.c
>+++ b/net/sched/cls_flower.c
>@@ -430,6 +430,10 @@ static void fl_hw_update_stats(struct tcf_proto *tp, struct cls_fl_filter *f)
> 
> 	tc_setup_cb_call(block, &f->exts, TC_SETUP_CLSFLOWER,
> 			 &cls_flower, false);
>+
>+	tcf_exts_stats_update(&f->exts, cls_flower.stats.bytes,
>+			      cls_flower.stats.pkts,
>+			      cls_flower.stats.lastused);
> }
> 
> static bool __fl_delete(struct tcf_proto *tp, struct cls_fl_filter *f,
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action
  2018-11-19  0:15 ` [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action Pablo Neira Ayuso
@ 2018-11-19 13:59   ` Jiri Pirko
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 13:59 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:15AM CET, pablo@netfilter.org wrote:
>These actions need to be added to support bcm sf2 features available
>through the ethtool_rx_flow interface.
>
>Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>

For me it would be nicer to have this as 2 patches, one per action type.
But up to you.


>---
>v2: no changes.
>
> include/net/flow_dissector.h | 3 +++
> 1 file changed, 3 insertions(+)
>
>diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>index 925c208816f1..7a4683646d5a 100644
>--- a/include/net/flow_dissector.h
>+++ b/include/net/flow_dissector.h
>@@ -418,6 +418,8 @@ enum flow_action_key_id {
> 	FLOW_ACTION_KEY_ADD,
> 	FLOW_ACTION_KEY_CSUM,
> 	FLOW_ACTION_KEY_MARK,
>+	FLOW_ACTION_KEY_WAKE,
>+	FLOW_ACTION_KEY_QUEUE,
> };
> 
> /* This is mirroring enum pedit_header_type definition for easy mapping between
>@@ -452,6 +454,7 @@ struct flow_action_key {
> 		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
> 		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
> 		u32			mark;		/* FLOW_ACTION_KEY_MARK */
>+		u32			queue_index;	/* FLOW_ACTION_KEY_QUEUE */
> 	};
> };
> 
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19  0:15 ` [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure Pablo Neira Ayuso
  2018-11-19 11:56   ` Jiri Pirko
@ 2018-11-19 14:03   ` Jiri Pirko
  2018-11-19 14:42     ` Pablo Neira Ayuso
  2018-11-19 16:26     ` Pablo Neira Ayuso
  1 sibling, 2 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 14:03 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:10AM CET, pablo@netfilter.org wrote:
>This new infrastructure defines the nic actions that you can perform
>from existing network drivers. This infrastructure allows us to avoid a
>direct dependency on the native software TC action representation.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/flow_dissector.h | 70 ++++++++++++++++++++++++++++++++++++++++++++
> net/core/flow_dissector.c    | 18 ++++++++++++
> 2 files changed, 88 insertions(+)
>
>diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>index 965a82b8d881..925c208816f1 100644
>--- a/include/net/flow_dissector.h
>+++ b/include/net/flow_dissector.h
>@@ -402,8 +402,78 @@ void flow_rule_match_enc_keyid(const struct flow_rule *rule,
> void flow_rule_match_enc_opts(const struct flow_rule *rule,
> 			      struct flow_match_enc_opts *out);
> 
>+enum flow_action_key_id {
>+	FLOW_ACTION_KEY_ACCEPT		= 0,
>+	FLOW_ACTION_KEY_DROP,
>+	FLOW_ACTION_KEY_TRAP,
>+	FLOW_ACTION_KEY_GOTO,
>+	FLOW_ACTION_KEY_REDIRECT,
>+	FLOW_ACTION_KEY_MIRRED,
>+	FLOW_ACTION_KEY_VLAN_PUSH,
>+	FLOW_ACTION_KEY_VLAN_POP,
>+	FLOW_ACTION_KEY_VLAN_MANGLE,
>+	FLOW_ACTION_KEY_TUNNEL_ENCAP,
>+	FLOW_ACTION_KEY_TUNNEL_DECAP,
>+	FLOW_ACTION_KEY_MANGLE,
>+	FLOW_ACTION_KEY_ADD,
>+	FLOW_ACTION_KEY_CSUM,
>+	FLOW_ACTION_KEY_MARK,
>+};
>+
>+/* This is mirroring enum pedit_header_type definition for easy mapping between
>+ * tc pedit action. Legacy TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK is mapped to
>+ * FLOW_ACT_MANGLE_UNSPEC, which is supported by no driver.
>+ */
>+enum flow_act_mangle_base {
>+	FLOW_ACT_MANGLE_UNSPEC		= 0,
>+	FLOW_ACT_MANGLE_HDR_TYPE_ETH,
>+	FLOW_ACT_MANGLE_HDR_TYPE_IP4,
>+	FLOW_ACT_MANGLE_HDR_TYPE_IP6,
>+	FLOW_ACT_MANGLE_HDR_TYPE_TCP,
>+	FLOW_ACT_MANGLE_HDR_TYPE_UDP,
>+};
>+
>+struct flow_action_key {
>+	enum flow_action_key_id		id;
>+	union {
>+		u32			chain_index;	/* FLOW_ACTION_KEY_GOTO */
>+		struct net_device	*dev;		/* FLOW_ACTION_KEY_REDIRECT */
>+		struct {				/* FLOW_ACTION_KEY_VLAN */
>+			u16		vid;
>+			__be16		proto;
>+			u8		prio;
>+		} vlan;
>+		struct {				/* FLOW_ACTION_KEY_PACKET_EDIT */
>+			enum flow_act_mangle_base htype;
>+			u32		offset;
>+			u32		mask;
>+			u32		val;
>+		} mangle;
>+		const struct ip_tunnel_info *tunnel;	/* FLOW_ACTION_KEY_TUNNEL_ENCAP */
>+		u32			csum_flags;	/* FLOW_ACTION_KEY_CSUM */
>+		u32			mark;		/* FLOW_ACTION_KEY_MARK */
>+	};
>+};
>+
>+struct flow_action {

Hmm, thinking about it a bit more, none of this is related to the flow
dissector, so it is misleading to put the code in flow_dissector.[hc].

Maybe you can push this and related stuff into new files include/net/flow.h
and net/core/flow.c.



>+	int			num_keys;
>+	struct flow_action_key	*keys;
>+};
>+
>+int flow_action_init(struct flow_action *flow_action, int num_acts);
>+void flow_action_free(struct flow_action *flow_action);
>+
>+static inline bool flow_action_has_keys(const struct flow_action *action)
>+{
>+	return action->num_keys;
>+}
>+
>+#define flow_action_for_each(__i, __act, __actions)			\
>+        for (__i = 0, __act = &(__actions)->keys[0]; __i < (__actions)->num_keys; __act = &(__actions)->keys[++__i])
>+
> struct flow_rule {
> 	struct flow_match	match;
>+	struct flow_action	action;
> };
> 
> static inline bool flow_rule_match_key(const struct flow_rule *rule,
>diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
>index 186089b8d852..b9368349f0f7 100644
>--- a/net/core/flow_dissector.c
>+++ b/net/core/flow_dissector.c
>@@ -258,6 +258,24 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
> }
> EXPORT_SYMBOL(flow_rule_match_enc_opts);
> 
>+int flow_action_init(struct flow_action *flow_action, int num_acts)
>+{
>+	flow_action->keys = kmalloc(sizeof(struct flow_action_key) * num_acts,
>+				    GFP_KERNEL);
>+	if (!flow_action->keys)
>+		return -ENOMEM;
>+
>+	flow_action->num_keys = num_acts;
>+	return 0;
>+}
>+EXPORT_SYMBOL(flow_action_init);
>+
>+void flow_action_free(struct flow_action *flow_action)
>+{
>+	kfree(flow_action->keys);
>+}
>+EXPORT_SYMBOL(flow_action_free);
>+
> /**
>  * __skb_flow_get_ports - extract the upper layer ports and return them
>  * @skb: sk_buff to extract the ports from
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  2018-11-19  0:15 ` [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator Pablo Neira Ayuso
@ 2018-11-19 14:17   ` Jiri Pirko
  2018-11-19 14:43     ` Pablo Neira Ayuso
  2018-11-19 14:49   ` Jiri Pirko
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 14:17 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:16AM CET, pablo@netfilter.org wrote:
>This patch adds a function to translate the ethtool_rx_flow_spec
>structure to the flow_rule representation.
>
>This allows us to reuse code from the driver side given that both flower
>and ethtool_rx_flow interfaces use the same representation.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/flow_dissector.h |   5 ++
> net/core/flow_dissector.c    | 190 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 195 insertions(+)
>
>diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>index 7a4683646d5a..ec9036232538 100644
>--- a/include/net/flow_dissector.h
>+++ b/include/net/flow_dissector.h
>@@ -485,4 +485,9 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
> 	return dissector_uses_key(rule->match.dissector, key);
> }
> 
>+struct ethtool_rx_flow_spec;
>+
>+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs);
>+void ethtool_rx_flow_rule_free(struct flow_rule *rule);
>+
> #endif
>diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
>index b9368349f0f7..ef5bdb62620c 100644
>--- a/net/core/flow_dissector.c
>+++ b/net/core/flow_dissector.c
>@@ -17,6 +17,7 @@
> #include <linux/dccp.h>
> #include <linux/if_tunnel.h>
> #include <linux/if_pppox.h>
>+#include <uapi/linux/ethtool.h>
> #include <linux/ppp_defs.h>
> #include <linux/stddef.h>
> #include <linux/if_ether.h>
>@@ -276,6 +277,195 @@ void flow_action_free(struct flow_action *flow_action)
> }
> EXPORT_SYMBOL(flow_action_free);
> 
>+struct ethtool_rx_flow_key {
>+	struct flow_dissector_key_basic			basic;
>+	union {
>+		struct flow_dissector_key_ipv4_addrs	ipv4;
>+		struct flow_dissector_key_ipv6_addrs	ipv6;
>+	};
>+	struct flow_dissector_key_ports			tp;
>+	struct flow_dissector_key_ip			ip;
>+} __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
>+
>+struct ethtool_rx_flow_match {
>+	struct flow_dissector		dissector;
>+	struct ethtool_rx_flow_key	key;
>+	struct ethtool_rx_flow_key	mask;
>+};
>+
>+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs)
>+{
>+	static struct in6_addr zero_addr = {};
>+	struct ethtool_rx_flow_match *match;
>+	struct flow_action_key *act;
>+	struct flow_rule *rule;
>+
>+	rule = kmalloc(sizeof(struct flow_rule), GFP_KERNEL);
>+	if (!rule)
>+		return NULL;
>+
>+	match = kzalloc(sizeof(struct ethtool_rx_flow_match), GFP_KERNEL);
>+	if (!match)
>+		goto err_match;
>+
>+	rule->match.dissector	= &match->dissector;
>+	rule->match.mask	= &match->mask;
>+	rule->match.key		= &match->key;
>+
>+	match->mask.basic.n_proto = 0xffff;
>+
>+	switch (fs->flow_type & ~FLOW_EXT) {
>+	case TCP_V4_FLOW:
>+	case UDP_V4_FLOW: {
>+		const struct ethtool_tcpip4_spec *v4_spec, *v4_m_spec;
>+
>+		match->key.basic.n_proto = htons(ETH_P_IP);
>+
>+		v4_spec = &fs->h_u.tcp_ip4_spec;
>+		v4_m_spec = &fs->m_u.tcp_ip4_spec;
>+
>+		if (v4_m_spec->ip4src) {
>+			match->key.ipv4.src = v4_spec->ip4src;
>+			match->mask.ipv4.src = v4_m_spec->ip4src;
>+		}
>+		if (v4_m_spec->ip4dst) {
>+			match->key.ipv4.dst = v4_spec->ip4dst;
>+			match->mask.ipv4.dst = v4_m_spec->ip4dst;
>+		}
>+		if (v4_m_spec->ip4src ||
>+		    v4_m_spec->ip4dst) {
>+			match->dissector.used_keys |=
>+				FLOW_DISSECTOR_KEY_IPV4_ADDRS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV4_ADDRS] =
>+				offsetof(struct ethtool_rx_flow_key, ipv4);
>+		}
>+		if (v4_m_spec->psrc) {
>+			match->key.tp.src = v4_spec->psrc;
>+			match->mask.tp.src = v4_m_spec->psrc;
>+		}
>+		if (v4_m_spec->pdst) {
>+			match->key.tp.dst = v4_spec->pdst;
>+			match->mask.tp.dst = v4_m_spec->pdst;
>+		}
>+		if (v4_m_spec->psrc ||
>+		    v4_m_spec->pdst) {
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
>+				offsetof(struct ethtool_rx_flow_key, tp);
>+		}
>+		if (v4_m_spec->tos) {
>+			match->key.ip.tos = v4_spec->tos;
>+			match->mask.ip.tos = v4_m_spec->tos;
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
>+				offsetof(struct ethtool_rx_flow_key, ip);
>+		}
>+		}
>+		break;
>+	case TCP_V6_FLOW:
>+	case UDP_V6_FLOW: {
>+		const struct ethtool_tcpip6_spec *v6_spec, *v6_m_spec;
>+
>+		match->key.basic.n_proto = htons(ETH_P_IPV6);
>+
>+		v6_spec = &fs->h_u.tcp_ip6_spec;
>+		v6_m_spec = &fs->m_u.tcp_ip6_spec;
>+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
>+			memcpy(&match->key.ipv6.src, v6_spec->ip6src,
>+			       sizeof(match->key.ipv6.src));
>+			memcpy(&match->mask.ipv6.src, v6_m_spec->ip6src,
>+			       sizeof(match->mask.ipv6.src));
>+		}
>+		if (memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
>+			memcpy(&match->key.ipv6.dst, v6_spec->ip6dst,
>+			       sizeof(match->key.ipv6.dst));
>+			memcpy(&match->mask.ipv6.dst, v6_m_spec->ip6dst,
>+			       sizeof(match->mask.ipv6.dst));
>+		}
>+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
>+		    memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
>+			match->dissector.used_keys |=
>+				FLOW_DISSECTOR_KEY_IPV6_ADDRS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
>+				offsetof(struct ethtool_rx_flow_key, ipv6);
>+		}
>+		if (v6_m_spec->psrc) {
>+			match->key.tp.src = v6_spec->psrc;
>+			match->mask.tp.src = v6_m_spec->psrc;
>+		}
>+		if (v6_m_spec->pdst) {
>+			match->key.tp.dst = v6_spec->pdst;
>+			match->mask.tp.dst = v6_m_spec->pdst;
>+		}
>+		if (v6_m_spec->psrc ||
>+		    v6_m_spec->pdst) {
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
>+				offsetof(struct ethtool_rx_flow_key, tp);
>+		}
>+		if (v6_m_spec->tclass) {
>+			match->key.ip.tos = v6_spec->tclass;
>+			match->mask.ip.tos = v6_m_spec->tclass;
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
>+				offsetof(struct ethtool_rx_flow_key, ip);
>+		}
>+		}
>+		break;
>+	}
>+
>+	switch (fs->flow_type & ~FLOW_EXT) {
>+	case TCP_V4_FLOW:
>+	case TCP_V6_FLOW:
>+		match->key.basic.ip_proto = IPPROTO_TCP;
>+		break;
>+	case UDP_V4_FLOW:
>+	case UDP_V6_FLOW:
>+		match->key.basic.ip_proto = IPPROTO_UDP;
>+		break;
>+	}
>+	match->mask.basic.ip_proto = 0xff;
>+
>+	match->dissector.used_keys |= FLOW_DISSECTOR_KEY_BASIC;
>+	match->dissector.offset[FLOW_DISSECTOR_KEY_BASIC] =
>+		offsetof(struct ethtool_rx_flow_key, basic);
>+
>+	/* ethtool_rx supports only a single action per rule. */
>+	if (flow_action_init(&rule->action, 1) < 0)
>+		goto err_action;
>+
>+	act = &rule->action.keys[0];
>+	switch (fs->ring_cookie) {
>+	case RX_CLS_FLOW_DISC:
>+		act->id = FLOW_ACTION_KEY_DROP;
>+		break;
>+	case RX_CLS_FLOW_WAKE:
>+		act->id = FLOW_ACTION_KEY_WAKE;
>+		break;
>+	default:
>+		act->id = FLOW_ACTION_KEY_QUEUE;
>+		act->queue_index = fs->ring_cookie;
>+		break;
>+	}
>+
>+	return rule;
>+
>+err_action:
>+	kfree(match);
>+err_match:
>+	kfree(rule);
>+	return NULL;
>+}
>+EXPORT_SYMBOL(ethtool_rx_flow_rule);
>+
>+void ethtool_rx_flow_rule_free(struct flow_rule *rule)
>+{
>+	kfree((struct flow_match *)rule->match.dissector);
>+	flow_action_free(&rule->action);
>+	kfree(rule);
>+}
>+EXPORT_SYMBOL(ethtool_rx_flow_rule_free);

Code that is doing ethtool->flow_rule conversion should be in ethtool
code. You should just provide helpers here (in flow.c) that could be
used to assemble the flow_rule structure.

Same applies to TC.
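
For example, a small helper along these lines could live in flow.c
(sketch; the name is invented, and it keeps the used_keys convention of
the patch above):

static void flow_dissector_use_key(struct flow_dissector *dissector,
				   enum flow_dissector_key_id key,
				   size_t offset)
{
	dissector->used_keys |= key;
	dissector->offset[key] = offset;
}

so the ethtool conversion code would collapse each repeated block into
a single call, e.g.:

	flow_dissector_use_key(&match->dissector, FLOW_DISSECTOR_KEY_PORTS,
			       offsetof(struct ethtool_rx_flow_key, tp));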


>+
> /**
>  * __skb_flow_get_ports - extract the upper layer ports and return them
>  * @skb: sk_buff to extract the ports from
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19 14:03   ` Jiri Pirko
@ 2018-11-19 14:42     ` Pablo Neira Ayuso
  2018-11-19 16:26     ` Pablo Neira Ayuso
  1 sibling, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 14:42 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 03:03:22PM +0100, Jiri Pirko wrote:
[...]
> >+struct flow_action {
> 
> Hmm, thinking about it a bit more, none of this is related to the flow
> dissector, so it is misleading to put the code in flow_dissector.[hc].
> 
> Maybe you can push this and related stuff into new files include/net/flow.h
> and net/core/flow.c.

Will do this, thanks Jiri.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  2018-11-19 14:17   ` Jiri Pirko
@ 2018-11-19 14:43     ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 14:43 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 03:17:43PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:16AM CET, pablo@netfilter.org wrote:
[...]
> >diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
> >index 7a4683646d5a..ec9036232538 100644
> >--- a/include/net/flow_dissector.h
> >+++ b/include/net/flow_dissector.h
[...]
> >+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs)
[...]
> Code that is doing ethtool->flow_rule conversion should be in ethtool
> code. You should just provide helpers here (in flow.c) that could be
> used to assemble the flow_rule structure.

OK, will place this in ethtool file.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it
  2018-11-19 13:57   ` Jiri Pirko
@ 2018-11-19 14:48     ` Pablo Neira Ayuso
  2018-11-19 15:04       ` Jiri Pirko
  0 siblings, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 14:48 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 02:57:05PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:12AM CET, pablo@netfilter.org wrote:
[...]
> >diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
> >index 7d7aefa5fcd2..7f9a8d5ca945 100644
> >--- a/include/net/pkt_cls.h
> >+++ b/include/net/pkt_cls.h
> >@@ -758,6 +758,12 @@ enum tc_fl_command {
> > 	TC_CLSFLOWER_TMPLT_DESTROY,
> > };
> > 
> >+struct tc_cls_flower_stats {
> >+	u64	pkts;
> >+	u64	bytes;
> >+	u64	lastused;
> >+};
> >+
> > struct tc_cls_flower_offload {
> > 	struct tc_cls_common_offload common;
> > 	enum tc_fl_command command;
> >@@ -765,6 +771,7 @@ struct tc_cls_flower_offload {
> > 	struct flow_rule rule;
> > 	struct tcf_exts *exts;
> > 	u32 classid;
> >+	struct tc_cls_flower_stats stats;
> > };
> > 
> > static inline struct flow_rule *
> >@@ -773,6 +780,14 @@ tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
> > 	return &tc_flow_cmd->rule;
> > }
> > 
> >+static inline void tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
> >+					      u64 pkts, u64 bytes, u64 lastused)
> >+{
> >+	cls_flower->stats.pkts		= pkts;
> >+	cls_flower->stats.bytes		= bytes;
> >+	cls_flower->stats.lastused	= lastused;
> 
> Why do you need to store the values here in struct tc_cls_flower_offload?
> Why don't you just call tcf_exts_stats_update()? Basically,
> tc_cls_flower_stats_update() should just be a wrapper around
> tcf_exts_stats_update() so that drivers wouldn't use ->exts directly, as
> you will remove them in follow-up patches, no?

Patch 07/12 stops exposing tc action exts to drivers, so we need a
structure (struct tc_cls_flower_stats) to convey these statistics back
to the cls_flower frontend.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  2018-11-19  0:15 ` [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator Pablo Neira Ayuso
  2018-11-19 14:17   ` Jiri Pirko
@ 2018-11-19 14:49   ` Jiri Pirko
  2018-11-19 16:16     ` Pablo Neira Ayuso
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 14:49 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:16AM CET, pablo@netfilter.org wrote:
>This patch adds a function to translate the ethtool_rx_flow_spec
>structure to the flow_rule representation.
>
>This allows us to reuse code from the driver side given that both flower
>and ethtool_rx_flow interfaces use the same representation.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>v2: no changes.
>
> include/net/flow_dissector.h |   5 ++
> net/core/flow_dissector.c    | 190 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 195 insertions(+)
>
>diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
>index 7a4683646d5a..ec9036232538 100644
>--- a/include/net/flow_dissector.h
>+++ b/include/net/flow_dissector.h
>@@ -485,4 +485,9 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
> 	return dissector_uses_key(rule->match.dissector, key);
> }
> 
>+struct ethtool_rx_flow_spec;
>+
>+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs);
>+void ethtool_rx_flow_rule_free(struct flow_rule *rule);
>+
> #endif
>diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
>index b9368349f0f7..ef5bdb62620c 100644
>--- a/net/core/flow_dissector.c
>+++ b/net/core/flow_dissector.c
>@@ -17,6 +17,7 @@
> #include <linux/dccp.h>
> #include <linux/if_tunnel.h>
> #include <linux/if_pppox.h>
>+#include <uapi/linux/ethtool.h>
> #include <linux/ppp_defs.h>
> #include <linux/stddef.h>
> #include <linux/if_ether.h>
>@@ -276,6 +277,195 @@ void flow_action_free(struct flow_action *flow_action)
> }
> EXPORT_SYMBOL(flow_action_free);
> 
>+struct ethtool_rx_flow_key {
>+	struct flow_dissector_key_basic			basic;
>+	union {
>+		struct flow_dissector_key_ipv4_addrs	ipv4;
>+		struct flow_dissector_key_ipv6_addrs	ipv6;
>+	};
>+	struct flow_dissector_key_ports			tp;
>+	struct flow_dissector_key_ip			ip;
>+} __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
>+
>+struct ethtool_rx_flow_match {
>+	struct flow_dissector		dissector;
>+	struct ethtool_rx_flow_key	key;
>+	struct ethtool_rx_flow_key	mask;
>+};
>+
>+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs)
>+{
>+	static struct in6_addr zero_addr = {};
>+	struct ethtool_rx_flow_match *match;
>+	struct flow_action_key *act;
>+	struct flow_rule *rule;
>+
>+	rule = kmalloc(sizeof(struct flow_rule), GFP_KERNEL);

Allocating struct flow_rule manually here seems wrong. There should
be some helpers. Please see below.***


>+	if (!rule)
>+		return NULL;
>+
>+	match = kzalloc(sizeof(struct ethtool_rx_flow_match), GFP_KERNEL);
>+	if (!match)
>+		goto err_match;
>+
>+	rule->match.dissector	= &match->dissector;
>+	rule->match.mask	= &match->mask;
>+	rule->match.key		= &match->key;
>+
>+	match->mask.basic.n_proto = 0xffff;
>+
>+	switch (fs->flow_type & ~FLOW_EXT) {
>+	case TCP_V4_FLOW:
>+	case UDP_V4_FLOW: {
>+		const struct ethtool_tcpip4_spec *v4_spec, *v4_m_spec;
>+
>+		match->key.basic.n_proto = htons(ETH_P_IP);
>+
>+		v4_spec = &fs->h_u.tcp_ip4_spec;
>+		v4_m_spec = &fs->m_u.tcp_ip4_spec;
>+
>+		if (v4_m_spec->ip4src) {
>+			match->key.ipv4.src = v4_spec->ip4src;
>+			match->mask.ipv4.src = v4_m_spec->ip4src;
>+		}
>+		if (v4_m_spec->ip4dst) {
>+			match->key.ipv4.dst = v4_spec->ip4dst;
>+			match->mask.ipv4.dst = v4_m_spec->ip4dst;
>+		}
>+		if (v4_m_spec->ip4src ||
>+		    v4_m_spec->ip4dst) {
>+			match->dissector.used_keys |=
>+				FLOW_DISSECTOR_KEY_IPV4_ADDRS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV4_ADDRS] =
>+				offsetof(struct ethtool_rx_flow_key, ipv4);
>+		}
>+		if (v4_m_spec->psrc) {
>+			match->key.tp.src = v4_spec->psrc;
>+			match->mask.tp.src = v4_m_spec->psrc;
>+		}
>+		if (v4_m_spec->pdst) {
>+			match->key.tp.dst = v4_spec->pdst;
>+			match->mask.tp.dst = v4_m_spec->pdst;
>+		}
>+		if (v4_m_spec->psrc ||
>+		    v4_m_spec->pdst) {
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
>+				offsetof(struct ethtool_rx_flow_key, tp);
>+		}
>+		if (v4_m_spec->tos) {
>+			match->key.ip.tos = v4_spec->tos;
>+			match->mask.ip.tos = v4_m_spec->tos;
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
>+				offsetof(struct ethtool_rx_flow_key, ip);
>+		}
>+		}
>+		break;
>+	case TCP_V6_FLOW:
>+	case UDP_V6_FLOW: {
>+		const struct ethtool_tcpip6_spec *v6_spec, *v6_m_spec;
>+
>+		match->key.basic.n_proto = htons(ETH_P_IPV6);
>+
>+		v6_spec = &fs->h_u.tcp_ip6_spec;
>+		v6_m_spec = &fs->m_u.tcp_ip6_spec;
>+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
>+			memcpy(&match->key.ipv6.src, v6_spec->ip6src,
>+			       sizeof(match->key.ipv6.src));
>+			memcpy(&match->mask.ipv6.src, v6_m_spec->ip6src,
>+			       sizeof(match->mask.ipv6.src));
>+		}
>+		if (memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
>+			memcpy(&match->key.ipv6.dst, v6_spec->ip6dst,
>+			       sizeof(match->key.ipv6.dst));
>+			memcpy(&match->mask.ipv6.dst, v6_m_spec->ip6dst,
>+			       sizeof(match->mask.ipv6.dst));
>+		}
>+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
>+		    memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
>+			match->dissector.used_keys |=
>+				FLOW_DISSECTOR_KEY_IPV6_ADDRS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
>+				offsetof(struct ethtool_rx_flow_key, ipv6);
>+		}
>+		if (v6_m_spec->psrc) {
>+			match->key.tp.src = v6_spec->psrc;
>+			match->mask.tp.src = v6_m_spec->psrc;
>+		}
>+		if (v6_m_spec->pdst) {
>+			match->key.tp.dst = v6_spec->pdst;
>+			match->mask.tp.dst = v6_m_spec->pdst;
>+		}
>+		if (v6_m_spec->psrc ||
>+		    v6_m_spec->pdst) {
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
>+				offsetof(struct ethtool_rx_flow_key, tp);
>+		}
>+		if (v6_m_spec->tclass) {
>+			match->key.ip.tos = v6_spec->tclass;
>+			match->mask.ip.tos = v6_m_spec->tclass;
>+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
>+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
>+				offsetof(struct ethtool_rx_flow_key, ip);
>+		}
>+		}
>+		break;
>+	}
>+
>+	switch (fs->flow_type & ~FLOW_EXT) {
>+	case TCP_V4_FLOW:
>+	case TCP_V6_FLOW:
>+		match->key.basic.ip_proto = IPPROTO_TCP;
>+		break;
>+	case UDP_V4_FLOW:
>+	case UDP_V6_FLOW:
>+		match->key.basic.ip_proto = IPPROTO_UDP;
>+		break;
>+	}
>+	match->mask.basic.ip_proto = 0xff;
>+
>+	match->dissector.used_keys |= FLOW_DISSECTOR_KEY_BASIC;
>+	match->dissector.offset[FLOW_DISSECTOR_KEY_BASIC] =
>+		offsetof(struct ethtool_rx_flow_key, basic);
>+
>+	/* ethtool_rx supports only a single action per rule. */
>+	if (flow_action_init(&rule->action, 1) < 0)
>+		goto err_action;
>+
>+	act = &rule->action.keys[0];
>+	switch (fs->ring_cookie) {
>+	case RX_CLS_FLOW_DISC:
>+		act->id = FLOW_ACTION_KEY_DROP;
>+		break;
>+	case RX_CLS_FLOW_WAKE:
>+		act->id = FLOW_ACTION_KEY_WAKE;
>+		break;
>+	default:
>+		act->id = FLOW_ACTION_KEY_QUEUE;
>+		act->queue_index = fs->ring_cookie;
>+		break;
>+	}
>+
>+	return rule;
>+
>+err_action:
>+	kfree(match);
>+err_match:
>+	kfree(rule);
>+	return NULL;
>+}
>+EXPORT_SYMBOL(ethtool_rx_flow_rule);
>+
>+void ethtool_rx_flow_rule_free(struct flow_rule *rule)
>+{
>+	kfree((struct flow_match *)rule->match.dissector);

Ouch. I wonder if it couldn't rather be stored in some rule->priv or
something.

On alloc, you can have a helper to allocate both:

***
struct flow_rule {
	...
	unsigned long priv[0];
};

struct flow_rule *flow_rule_alloc(size_t priv_size)
{
	return kzalloc(sizeof(struct flow_rule) + priv_size, ...);
}
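
And ethtool_rx_flow_rule() would then start roughly like (sketch,
assuming the helper above):

	rule = flow_rule_alloc(sizeof(struct ethtool_rx_flow_match));
	if (!rule)
		return NULL;

	match = (struct ethtool_rx_flow_match *)rule->priv;
	rule->match.dissector	= &match->dissector;
	rule->match.mask	= &match->mask;
	rule->match.key		= &match->key;

with ethtool_rx_flow_rule_free() reduced to flow_action_free() plus a
single kfree(rule), no cast needed.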




>+	flow_action_free(&rule->action);
>+	kfree(rule);
>+}
>+EXPORT_SYMBOL(ethtool_rx_flow_rule_free);
>+
> /**
>  * __skb_flow_get_ports - extract the upper layer ports and return them
>  * @skb: sk_buff to extract the ports from
>-- 
>2.11.0
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it
  2018-11-19 14:48     ` Pablo Neira Ayuso
@ 2018-11-19 15:04       ` Jiri Pirko
  2018-11-19 16:15         ` Pablo Neira Ayuso
  0 siblings, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 15:04 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 03:48:50PM CET, pablo@netfilter.org wrote:
>On Mon, Nov 19, 2018 at 02:57:05PM +0100, Jiri Pirko wrote:
>> Mon, Nov 19, 2018 at 01:15:12AM CET, pablo@netfilter.org wrote:
>[...]
>> >diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
>> >index 7d7aefa5fcd2..7f9a8d5ca945 100644
>> >--- a/include/net/pkt_cls.h
>> >+++ b/include/net/pkt_cls.h
>> >@@ -758,6 +758,12 @@ enum tc_fl_command {
>> > 	TC_CLSFLOWER_TMPLT_DESTROY,
>> > };
>> > 
>> >+struct tc_cls_flower_stats {
>> >+	u64	pkts;
>> >+	u64	bytes;
>> >+	u64	lastused;
>> >+};
>> >+
>> > struct tc_cls_flower_offload {
>> > 	struct tc_cls_common_offload common;
>> > 	enum tc_fl_command command;
>> >@@ -765,6 +771,7 @@ struct tc_cls_flower_offload {
>> > 	struct flow_rule rule;
>> > 	struct tcf_exts *exts;
>> > 	u32 classid;
>> >+	struct tc_cls_flower_stats stats;
>> > };
>> > 
>> > static inline struct flow_rule *
>> >@@ -773,6 +780,14 @@ tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
>> > 	return &tc_flow_cmd->rule;
>> > }
>> > 
>> >+static inline void tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
>> >+					      u64 pkts, u64 bytes, u64 lastused)
>> >+{
>> >+	cls_flower->stats.pkts		= pkts;
>> >+	cls_flower->stats.bytes		= bytes;
>> >+	cls_flower->stats.lastused	= lastused;
>> 
>> Why do you need to store the values here in struct tc_cls_flower_offload?
>> Why don't you just call tcf_exts_stats_update()? Basically,
>> tc_cls_flower_stats_update() should be just wrapper around
>> tcf_exts_stats_update() so that drivers wouldn't use ->exts directly, as
>> you will remove them in follow-up patches, no?
>
>Patch 07/12 stops exposing tc action exts to drivers, so we need a
>structure (struct tc_cls_flower_stats) to convey this statistics back
>to the cls_flower frontend.

Hmm, shouldn't these stats be rather flow_rule related than flower
related?


* Re: [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code
  2018-11-19  0:15 ` [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code Pablo Neira Ayuso
@ 2018-11-19 16:00   ` Jiri Pirko
  2018-11-19 16:12     ` Pablo Neira Ayuso
  2018-11-19 21:44   ` Chopra, Manish
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 16:00 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 01:15:19AM CET, pablo@netfilter.org wrote:
>The qede driver supports ethtool_rx_flow_spec and flower; both
>codebases look very similar.
>
>This patch uses the ethtool_rx_flow_rule() infrastructure to remove the
>duplicated ethtool_rx_flow_spec parser and consolidate ACL offload
>support around the flow_rule infrastructure.
>
>Furthermore, more code can be consolidated by merging
>qede_add_cls_rule() and qede_add_tc_flower_fltr(), these two functions
>also look very similar.
>
>This driver currently provides simple ACL support, such as 5-tuple
>matching, drop policy and queue to CPU.
>
>Drivers that support more features can benefit from this infrastructure
>to save even more redundant codebase.
>
>Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
>---
>Note that, after this patch, qede_add_cls_rule() and
>qede_add_tc_flower_fltr() can also be consolidated since their code is
>redundant.
>
> drivers/net/ethernet/qlogic/qede/qede_filter.c | 246 ++++++-------------------
> 1 file changed, 53 insertions(+), 193 deletions(-)
>
>diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
>index aca302c3261b..f82b26ba8f80 100644
>--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
>+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
>@@ -1578,30 +1578,6 @@ static void qede_flow_build_ipv6_hdr(struct qede_arfs_tuple *t,
> 	ports[1] = t->dst_port;
> }
> 
>-/* Validate fields which are set and not accepted by the driver */
>-static int qede_flow_spec_validate_unused(struct qede_dev *edev,
>-					  struct ethtool_rx_flow_spec *fs)
>-{
>-	if (fs->flow_type & FLOW_MAC_EXT) {
>-		DP_INFO(edev, "Don't support MAC extensions\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->flow_type & FLOW_EXT) &&
>-	    (fs->h_ext.vlan_etype || fs->h_ext.vlan_tci)) {
>-		DP_INFO(edev, "Don't support vlan-based classification\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->flow_type & FLOW_EXT) &&
>-	    (fs->h_ext.data[0] || fs->h_ext.data[1])) {
>-		DP_INFO(edev, "Don't support user defined data\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	return 0;
>-}
>-
> static int qede_set_v4_tuple_to_profile(struct qede_dev *edev,
> 					struct qede_arfs_tuple *t)
> {
>@@ -1665,132 +1641,6 @@ static int qede_set_v6_tuple_to_profile(struct qede_dev *edev,
> 	return 0;
> }
> 
>-static int qede_flow_spec_to_tuple_ipv4_common(struct qede_dev *edev,
>-					       struct qede_arfs_tuple *t,
>-					       struct ethtool_rx_flow_spec *fs)
>-{
>-	if ((fs->h_u.tcp_ip4_spec.ip4src &
>-	     fs->m_u.tcp_ip4_spec.ip4src) != fs->h_u.tcp_ip4_spec.ip4src) {
>-		DP_INFO(edev, "Don't support IP-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->h_u.tcp_ip4_spec.ip4dst &
>-	     fs->m_u.tcp_ip4_spec.ip4dst) != fs->h_u.tcp_ip4_spec.ip4dst) {
>-		DP_INFO(edev, "Don't support IP-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->h_u.tcp_ip4_spec.psrc &
>-	     fs->m_u.tcp_ip4_spec.psrc) != fs->h_u.tcp_ip4_spec.psrc) {
>-		DP_INFO(edev, "Don't support port-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->h_u.tcp_ip4_spec.pdst &
>-	     fs->m_u.tcp_ip4_spec.pdst) != fs->h_u.tcp_ip4_spec.pdst) {
>-		DP_INFO(edev, "Don't support port-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if (fs->h_u.tcp_ip4_spec.tos) {
>-		DP_INFO(edev, "Don't support tos\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	t->eth_proto = htons(ETH_P_IP);
>-	t->src_ipv4 = fs->h_u.tcp_ip4_spec.ip4src;
>-	t->dst_ipv4 = fs->h_u.tcp_ip4_spec.ip4dst;
>-	t->src_port = fs->h_u.tcp_ip4_spec.psrc;
>-	t->dst_port = fs->h_u.tcp_ip4_spec.pdst;
>-
>-	return qede_set_v4_tuple_to_profile(edev, t);
>-}
>-
>-static int qede_flow_spec_to_tuple_tcpv4(struct qede_dev *edev,
>-					 struct qede_arfs_tuple *t,
>-					 struct ethtool_rx_flow_spec *fs)
>-{
>-	t->ip_proto = IPPROTO_TCP;
>-
>-	if (qede_flow_spec_to_tuple_ipv4_common(edev, t, fs))
>-		return -EINVAL;
>-
>-	return 0;
>-}
>-
>-static int qede_flow_spec_to_tuple_udpv4(struct qede_dev *edev,
>-					 struct qede_arfs_tuple *t,
>-					 struct ethtool_rx_flow_spec *fs)
>-{
>-	t->ip_proto = IPPROTO_UDP;
>-
>-	if (qede_flow_spec_to_tuple_ipv4_common(edev, t, fs))
>-		return -EINVAL;
>-
>-	return 0;
>-}
>-
>-static int qede_flow_spec_to_tuple_ipv6_common(struct qede_dev *edev,
>-					       struct qede_arfs_tuple *t,
>-					       struct ethtool_rx_flow_spec *fs)
>-{
>-	struct in6_addr zero_addr;
>-
>-	memset(&zero_addr, 0, sizeof(zero_addr));
>-
>-	if ((fs->h_u.tcp_ip6_spec.psrc &
>-	     fs->m_u.tcp_ip6_spec.psrc) != fs->h_u.tcp_ip6_spec.psrc) {
>-		DP_INFO(edev, "Don't support port-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if ((fs->h_u.tcp_ip6_spec.pdst &
>-	     fs->m_u.tcp_ip6_spec.pdst) != fs->h_u.tcp_ip6_spec.pdst) {
>-		DP_INFO(edev, "Don't support port-masks\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	if (fs->h_u.tcp_ip6_spec.tclass) {
>-		DP_INFO(edev, "Don't support tclass\n");
>-		return -EOPNOTSUPP;
>-	}
>-
>-	t->eth_proto = htons(ETH_P_IPV6);
>-	memcpy(&t->src_ipv6, &fs->h_u.tcp_ip6_spec.ip6src,
>-	       sizeof(struct in6_addr));
>-	memcpy(&t->dst_ipv6, &fs->h_u.tcp_ip6_spec.ip6dst,
>-	       sizeof(struct in6_addr));
>-	t->src_port = fs->h_u.tcp_ip6_spec.psrc;
>-	t->dst_port = fs->h_u.tcp_ip6_spec.pdst;
>-
>-	return qede_set_v6_tuple_to_profile(edev, t, &zero_addr);
>-}
>-
>-static int qede_flow_spec_to_tuple_tcpv6(struct qede_dev *edev,
>-					 struct qede_arfs_tuple *t,
>-					 struct ethtool_rx_flow_spec *fs)
>-{
>-	t->ip_proto = IPPROTO_TCP;
>-
>-	if (qede_flow_spec_to_tuple_ipv6_common(edev, t, fs))
>-		return -EINVAL;
>-
>-	return 0;
>-}
>-
>-static int qede_flow_spec_to_tuple_udpv6(struct qede_dev *edev,
>-					 struct qede_arfs_tuple *t,
>-					 struct ethtool_rx_flow_spec *fs)
>-{
>-	t->ip_proto = IPPROTO_UDP;
>-
>-	if (qede_flow_spec_to_tuple_ipv6_common(edev, t, fs))
>-		return -EINVAL;
>-
>-	return 0;
>-}
>-
> /* Must be called while qede lock is held */
> static struct qede_arfs_fltr_node *
> qede_flow_find_fltr(struct qede_dev *edev, struct qede_arfs_tuple *t)
>@@ -1875,25 +1725,32 @@ static int qede_parse_actions(struct qede_dev *edev,
> 			      struct flow_action *flow_action)
> {
> 	const struct flow_action_key *act;
>-	int rc = -EINVAL, num_act = 0, i;
>-	bool is_drop = false;
>+	int i;
> 
> 	if (!flow_action_has_keys(flow_action)) {
>-		DP_NOTICE(edev, "No tc actions received\n");
>-		return rc;
>+		DP_NOTICE(edev, "No actions received\n");
>+		return -EINVAL;
> 	}
> 
> 	flow_action_for_each(i, act, flow_action) {
>-		num_act++;
>+		switch (act->id) {
>+		case FLOW_ACTION_KEY_DROP:
>+			break;
>+		case FLOW_ACTION_KEY_QUEUE:
>+			if (ethtool_get_flow_spec_ring_vf(act->queue_index))
>+				break;
> 
>-		if (act->id == FLOW_ACTION_KEY_DROP)
>-			is_drop = true;
>+			if (act->queue_index >= QEDE_RSS_COUNT(edev)) {
>+				DP_INFO(edev, "Queue out-of-bounds\n");
>+				return -EINVAL;
>+			}
>+			break;
>+		default:
>+			return -EINVAL;
>+		}
> 	}
> 
>-	if (num_act == 1 && is_drop)
>-		return 0;
>-
>-	return rc;
>+	return 0;
> }
> 
> static int
>@@ -2147,16 +2004,17 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
> }
> 
> static int qede_flow_spec_validate(struct qede_dev *edev,
>-				   struct ethtool_rx_flow_spec *fs,
>-				   struct qede_arfs_tuple *t)
>+				   struct flow_action *flow_action,
>+				   struct qede_arfs_tuple *t,
>+				   __u32 location)
> {
>-	if (fs->location >= QEDE_RFS_MAX_FLTR) {
>+	if (location >= QEDE_RFS_MAX_FLTR) {
> 		DP_INFO(edev, "Location out-of-bounds\n");
> 		return -EINVAL;
> 	}
> 
> 	/* Check location isn't already in use */
>-	if (test_bit(fs->location, edev->arfs->arfs_fltr_bmap)) {
>+	if (test_bit(location, edev->arfs->arfs_fltr_bmap)) {
> 		DP_INFO(edev, "Location already in use\n");
> 		return -EINVAL;
> 	}
>@@ -2170,46 +2028,53 @@ static int qede_flow_spec_validate(struct qede_dev *edev,
> 		return -EINVAL;
> 	}
> 
>-	/* If drop requested then no need to validate other data */
>-	if (fs->ring_cookie == RX_CLS_FLOW_DISC)
>-		return 0;
>-
>-	if (ethtool_get_flow_spec_ring_vf(fs->ring_cookie))
>-		return 0;
>-
>-	if (fs->ring_cookie >= QEDE_RSS_COUNT(edev)) {
>-		DP_INFO(edev, "Queue out-of-bounds\n");
>+	if (qede_parse_actions(edev, flow_action))
> 		return -EINVAL;
>-	}
> 
> 	return 0;
> }
> 
>-static int qede_flow_spec_to_tuple(struct qede_dev *edev,
>-				   struct qede_arfs_tuple *t,
>-				   struct ethtool_rx_flow_spec *fs)
>+static int qede_flow_spec_to_rule(struct qede_dev *edev,
>+				  struct qede_arfs_tuple *t,
>+				  struct ethtool_rx_flow_spec *fs)
> {
>-	memset(t, 0, sizeof(*t));
>-
>-	if (qede_flow_spec_validate_unused(edev, fs))
>-		return -EOPNOTSUPP;
>+	struct tc_cls_flower_offload f = {};
>+	struct flow_rule *flow_rule;
>+	__be16 proto;
>+	int err = 0;
> 
> 	switch ((fs->flow_type & ~FLOW_EXT)) {
> 	case TCP_V4_FLOW:
>-		return qede_flow_spec_to_tuple_tcpv4(edev, t, fs);
> 	case UDP_V4_FLOW:
>-		return qede_flow_spec_to_tuple_udpv4(edev, t, fs);
>+		proto = htons(ETH_P_IP);
>+		break;
> 	case TCP_V6_FLOW:
>-		return qede_flow_spec_to_tuple_tcpv6(edev, t, fs);
> 	case UDP_V6_FLOW:
>-		return qede_flow_spec_to_tuple_udpv6(edev, t, fs);
>+		proto = htons(ETH_P_IPV6);
>+		break;
> 	default:
> 		DP_VERBOSE(edev, NETIF_MSG_IFUP,
> 			   "Can't support flow of type %08x\n", fs->flow_type);
> 		return -EOPNOTSUPP;
> 	}
> 
>-	return 0;
>+	flow_rule = ethtool_rx_flow_rule(fs);
>+	if (!flow_rule)
>+		return -ENOMEM;
>+
>+	f.rule = *flow_rule;

This does not look right. I understand that you want to use the same
driver code to parse the same struct coming either from tc-flower or
ethtool. That is why you introduced flow_rule as an intermediate layer.
However, here you use a struct that is very tc-flower specific. The
common parser should work on struct flow_rule.

qede_parse_flower_attr() should accept struct flow_rule * and should be
renamed to something like qede_parse_flow_rule()
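
Roughly (a sketch; qede_parse_flow_rule() being the suggested rename,
everything else as in this patch):

	flow_rule = ethtool_rx_flow_rule(fs);
	if (!flow_rule)
		return -ENOMEM;

	/* common parser takes struct flow_rule * directly */
	if (qede_parse_flow_rule(edev, proto, flow_rule, t)) {
		err = -EINVAL;
		goto err_out;
	}

so there is no need to fake up a struct tc_cls_flower_offload on the
stack.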


>+
>+	if (qede_parse_flower_attr(edev, proto, &f, t)) {
>+		err = -EINVAL;
>+		goto err_out;
>+	}
>+
>+	/* Make sure location is valid and filter isn't already set */
>+	err = qede_flow_spec_validate(edev, &f.rule.action, t, fs->location);
>+err_out:
>+	ethtool_rx_flow_rule_free(flow_rule);
>+	return err;
>+
> }
> 
> int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
>@@ -2227,12 +2092,7 @@ int qede_add_cls_rule(struct qede_dev *edev, struct ethtool_rxnfc *info)
> 	}
> 
> 	/* Translate the flow specification into something fittign our DB */
>-	rc = qede_flow_spec_to_tuple(edev, &t, fsp);
>-	if (rc)
>-		goto unlock;
>-
>-	/* Make sure location is valid and filter isn't already set */
>-	rc = qede_flow_spec_validate(edev, fsp, &t);
>+	rc = qede_flow_spec_to_rule(edev, &t, fsp);
> 	if (rc)
> 		goto unlock;
> 
>-- 
>2.11.0
>


* Re: [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code
  2018-11-19 16:00   ` Jiri Pirko
@ 2018-11-19 16:12     ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 16:12 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 05:00:13PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:19AM CET, pablo@netfilter.org wrote:
[...]
> >-static int qede_flow_spec_to_tuple(struct qede_dev *edev,
> >-				   struct qede_arfs_tuple *t,
> >-				   struct ethtool_rx_flow_spec *fs)
> >+static int qede_flow_spec_to_rule(struct qede_dev *edev,
> >+				  struct qede_arfs_tuple *t,
> >+				  struct ethtool_rx_flow_spec *fs)
> > {
> >-	memset(t, 0, sizeof(*t));
> >-
> >-	if (qede_flow_spec_validate_unused(edev, fs))
> >-		return -EOPNOTSUPP;
> >+	struct tc_cls_flower_offload f = {};
> >+	struct flow_rule *flow_rule;
> >+	__be16 proto;
> >+	int err = 0;
> > 
> > 	switch ((fs->flow_type & ~FLOW_EXT)) {
> > 	case TCP_V4_FLOW:
> >-		return qede_flow_spec_to_tuple_tcpv4(edev, t, fs);
> > 	case UDP_V4_FLOW:
> >-		return qede_flow_spec_to_tuple_udpv4(edev, t, fs);
> >+		proto = htons(ETH_P_IP);
> >+		break;
> > 	case TCP_V6_FLOW:
> >-		return qede_flow_spec_to_tuple_tcpv6(edev, t, fs);
> > 	case UDP_V6_FLOW:
> >-		return qede_flow_spec_to_tuple_udpv6(edev, t, fs);
> >+		proto = htons(ETH_P_IPV6);
> >+		break;
> > 	default:
> > 		DP_VERBOSE(edev, NETIF_MSG_IFUP,
> > 			   "Can't support flow of type %08x\n", fs->flow_type);
> > 		return -EOPNOTSUPP;
> > 	}
> > 
> >-	return 0;
> >+	flow_rule = ethtool_rx_flow_rule(fs);
> >+	if (!flow_rule)
> >+		return -ENOMEM;
> >+
> >+	f.rule = *flow_rule;
> 
> This does not look right. I undersntand that you want to use the same
> driver code to parse same struct coming either from tc-flower or
> ethtool. That is why you introduced flow_rule as a intermediate layer.
> However, here, you use struct that is very tc-flower specific. Common
> parser should work on struct flow_rule.
>
> qede_parse_flower_attr() should accept struct flow_rule * and should be
> renamed to something like qede_parse_flow_rule().

That was intentional, to keep the number of changes in this driver as
small as possible while introducing the flow_rule infrastructure.

I'll rework the driver as you're asking, so I can pass struct
flow_rule to all parser functions instead.

The patchset is already rather large and involves many drivers, so I
was trying to keep changes to a minimum, but I'll update this driver
as you request, no problem.

Thanks.


* Re: [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it
  2018-11-19 15:04       ` Jiri Pirko
@ 2018-11-19 16:15         ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 16:15 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 04:04:16PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 03:48:50PM CET, pablo@netfilter.org wrote:
> >On Mon, Nov 19, 2018 at 02:57:05PM +0100, Jiri Pirko wrote:
> >> Mon, Nov 19, 2018 at 01:15:12AM CET, pablo@netfilter.org wrote:
> >[...]
> >> >diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
> >> >index 7d7aefa5fcd2..7f9a8d5ca945 100644
> >> >--- a/include/net/pkt_cls.h
> >> >+++ b/include/net/pkt_cls.h
> >> >@@ -758,6 +758,12 @@ enum tc_fl_command {
> >> > 	TC_CLSFLOWER_TMPLT_DESTROY,
> >> > };
> >> > 
> >> >+struct tc_cls_flower_stats {
> >> >+	u64	pkts;
> >> >+	u64	bytes;
> >> >+	u64	lastused;
> >> >+};
> >> >+
> >> > struct tc_cls_flower_offload {
> >> > 	struct tc_cls_common_offload common;
> >> > 	enum tc_fl_command command;
> >> >@@ -765,6 +771,7 @@ struct tc_cls_flower_offload {
> >> > 	struct flow_rule rule;
> >> > 	struct tcf_exts *exts;
> >> > 	u32 classid;
> >> >+	struct tc_cls_flower_stats stats;
> >> > };
> >> > 
> >> > static inline struct flow_rule *
> >> >@@ -773,6 +780,14 @@ tc_cls_flower_offload_flow_rule(struct tc_cls_flower_offload *tc_flow_cmd)
> >> > 	return &tc_flow_cmd->rule;
> >> > }
> >> > 
> >> >+static inline void tc_cls_flower_stats_update(struct tc_cls_flower_offload *cls_flower,
> >> >+					      u64 pkts, u64 bytes, u64 lastused)
> >> >+{
> >> >+	cls_flower->stats.pkts		= pkts;
> >> >+	cls_flower->stats.bytes		= bytes;
> >> >+	cls_flower->stats.lastused	= lastused;
> >> 
> >> Why do you need to store the values here in struct tc_cls_flower_offload?
> >> Why don't you just call tcf_exts_stats_update()? Basically,
> >> tc_cls_flower_stats_update() should be just wrapper around
> >> tcf_exts_stats_update() so that drivers wouldn't use ->exts directly, as
> >> you will remove them in follow-up patches, no?
> >
> >Patch 07/12 stops exposing tc action exts to drivers, so we need a
> >structure (struct tc_cls_flower_stats) to convey this statistics back
> >to the cls_flower frontend.
> 
> Hmm, shouldn't these stats be rather flow_rule related than flower
> related?

I can rename tc_cls_flower_stats to struct flow_stats and place it in
include/net/flow.h, given that flow_rule is unset / not present in the
TC_CLSFLOWER_STATS path.
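
Something like this (a sketch; same fields as tc_cls_flower_stats in
this patch, the helper name is just a proposal):

	struct flow_stats {
		u64	pkts;
		u64	bytes;
		u64	lastused;
	};

	static inline void flow_stats_update(struct flow_stats *stats,
					     u64 pkts, u64 bytes, u64 lastused)
	{
		/* same semantics as tc_cls_flower_stats_update() */
		stats->pkts	= pkts;
		stats->bytes	= bytes;
		stats->lastused	= lastused;
	}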

Thanks for reviewing!


* Re: [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator
  2018-11-19 14:49   ` Jiri Pirko
@ 2018-11-19 16:16     ` Pablo Neira Ayuso
  0 siblings, 0 replies; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 16:16 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 03:49:01PM +0100, Jiri Pirko wrote:
> Mon, Nov 19, 2018 at 01:15:16AM CET, pablo@netfilter.org wrote:
> >This patch adds a function to translate the ethtool_rx_flow_spec
> >structure to the flow_rule representation.
> >
> >This allows us to reuse code from the driver side given that both flower
> >and ethtool_rx_flow interfaces use the same representation.
> >
> >Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> >---
> >v2: no changes.
> >
> > include/net/flow_dissector.h |   5 ++
> > net/core/flow_dissector.c    | 190 +++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 195 insertions(+)
> >
> >diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
> >index 7a4683646d5a..ec9036232538 100644
> >--- a/include/net/flow_dissector.h
> >+++ b/include/net/flow_dissector.h
> >@@ -485,4 +485,9 @@ static inline bool flow_rule_match_key(const struct flow_rule *rule,
> > 	return dissector_uses_key(rule->match.dissector, key);
> > }
> > 
> >+struct ethtool_rx_flow_spec;
> >+
> >+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs);
> >+void ethtool_rx_flow_rule_free(struct flow_rule *rule);
> >+
> > #endif
> >diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> >index b9368349f0f7..ef5bdb62620c 100644
> >--- a/net/core/flow_dissector.c
> >+++ b/net/core/flow_dissector.c
> >@@ -17,6 +17,7 @@
> > #include <linux/dccp.h>
> > #include <linux/if_tunnel.h>
> > #include <linux/if_pppox.h>
> >+#include <uapi/linux/ethtool.h>
> > #include <linux/ppp_defs.h>
> > #include <linux/stddef.h>
> > #include <linux/if_ether.h>
> >@@ -276,6 +277,195 @@ void flow_action_free(struct flow_action *flow_action)
> > }
> > EXPORT_SYMBOL(flow_action_free);
> > 
> >+struct ethtool_rx_flow_key {
> >+	struct flow_dissector_key_basic			basic;
> >+	union {
> >+		struct flow_dissector_key_ipv4_addrs	ipv4;
> >+		struct flow_dissector_key_ipv6_addrs	ipv6;
> >+	};
> >+	struct flow_dissector_key_ports			tp;
> >+	struct flow_dissector_key_ip			ip;
> >+} __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
> >+
> >+struct ethtool_rx_flow_match {
> >+	struct flow_dissector		dissector;
> >+	struct ethtool_rx_flow_key	key;
> >+	struct ethtool_rx_flow_key	mask;
> >+};
> >+
> >+struct flow_rule *ethtool_rx_flow_rule(const struct ethtool_rx_flow_spec *fs)
> >+{
> >+	static struct in6_addr zero_addr = {};
> >+	struct ethtool_rx_flow_match *match;
> >+	struct flow_action_key *act;
> >+	struct flow_rule *rule;
> >+
> >+	rule = kmalloc(sizeof(struct flow_rule), GFP_KERNEL);
> 
> Allocating struct flow_rule manually here seems wrong. There should
> be some helpers. Please see below.***

Will add them.

> >+	if (!rule)
> >+		return NULL;
> >+
> >+	match = kzalloc(sizeof(struct ethtool_rx_flow_match), GFP_KERNEL);
> >+	if (!match)
> >+		goto err_match;
> >+
> >+	rule->match.dissector	= &match->dissector;
> >+	rule->match.mask	= &match->mask;
> >+	rule->match.key		= &match->key;
> >+
> >+	match->mask.basic.n_proto = 0xffff;
> >+
> >+	switch (fs->flow_type & ~FLOW_EXT) {
> >+	case TCP_V4_FLOW:
> >+	case UDP_V4_FLOW: {
> >+		const struct ethtool_tcpip4_spec *v4_spec, *v4_m_spec;
> >+
> >+		match->key.basic.n_proto = htons(ETH_P_IP);
> >+
> >+		v4_spec = &fs->h_u.tcp_ip4_spec;
> >+		v4_m_spec = &fs->m_u.tcp_ip4_spec;
> >+
> >+		if (v4_m_spec->ip4src) {
> >+			match->key.ipv4.src = v4_spec->ip4src;
> >+			match->mask.ipv4.src = v4_m_spec->ip4src;
> >+		}
> >+		if (v4_m_spec->ip4dst) {
> >+			match->key.ipv4.dst = v4_spec->ip4dst;
> >+			match->mask.ipv4.dst = v4_m_spec->ip4dst;
> >+		}
> >+		if (v4_m_spec->ip4src ||
> >+		    v4_m_spec->ip4dst) {
> >+			match->dissector.used_keys |=
> >+				FLOW_DISSECTOR_KEY_IPV4_ADDRS;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV4_ADDRS] =
> >+				offsetof(struct ethtool_rx_flow_key, ipv4);
> >+		}
> >+		if (v4_m_spec->psrc) {
> >+			match->key.tp.src = v4_spec->psrc;
> >+			match->mask.tp.src = v4_m_spec->psrc;
> >+		}
> >+		if (v4_m_spec->pdst) {
> >+			match->key.tp.dst = v4_spec->pdst;
> >+			match->mask.tp.dst = v4_m_spec->pdst;
> >+		}
> >+		if (v4_m_spec->psrc ||
> >+		    v4_m_spec->pdst) {
> >+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
> >+				offsetof(struct ethtool_rx_flow_key, tp);
> >+		}
> >+		if (v4_m_spec->tos) {
> >+			match->key.ip.tos = v4_spec->pdst;
> >+			match->mask.ip.tos = v4_m_spec->pdst;
> >+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
> >+				offsetof(struct ethtool_rx_flow_key, ip);
> >+		}
> >+		}
> >+		break;
> >+	case TCP_V6_FLOW:
> >+	case UDP_V6_FLOW: {
> >+		const struct ethtool_tcpip6_spec *v6_spec, *v6_m_spec;
> >+
> >+		match->key.basic.n_proto = htons(ETH_P_IPV6);
> >+
> >+		v6_spec = &fs->h_u.tcp_ip6_spec;
> >+		v6_m_spec = &fs->m_u.tcp_ip6_spec;
> >+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
> >+			memcpy(&match->key.ipv6.src, v6_spec->ip6src,
> >+			       sizeof(match->key.ipv6.src));
> >+			memcpy(&match->mask.ipv6.src, v6_m_spec->ip6src,
> >+			       sizeof(match->mask.ipv6.src));
> >+		}
> >+		if (memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) {
> >+			memcpy(&match->key.ipv6.dst, v6_spec->ip6dst,
> >+			       sizeof(match->key.ipv6.dst));
> >+			memcpy(&match->mask.ipv6.dst, v6_m_spec->ip6dst,
> >+			       sizeof(match->mask.ipv6.dst));
> >+		}
> >+		if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) ||
> >+		    memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) {
> >+			match->dissector.used_keys |=
> >+				FLOW_DISSECTOR_KEY_IPV6_ADDRS;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
> >+				offsetof(struct ethtool_rx_flow_key, ipv6);
> >+		}
> >+		if (v6_m_spec->psrc) {
> >+			match->key.tp.src = v6_spec->psrc;
> >+			match->mask.tp.src = v6_m_spec->psrc;
> >+		}
> >+		if (v6_m_spec->pdst) {
> >+			match->key.tp.dst = v6_spec->pdst;
> >+			match->mask.tp.dst = v6_m_spec->pdst;
> >+		}
> >+		if (v6_m_spec->psrc ||
> >+		    v6_m_spec->pdst) {
> >+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_PORTS;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_PORTS] =
> >+				offsetof(struct ethtool_rx_flow_key, tp);
> >+		}
> >+		if (v6_m_spec->tclass) {
> >+			match->key.ip.tos = v6_spec->tclass;
> >+			match->mask.ip.tos = v6_m_spec->tclass;
> >+			match->dissector.used_keys |= FLOW_DISSECTOR_KEY_IP;
> >+			match->dissector.offset[FLOW_DISSECTOR_KEY_IP] =
> >+				offsetof(struct ethtool_rx_flow_key, ip);
> >+		}
> >+		}
> >+		break;
> >+	}
> >+
> >+	switch (fs->flow_type & ~FLOW_EXT) {
> >+	case TCP_V4_FLOW:
> >+	case TCP_V6_FLOW:
> >+		match->key.basic.ip_proto = IPPROTO_TCP;
> >+		break;
> >+	case UDP_V4_FLOW:
> >+	case UDP_V6_FLOW:
> >+		match->key.basic.ip_proto = IPPROTO_UDP;
> >+		break;
> >+	}
> >+	match->mask.basic.ip_proto = 0xff;
> >+
> >+	match->dissector.used_keys |= FLOW_DISSECTOR_KEY_BASIC;
> >+	match->dissector.offset[FLOW_DISSECTOR_KEY_BASIC] =
> >+		offsetof(struct ethtool_rx_flow_key, basic);
> >+
> >+	/* ethtool_rx supports only one single action per rule. */
> >+	if (flow_action_init(&rule->action, 1) < 0)
> >+		goto err_action;
> >+
> >+	act = &rule->action.keys[0];
> >+	switch (fs->ring_cookie) {
> >+	case RX_CLS_FLOW_DISC:
> >+		act->id = FLOW_ACTION_KEY_DROP;
> >+		break;
> >+	case RX_CLS_FLOW_WAKE:
> >+		act->id = FLOW_ACTION_KEY_WAKE;
> >+		break;
> >+	default:
> >+		act->id = FLOW_ACTION_KEY_QUEUE;
> >+		act->queue_index = fs->ring_cookie;
> >+		break;
> >+	}
> >+
> >+	return rule;
> >+
> >+err_action:
> >+	kfree(match);
> >+err_match:
> >+	kfree(rule);
> >+	return NULL;
> >+}
> >+EXPORT_SYMBOL(ethtool_rx_flow_rule);
> >+
> >+void ethtool_rx_flow_rule_free(struct flow_rule *rule)
> >+{
> >+	kfree((struct flow_match *)rule->match.dissector);
> 
> Ouch. I wonder if it cannot be stored rather in some rule->priv or
> something.

I can use container_of() instead, which is what I should be using
here. The current cast only works because the dissector is the first
field of struct ethtool_rx_flow_match at this moment.
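
A sketch of what I mean (container_of() removes the dependency on the
field position):

	void ethtool_rx_flow_rule_free(struct flow_rule *rule)
	{
		struct ethtool_rx_flow_match *match;

		/* recover the allocation regardless of field layout */
		match = container_of(rule->match.dissector,
				     struct ethtool_rx_flow_match, dissector);
		kfree(match);
		flow_action_free(&rule->action);
		kfree(rule);
	}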

> On alloc, you can have a helper to allocate both:
> 
> ***
> struct flow_rule {
> 	...
> 	unsigned long priv[0];
> };
> 
> struct flow_rule *flow_rule_alloc(size_t priv_size)
> {
> 	return kzalloc(sizeof(struct flow_rule) + priv_size, ...);
> }

Yes, will do that. Thanks.


* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19 14:03   ` Jiri Pirko
  2018-11-19 14:42     ` Pablo Neira Ayuso
@ 2018-11-19 16:26     ` Pablo Neira Ayuso
  2018-11-19 16:29       ` Jiri Pirko
  1 sibling, 1 reply; 44+ messages in thread
From: Pablo Neira Ayuso @ 2018-11-19 16:26 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

On Mon, Nov 19, 2018 at 03:03:22PM +0100, Jiri Pirko wrote:
[...]
> Maybe you can push this and related stuff into new files include/net/flow.h
> and net/core/flow.c.

Quick notice: These files already exist. Since you refer to _new_
files, not the existing one, I propose to use net/core/flow_offload.c
and include/net/flow_offload.h

Thanks.


* Re: [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure
  2018-11-19 16:26     ` Pablo Neira Ayuso
@ 2018-11-19 16:29       ` Jiri Pirko
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-19 16:29 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: netdev, davem, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 05:26:04PM CET, pablo@netfilter.org wrote:
>On Mon, Nov 19, 2018 at 03:03:22PM +0100, Jiri Pirko wrote:
>[...]
>> Maybe you can push this and related stuff into new files include/net/flow.h
>> and net/core/flow.c.
>
>Quick notice: These files already exist. Since you refer to _new_
>files, not the existing one, I propose to use net/core/flow_offload.c
>and include/net/flow_offload.h

Ok.


* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
                   ` (12 preceding siblings ...)
  2018-11-19  9:20 ` [PATCH 00/12 net-next,v2] add flow_rule infrastructure Jose Abreu
@ 2018-11-19 20:12 ` David Miller
  2018-11-19 21:19   ` Or Gerlitz
  2018-11-20  7:39   ` Jiri Pirko
  13 siblings, 2 replies; 44+ messages in thread
From: David Miller @ 2018-11-19 20:12 UTC (permalink / raw)
  To: pablo
  Cc: netdev, thomas.lendacky, f.fainelli, ariel.elior, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

From: Pablo Neira Ayuso <pablo@netfilter.org>
Date: Mon, 19 Nov 2018 01:15:07 +0100

> This patchset introduces a kernel intermediate representation (IR) to
> express ACL hardware offloads, as already described in previous RFC and
> v1 patchset [1] [2]. The idea is to normalize the frontend U/APIs to use
> the flow dissectors and the flow actions so drivers can reuse the
> existing TC offload driver codebase - that has been converted to use the
> flow_rule infrastructure.

I'm going to bring up the elephant in the room.

I think the real motivation here is to offload netfilter rules to HW,
and you should be completely honest about that.


* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-19 20:12 ` David Miller
@ 2018-11-19 21:19   ` Or Gerlitz
  2018-11-20  7:39   ` Jiri Pirko
  1 sibling, 0 replies; 44+ messages in thread
From: Or Gerlitz @ 2018-11-19 21:19 UTC (permalink / raw)
  To: David Miller
  Cc: Pablo Neira Ayuso, Linux Netdev List, thomas.lendacky,
	Florian Fainelli, Ariel Elior, Michael Chan, santosh,
	madalin.bucur, Zhuangyuzeng (Yisen),
	Salil Mehta, Jeff Kirsher, Tariq Toukan, Saeed Mahameed,
	Jiri Pirko, Ido Schimmel, Jakub Kicinski, peppe.cavallaro,
	grygorii.strashko, Andrew Lunn, Vivien Didelot, alexandre.torgue,
	joabr

On Mon, Nov 19, 2018 at 10:14 PM David Miller <davem@davemloft.net> wrote:
> From: Pablo Neira Ayuso <pablo@netfilter.org>
> Date: Mon, 19 Nov 2018 01:15:07 +0100
>
> > This patchset introduces a kernel intermediate representation (IR) to
> > express ACL hardware offloads, as already described in previous RFC and
> > v1 patchset [1] [2]. The idea is to normalize the frontend U/APIs to use
> > the flow dissectors and the flow actions so drivers can reuse the
> > existing TC offload driver codebase - that has been converted to use the
> > flow_rule infrastructure.
>
> I'm go to bring up the elephant in the room.

> I think the real motivation here is to offload netfilter rules to HW,
> and you should be completely honest about that.

Thanks Dave for clarifying.

So.. (A) why isn't the TC path enough for CT offloading? If we could
have just done it that way, that would have sounded really cool. (B) why
do we have to deal with EIRs (Elephants In Rooms)? (C) who can address
A && B?

Or.


* RE: [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code
  2018-11-19  0:15 ` [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code Pablo Neira Ayuso
  2018-11-19 16:00   ` Jiri Pirko
@ 2018-11-19 21:44   ` Chopra, Manish
  1 sibling, 0 replies; 44+ messages in thread
From: Chopra, Manish @ 2018-11-19 21:44 UTC (permalink / raw)
  To: Pablo Neira Ayuso, netdev
  Cc: davem, thomas.lendacky, f.fainelli, Elior, Ariel, michael.chan,
	santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro@st.com

> -----Original Message-----
> From: netdev-owner@vger.kernel.org <netdev-owner@vger.kernel.org> On
> Behalf Of Pablo Neira Ayuso
> Sent: Monday, November 19, 2018 5:45 AM
> Subject: [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove
> duplicated parser code

[...]

We will need some time to test these changes as they change this driver's flows significantly.
Right now, a quick test of a couple of ethtool commands below is failing with this patch; they used to pass before it.

# ethtool -N eth0 flow-type tcp4 dst-port 5550 action -1
rmgr: Cannot insert RX class rule: Invalid argument
# dmesg -c
[12482.547161] [qede_parse_flower_attr:1930(eth0)]Invalid tc protocol request

It seems like ip_proto is not being set from the case below, as flow_rule_match_key() seems to be returning false.

        if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
                struct flow_match_basic match;

                flow_rule_match_basic(rule, &match);
                ip_proto = match.key->ip_proto;
        }

# ethtool -N eth0 flow-type tcp4 src-ip 192.168.40.200 dst-ip 192.168.50.200 src-port 6660 dst-port 5550 action -1
rmgr: Cannot insert RX class rule: Invalid argument
# dmesg -c

For this case, in qede_tc_parse_ports(), flow_rule_match_key() is returning false.

       if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
                struct flow_match_ports match;
	--
	--
        }
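
Looking at the new translator, used_keys in struct flow_dissector is a
bit mask (dissector_uses_key() tests BIT(key_id)), so I suspect the
assignments there need BIT(), e.g.:

        match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_BASIC);
        match->dissector.offset[FLOW_DISSECTOR_KEY_BASIC] =
                offsetof(struct ethtool_rx_flow_key, basic);

We have not verified this yet, though.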

Thanks.
 


* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-19 20:12 ` David Miller
  2018-11-19 21:19   ` Or Gerlitz
@ 2018-11-20  7:39   ` Jiri Pirko
  2018-11-20 17:16     ` David Miller
  1 sibling, 1 reply; 44+ messages in thread
From: Jiri Pirko @ 2018-11-20  7:39 UTC (permalink / raw)
  To: David Miller
  Cc: pablo, netdev, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Mon, Nov 19, 2018 at 09:12:29PM CET, davem@davemloft.net wrote:
>From: Pablo Neira Ayuso <pablo@netfilter.org>
>Date: Mon, 19 Nov 2018 01:15:07 +0100
>
>> This patchset introduces a kernel intermediate representation (IR) to
>> express ACL hardware offloads, as already described in previous RFC and
>> v1 patchset [1] [2]. The idea is to normalize the frontend U/APIs to use
>> the flow dissectors and the flow actions so drivers can reuse the
>> existing TC offload driver codebase - that has been converted to use the
>> flow_rule infrastructure.
>
>I'm go to bring up the elephant in the room.
>
>I think the real motivation here is to offload netfilter rules to HW,
>and you should be completely honest about that.

Sure, but this patchset is mainly about making the parsing code in
drivers common, no matter where the "flow rule" comes from. If the
netfilter code later uses it, through another ndo/notifier/whatever,
that is a nice side-effect in my opinion.


* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-20  7:39   ` Jiri Pirko
@ 2018-11-20 17:16     ` David Miller
  2018-11-21  7:46       ` Jiri Pirko
  0 siblings, 1 reply; 44+ messages in thread
From: David Miller @ 2018-11-20 17:16 UTC (permalink / raw)
  To: jiri
  Cc: pablo, netdev, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

From: Jiri Pirko <jiri@resnulli.us>
Date: Tue, 20 Nov 2018 08:39:12 +0100

> If later on the netfilter code will use it, through another
> ndo/notifier/whatever, that is side a nice side-effect in my
> opinion.

Netfilter HW offloading is the main motivation of these changes.

You can try to spin it any way you like, but I think this is pretty
clear.

Would the author of these changes even be remotely interested in this
"cleanup", in areas of code he has never been involved in, if that were
not the case?

I think it is very dishonest to portray the situation differently.

Thank you.


* Re: [PATCH 00/12 net-next,v2] add flow_rule infrastructure
  2018-11-20 17:16     ` David Miller
@ 2018-11-21  7:46       ` Jiri Pirko
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Pirko @ 2018-11-21  7:46 UTC (permalink / raw)
  To: David Miller
  Cc: pablo, netdev, thomas.lendacky, f.fainelli, ariel.elior,
	michael.chan, santosh, madalin.bucur, yisen.zhuang, salil.mehta,
	jeffrey.t.kirsher, tariqt, saeedm, jiri, idosch, jakub.kicinski,
	peppe.cavallaro, grygorii.strashko, andrew, vivien.didelot,
	alexandre.torgue, joabreu, linux-net-drivers, ganeshgr, ogerlitz

Tue, Nov 20, 2018 at 06:16:40PM CET, davem@davemloft.net wrote:
>From: Jiri Pirko <jiri@resnulli.us>
>Date: Tue, 20 Nov 2018 08:39:12 +0100
>
>> If later on the netfilter code will use it, through another
>> ndo/notifier/whatever, that is side a nice side-effect in my
>> opinion.
>
>Netfilter HW offloading is the main motivation of these changes.
>
>You can try to spin it any way you like, but I think this is pretty
>clear.
>
>Would the author of these changes be even be remotely interested in
>this "cleanup" in areas of code he has never been involved in if that
>were not the case?

No, of course not. I'm just saying that the cleanup is nice and handy
even if the code were never used by netfilter; therefore I think the
info is irrelevant for the review. Anyway, I get your point.


>
>I think it is very dishonest to portray the situation differently.
>
>Thank you.


end of thread

Thread overview: 44+ messages
2018-11-19  0:15 [PATCH 00/12 net-next,v2] add flow_rule infrastructure Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 01/12] flow_dissector: add flow_rule and flow_match structures and use them Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 02/12] net/mlx5e: support for two independent packet edit actions Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 03/12] flow_dissector: add flow action infrastructure Pablo Neira Ayuso
2018-11-19 11:56   ` Jiri Pirko
2018-11-19 12:35     ` Pablo Neira Ayuso
2018-11-19 13:05       ` Jiri Pirko
2018-11-19 14:03   ` Jiri Pirko
2018-11-19 14:42     ` Pablo Neira Ayuso
2018-11-19 16:26     ` Pablo Neira Ayuso
2018-11-19 16:29       ` Jiri Pirko
2018-11-19  0:15 ` [PATCH net-next,v2 04/12] cls_api: add translator to flow_action representation Pablo Neira Ayuso
2018-11-19 12:12   ` Jiri Pirko
2018-11-19 13:21     ` Pablo Neira Ayuso
2018-11-19 13:22       ` Jiri Pirko
2018-11-19 12:16   ` Jiri Pirko
2018-11-19 12:37     ` Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 05/12] cls_flower: add statistics retrieval infrastructure and use it Pablo Neira Ayuso
2018-11-19 13:57   ` Jiri Pirko
2018-11-19 14:48     ` Pablo Neira Ayuso
2018-11-19 15:04       ` Jiri Pirko
2018-11-19 16:15         ` Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 06/12] drivers: net: use flow action infrastructure Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 07/12] cls_flower: don't expose TC actions to drivers anymore Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 08/12] flow_dissector: add wake-up-on-lan and queue to flow_action Pablo Neira Ayuso
2018-11-19 13:59   ` Jiri Pirko
2018-11-19  0:15 ` [PATCH net-next,v2 09/12] flow_dissector: add basic ethtool_rx_flow_spec to flow_rule structure translator Pablo Neira Ayuso
2018-11-19 14:17   ` Jiri Pirko
2018-11-19 14:43     ` Pablo Neira Ayuso
2018-11-19 14:49   ` Jiri Pirko
2018-11-19 16:16     ` Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next,v2 10/12] dsa: bcm_sf2: use flow_rule infrastructure Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next 11/12] qede: place ethtool_rx_flow_spec after code after TC flower codebase Pablo Neira Ayuso
2018-11-19  0:15 ` [PATCH net-next 12/12] qede: use ethtool_rx_flow_rule() to remove duplicated parser code Pablo Neira Ayuso
2018-11-19 16:00   ` Jiri Pirko
2018-11-19 16:12     ` Pablo Neira Ayuso
2018-11-19 21:44   ` Chopra, Manish
2018-11-19  9:20 ` [PATCH 00/12 net-next,v2] add flow_rule infrastructure Jose Abreu
2018-11-19 10:19   ` Pablo Neira Ayuso
2018-11-19 20:12 ` David Miller
2018-11-19 21:19   ` Or Gerlitz
2018-11-20  7:39   ` Jiri Pirko
2018-11-20 17:16     ` David Miller
2018-11-21  7:46       ` Jiri Pirko
