* [RFC PATCH v3 0/7] tc-flower based cloud filters in i40e
@ 2017-09-13  9:59 ` Amritha Nambiar
  0 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

This patch series enables configuring cloud filters in i40e
using the tc-flower classifier. The only tc-filter action
supported is to redirect packets to a traffic class on the
same device. The mirror/redirect action is extended to
accept a traffic class to achieve this.

The cloud filters are added for a VSI and are cleaned up when
the VSI is deleted. Filters that match on L4 ports need
enhanced admin queue functions with big buffer support for
the extended fields in cloud filter commands.

Example:
# tc qdisc add dev eth0 ingress

# ethtool -K eth0 hw-tc-offload on

# tc filter add dev eth0 protocol ip parent ffff: prio 1 flower\
  dst_ip 192.168.1.1/32 ip_proto udp dst_port 22\
  skip_sw action mirred ingress redirect dev eth0 tclass 1

# tc filter show dev eth0 parent ffff:
filter protocol ip pref 1 flower chain 0
filter protocol ip pref 1 flower chain 0 handle 0x1
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.1.1
  dst_port 22
  skip_sw
  in_hw
        action order 1: mirred (Ingress Redirect to device eth0) stolen tclass 1
        index 7 ref 1 bind 1
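To undo the configuration above (standard tc/ethtool usage, not part
of the original posting), deleting the ingress qdisc also flushes the
filters attached under it:

```shell
# Delete the ingress qdisc; its flower filters are removed with it.
tc qdisc del dev eth0 ingress

# Turn the offload knob back off.
ethtool -K eth0 hw-tc-offload off
```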

v3: Added an extra patch to clean up white-space noise. Cleaned up
some lengthy function names. Used a __be32 array for the IPv6
address. Used a macro for the IP version. Minor formatting changes.

---

Amritha Nambiar (7):
      tc_mirred: Clean up white-space noise
      sched: act_mirred: Traffic class option for mirror/redirect action
      i40e: Map TCs with the VSI seids
      i40e: Cloud filter mode for set_switch_config command
      i40e: Admin queue definitions for cloud filters
      i40e: Clean up of cloud filters
      i40e: Enable cloud filters via tc-flower


 drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c  |    2 
 drivers/net/ethernet/intel/i40e/i40e.h             |   59 +
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |  143 +++
 drivers/net/ethernet/intel/i40e/i40e_common.c      |  193 ++++
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c     |    2 
 drivers/net/ethernet/intel/i40e/i40e_main.c        |  999 +++++++++++++++++++-
 drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   18 
 drivers/net/ethernet/intel/i40e/i40e_type.h        |   10 
 .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |  113 ++
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |    2 
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    |    2 
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |    3 
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  |    3 
 drivers/net/ethernet/netronome/nfp/bpf/offload.c   |    1 
 drivers/net/ethernet/netronome/nfp/flower/action.c |    4 
 include/net/tc_act/tc_mirred.h                     |   16 
 include/uapi/linux/tc_act/tc_mirred.h              |    9 
 net/dsa/slave.c                                    |    3 
 net/sched/act_mirred.c                             |   15 
 19 files changed, 1547 insertions(+), 50 deletions(-)

^ permalink raw reply	[flat|nested] 30+ messages in thread


* [RFC PATCH v3 1/7] tc_mirred: Clean up white-space noise
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 include/uapi/linux/tc_act/tc_mirred.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/tc_act/tc_mirred.h b/include/uapi/linux/tc_act/tc_mirred.h
index 3d7a2b3..69038c2 100644
--- a/include/uapi/linux/tc_act/tc_mirred.h
+++ b/include/uapi/linux/tc_act/tc_mirred.h
@@ -9,13 +9,13 @@
 #define TCA_EGRESS_MIRROR 2 /* mirror packet to EGRESS */
 #define TCA_INGRESS_REDIR 3  /* packet redirect to INGRESS*/
 #define TCA_INGRESS_MIRROR 4 /* mirror packet to INGRESS */
-                                                                                
+
 struct tc_mirred {
 	tc_gen;
 	int                     eaction;   /* one of IN/EGRESS_MIRROR/REDIR */
 	__u32                   ifindex;  /* ifindex of egress port */
 };
-                                                                                
+
 enum {
 	TCA_MIRRED_UNSPEC,
 	TCA_MIRRED_TM,
@@ -24,5 +24,5 @@ enum {
 	__TCA_MIRRED_MAX
 };
 #define TCA_MIRRED_MAX (__TCA_MIRRED_MAX - 1)
-                                                                                
+
 #endif



* [RFC PATCH v3 2/7] sched: act_mirred: Traffic class option for mirror/redirect action
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Adds an optional traffic class parameter to the mirror/redirect
action. The action is extended to forward to a traffic class on
the device when a traffic class index is provided in addition to
the device's ifindex.

Example:
# tc filter add dev eth0 protocol ip parent ffff: prio 1 flower\
  dst_ip 192.168.1.1/32 ip_proto udp dst_port 22\
  skip_sw action mirred ingress redirect dev eth0 tclass 1

v2: Introduced the is_tcf_mirred_tc() helper function to check
whether the rule is supported by current offloaders. Removed the
additional definitions for the maximum number of TCs and its bitmask,
and replaced their uses with the existing defines in linux/netdevice.h.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c  |    2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c      |    2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    |    2 +-
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c     |    3 ++-
 .../net/ethernet/mellanox/mlxsw/spectrum_flower.c  |    3 ++-
 drivers/net/ethernet/netronome/nfp/bpf/offload.c   |    1 +
 drivers/net/ethernet/netronome/nfp/flower/action.c |    4 ++--
 include/net/tc_act/tc_mirred.h                     |   16 ++++++++++++++++
 include/uapi/linux/tc_act/tc_mirred.h              |    3 +++
 net/dsa/slave.c                                    |    3 ++-
 net/sched/act_mirred.c                             |   15 +++++++++++++++
 11 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
index 48970ba..54a7004 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
@@ -113,7 +113,7 @@ static int fill_action_fields(struct adapter *adap,
 		}
 
 		/* Re-direct to specified port in hardware. */
-		if (is_tcf_mirred_egress_redirect(a)) {
+		if (is_tcf_mirred_egress_redirect(a) && !is_tcf_mirred_tc(a)) {
 			struct net_device *n_dev;
 			unsigned int i, index;
 			bool found = false;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 3d3739f..b46d45d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -8999,7 +8999,7 @@ static int parse_tc_actions(struct ixgbe_adapter *adapter,
 		}
 
 		/* Redirect to a VF or a offloaded macvlan */
-		if (is_tcf_mirred_egress_redirect(a)) {
+		if (is_tcf_mirred_egress_redirect(a) && !is_tcf_mirred_tc(a)) {
 			int ifindex = tcf_mirred_ifindex(a);
 
 			err = handle_redirect_action(adapter, ifindex, queue,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index da503e6..f2352a0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1869,7 +1869,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 			return -EOPNOTSUPP;
 		}
 
-		if (is_tcf_mirred_egress_redirect(a)) {
+		if (is_tcf_mirred_egress_redirect(a) && !is_tcf_mirred_tc(a)) {
 			int ifindex = tcf_mirred_ifindex(a);
 			struct net_device *out_dev, *encap_dev = NULL;
 			struct mlx5e_priv *out_priv;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index ed7cd6c..5ec56f4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1641,7 +1641,8 @@ static int mlxsw_sp_port_add_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
 	tcf_exts_to_list(f->exts, &actions);
 	a = list_first_entry(&actions, struct tc_action, list);
 
-	if (is_tcf_mirred_egress_mirror(a) && protocol == htons(ETH_P_ALL)) {
+	if (is_tcf_mirred_egress_mirror(a) && !is_tcf_mirred_tc(a) &&
+	    protocol == htons(ETH_P_ALL)) {
 		struct mlxsw_sp_port_mall_mirror_tc_entry *mirror;
 
 		mall_tc_entry->type = MLXSW_SP_PORT_MALL_MIRROR;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 8aace9a..88403a1 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -85,7 +85,8 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
 
 			group_id = mlxsw_sp_acl_ruleset_group_id(ruleset);
 			mlxsw_sp_acl_rulei_act_jump(rulei, group_id);
-		} else if (is_tcf_mirred_egress_redirect(a)) {
+		} else if (is_tcf_mirred_egress_redirect(a) &&
+			   !is_tcf_mirred_tc(a)) {
 			int ifindex = tcf_mirred_ifindex(a);
 			struct net_device *out_dev;
 			struct mlxsw_sp_fid *fid;
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
index a88bb5b..3b00d4b 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
@@ -131,6 +131,7 @@ nfp_net_bpf_get_act(struct nfp_net *nn, struct tc_cls_bpf_offload *cls_bpf)
 			return NN_ACT_TC_DROP;
 
 		if (is_tcf_mirred_egress_redirect(a) &&
+		    !is_tcf_mirred_tc(a) &&
 		    tcf_mirred_ifindex(a) == nn->dp.netdev->ifindex)
 			return NN_ACT_TC_REDIR;
 	}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
index db97506..7ceeaa9 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
@@ -132,7 +132,7 @@ nfp_flower_loop_action(const struct tc_action *a,
 
 	if (is_tcf_gact_shot(a)) {
 		nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_DROP);
-	} else if (is_tcf_mirred_egress_redirect(a)) {
+	} else if (is_tcf_mirred_egress_redirect(a) && !is_tcf_mirred_tc(a)) {
 		if (*a_len + sizeof(struct nfp_fl_output) > NFP_FL_MAX_A_SIZ)
 			return -EOPNOTSUPP;
 
@@ -142,7 +142,7 @@ nfp_flower_loop_action(const struct tc_action *a,
 			return err;
 
 		*a_len += sizeof(struct nfp_fl_output);
-	} else if (is_tcf_mirred_egress_mirror(a)) {
+	} else if (is_tcf_mirred_egress_mirror(a) && !is_tcf_mirred_tc(a)) {
 		if (*a_len + sizeof(struct nfp_fl_output) > NFP_FL_MAX_A_SIZ)
 			return -EOPNOTSUPP;
 
diff --git a/include/net/tc_act/tc_mirred.h b/include/net/tc_act/tc_mirred.h
index 604bc31..59cb935 100644
--- a/include/net/tc_act/tc_mirred.h
+++ b/include/net/tc_act/tc_mirred.h
@@ -9,6 +9,8 @@ struct tcf_mirred {
 	int			tcfm_eaction;
 	int			tcfm_ifindex;
 	bool			tcfm_mac_header_xmit;
+	u8			tcfm_tc;
+	u32			flags;
 	struct net_device __rcu	*tcfm_dev;
 	struct list_head	tcfm_list;
 };
@@ -37,4 +39,18 @@ static inline int tcf_mirred_ifindex(const struct tc_action *a)
 	return to_mirred(a)->tcfm_ifindex;
 }
 
+static inline bool is_tcf_mirred_tc(const struct tc_action *a)
+{
+#ifdef CONFIG_NET_CLS_ACT
+	if (a->ops && a->ops->type == TCA_ACT_MIRRED)
+		return to_mirred(a)->flags == MIRRED_F_TCLASS;
+#endif
+	return false;
+}
+
+static inline u8 tcf_mirred_tc(const struct tc_action *a)
+{
+	return to_mirred(a)->tcfm_tc;
+}
+
 #endif /* __NET_TC_MIR_H */
diff --git a/include/uapi/linux/tc_act/tc_mirred.h b/include/uapi/linux/tc_act/tc_mirred.h
index 69038c2..12f1767 100644
--- a/include/uapi/linux/tc_act/tc_mirred.h
+++ b/include/uapi/linux/tc_act/tc_mirred.h
@@ -10,6 +10,8 @@
 #define TCA_INGRESS_REDIR 3  /* packet redirect to INGRESS*/
 #define TCA_INGRESS_MIRROR 4 /* mirror packet to INGRESS */
 
+#define MIRRED_F_TCLASS	0x1
+
 struct tc_mirred {
 	tc_gen;
 	int                     eaction;   /* one of IN/EGRESS_MIRROR/REDIR */
@@ -21,6 +23,7 @@ enum {
 	TCA_MIRRED_TM,
 	TCA_MIRRED_PARMS,
 	TCA_MIRRED_PAD,
+	TCA_MIRRED_TCLASS,
 	__TCA_MIRRED_MAX
 };
 #define TCA_MIRRED_MAX (__TCA_MIRRED_MAX - 1)
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 2afa995..c0c2b1c 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -846,7 +846,8 @@ static int dsa_slave_add_cls_matchall(struct net_device *dev,
 	tcf_exts_to_list(cls->exts, &actions);
 	a = list_first_entry(&actions, struct tc_action, list);
 
-	if (is_tcf_mirred_egress_mirror(a) && protocol == htons(ETH_P_ALL)) {
+	if (is_tcf_mirred_egress_mirror(a) && !is_tcf_mirred_tc(a) &&
+	    protocol == htons(ETH_P_ALL)) {
 		struct dsa_mall_mirror_tc_entry *mirror;
 
 		ifindex = tcf_mirred_ifindex(a);
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index 416627c..6938804 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -66,6 +66,7 @@ static void tcf_mirred_release(struct tc_action *a, int bind)
 
 static const struct nla_policy mirred_policy[TCA_MIRRED_MAX + 1] = {
 	[TCA_MIRRED_PARMS]	= { .len = sizeof(struct tc_mirred) },
+	[TCA_MIRRED_TCLASS]	= { .type = NLA_U8 },
 };
 
 static unsigned int mirred_net_id;
@@ -82,6 +83,8 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 	struct tcf_mirred *m;
 	struct net_device *dev;
 	bool exists = false;
+	u8 *tclass = NULL;
+	u32 flags = 0;
 	int ret;
 
 	if (nla == NULL)
@@ -91,6 +94,12 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 		return ret;
 	if (tb[TCA_MIRRED_PARMS] == NULL)
 		return -EINVAL;
+	if (tb[TCA_MIRRED_TCLASS]) {
+		tclass = nla_data(tb[TCA_MIRRED_TCLASS]);
+		if (*tclass >= TC_MAX_QUEUE)
+			return -EINVAL;
+		flags |= MIRRED_F_TCLASS;
+	}
 	parm = nla_data(tb[TCA_MIRRED_PARMS]);
 
 	exists = tcf_idr_check(tn, parm->index, a, bind);
@@ -138,6 +147,7 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 	ASSERT_RTNL();
 	m->tcf_action = parm->action;
 	m->tcfm_eaction = parm->eaction;
+	m->flags = flags;
 	if (dev != NULL) {
 		m->tcfm_ifindex = parm->ifindex;
 		if (ret != ACT_P_CREATED)
@@ -145,6 +155,8 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
 		dev_hold(dev);
 		rcu_assign_pointer(m->tcfm_dev, dev);
 		m->tcfm_mac_header_xmit = mac_header_xmit;
+		if (flags & MIRRED_F_TCLASS)
+			m->tcfm_tc = *tclass & TC_BITMASK;
 	}
 
 	if (ret == ACT_P_CREATED) {
@@ -258,6 +270,9 @@ static int tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a, int bind,
 
 	if (nla_put(skb, TCA_MIRRED_PARMS, sizeof(opt), &opt))
 		goto nla_put_failure;
+	if ((m->flags & MIRRED_F_TCLASS) &&
+	    nla_put_u8(skb, TCA_MIRRED_TCLASS, m->tcfm_tc))
+		goto nla_put_failure;
 
 	tcf_tm_dump(&t, &m->tcf_tm);
 	if (nla_put_64bit(skb, TCA_MIRRED_TM, sizeof(t), &t, TCA_MIRRED_PAD))



* [RFC PATCH v3 3/7] i40e: Map TCs with the VSI seids
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Add a mapping of TCs to the seids of the channel VSIs. TC0 is
mapped to the main VSI's seid, and every other TC is mapped to
the seid of its corresponding channel VSI.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h      |    1 +
 drivers/net/ethernet/intel/i40e/i40e_main.c |    2 ++
 2 files changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 266e1dc..d846da9 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -738,6 +738,7 @@ struct i40e_vsi {
 	u16 next_base_queue;	/* next queue to be used for channel setup */
 
 	struct list_head ch_list;
+	u16 tc_seid_map[I40E_MAX_TRAFFIC_CLASS];
 
 	void *priv;	/* client driver data reference. */
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 5ef3927..0455283 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -6093,6 +6093,7 @@ static int i40e_configure_queue_channels(struct i40e_vsi *vsi)
 	int ret = 0, i;
 
 	/* Create app vsi with the TCs. Main VSI with TC0 is already set up */
+	vsi->tc_seid_map[0] = vsi->seid;
 	for (i = 1; i < I40E_MAX_TRAFFIC_CLASS; i++) {
 		if (vsi->tc_config.enabled_tc & BIT(i)) {
 			ch = kzalloc(sizeof(*ch), GFP_KERNEL);
@@ -6122,6 +6123,7 @@ static int i40e_configure_queue_channels(struct i40e_vsi *vsi)
 					i, ch->num_queue_pairs);
 				goto err_free;
 			}
+			vsi->tc_seid_map[i] = ch->seid;
 		}
 	}
 	return ret;


* [RFC PATCH v3 4/7] i40e: Cloud filter mode for set_switch_config command
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Add definitions for the L4 filter and switch modes used by cloud
filters, and extend the set_switch_config command to carry the
additional cloud filter mode.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h |   30 ++++++++++++++++++++-
 drivers/net/ethernet/intel/i40e/i40e_common.c     |    4 ++-
 drivers/net/ethernet/intel/i40e/i40e_ethtool.c    |    2 +
 drivers/net/ethernet/intel/i40e/i40e_main.c       |    2 +
 drivers/net/ethernet/intel/i40e/i40e_prototype.h  |    2 +
 drivers/net/ethernet/intel/i40e/i40e_type.h       |    9 ++++++
 6 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index a8f65ae..e41050a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -790,7 +790,35 @@ struct i40e_aqc_set_switch_config {
 	 */
 	__le16	first_tag;
 	__le16	second_tag;
-	u8	reserved[6];
+	/* The next byte is split as follows:
+	 * Bit 7: 0: no action, 1: switch to the mode defined by bits 6:0
+	 * Bit 6: 0: destination port, 1: source port
+	 * Bits 5:4: L4 type
+	 * 0: rsvd
+	 * 1: TCP
+	 * 2: UDP
+	 * 3: both TCP and UDP
+	 * Bits 3:0: mode
+	 * 0: default mode
+	 * 1: L4 port only mode
+	 * 2: non-tunneled mode
+	 * 3: tunneled mode
+	 */
+#define I40E_AQ_SET_SWITCH_BIT7_VALID		0x80
+
+#define I40E_AQ_SET_SWITCH_L4_SRC_PORT		0x40
+
+#define I40E_AQ_SET_SWITCH_L4_TYPE_RSVD		0x00
+#define I40E_AQ_SET_SWITCH_L4_TYPE_TCP		0x10
+#define I40E_AQ_SET_SWITCH_L4_TYPE_UDP		0x20
+#define I40E_AQ_SET_SWITCH_L4_TYPE_BOTH		0x30
+
+#define I40E_AQ_SET_SWITCH_MODE_DEFAULT		0x00
+#define I40E_AQ_SET_SWITCH_MODE_L4_PORT		0x01
+#define I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL	0x02
+#define I40E_AQ_SET_SWITCH_MODE_TUNNEL		0x03
+	u8	mode;
+	u8	rsvd5[5];
 };
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_set_switch_config);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index e7d8a01..9567702 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -2405,13 +2405,14 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
  * @hw: pointer to the hardware structure
  * @flags: bit flag values to set
  * @valid_flags: which bit flags to set
+ * @mode: cloud filter mode
  * @cmd_details: pointer to command details structure or NULL
  *
  * Set switch configuration bits
  **/
 enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
 						u16 flags,
-						u16 valid_flags,
+						u16 valid_flags, u8 mode,
 				struct i40e_asq_cmd_details *cmd_details)
 {
 	struct i40e_aq_desc desc;
@@ -2423,6 +2424,7 @@ enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
 					  i40e_aqc_opc_set_switch_config);
 	scfg->flags = cpu_to_le16(flags);
 	scfg->valid_flags = cpu_to_le16(valid_flags);
+	scfg->mode = mode;
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		scfg->switch_tag = cpu_to_le16(hw->switch_tag);
 		scfg->first_tag = cpu_to_le16(hw->first_tag);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 3fa90a6..7a0aa08 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -4186,7 +4186,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
 			sw_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
 		valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
 		ret = i40e_aq_set_switch_config(&pf->hw, sw_flags, valid_flags,
-						NULL);
+						0, NULL);
 		if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
 			dev_info(&pf->pdev->dev,
 				 "couldn't set switch config bits, err %s aq_err %s\n",
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 0455283..60c689a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -12146,7 +12146,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 		u16 valid_flags;
 
 		valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
-		ret = i40e_aq_set_switch_config(&pf->hw, flags, valid_flags,
+		ret = i40e_aq_set_switch_config(&pf->hw, flags, valid_flags, 0,
 						NULL);
 		if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
 			dev_info(&pf->pdev->dev,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 0150256..92869f5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -190,7 +190,7 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
 				struct i40e_asq_cmd_details *cmd_details);
 enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
 						u16 flags,
-						u16 valid_flags,
+						u16 valid_flags, u8 mode,
 				struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
 				enum i40e_aq_resources_ids resource,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 0410fcb..c019f46 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -279,6 +279,15 @@ struct i40e_hw_capabilities {
 #define I40E_NVM_IMAGE_TYPE_CLOUD	0x2
 #define I40E_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
 
+	/* Cloud filter modes:
+	 * Mode1: Filter on L4 port only
+	 * Mode2: Filter for non-tunneled traffic
+	 * Mode3: Filter for tunnel traffic
+	 */
+#define I40E_NVM_IMAGE_TYPE_MODE1	0x6
+#define I40E_NVM_IMAGE_TYPE_MODE2	0x7
+#define I40E_NVM_IMAGE_TYPE_MODE3	0x8
+
 	u32  management_mode;
 	u32  mng_protocols_over_mctp;
 #define I40E_MNG_PROTOCOL_PLDM		0x2


* [RFC PATCH v3 5/7] i40e: Admin queue definitions for cloud filters
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Add new admin queue definitions and extended fields for cloud
filter support. Define a big buffer for the extended general fields
in the Add/Remove Cloud Filters command.

v3: Shortened some lengthy struct names.
v2: Added I40E_CHECK_STRUCT_LEN check to AQ command structs and
added AQ definitions to i40evf for consistency based on Shannon's
feedback.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |  110 ++++++++++++++++++++
 .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |  110 ++++++++++++++++++++
 2 files changed, 216 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index e41050a..2e567c2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1371,14 +1371,16 @@ struct i40e_aqc_add_remove_cloud_filters {
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
 					I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
-	u8	reserved2[4];
+	u8	big_buffer_flag;
+#define I40E_AQC_ADD_CLOUD_CMD_BB	1
+	u8	reserved2[3];
 	__le32	addr_high;
 	__le32	addr_low;
 };
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters);
 
-struct i40e_aqc_add_remove_cloud_filters_element_data {
+struct i40e_aqc_cloud_filters_element_data {
 	u8	outer_mac[6];
 	u8	inner_mac[6];
 	__le16	inner_vlan;
@@ -1408,6 +1410,13 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 #define I40E_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
 #define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
 #define I40E_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+/* flag to be used when adding cloud filter: IP + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_IP_PORT		0x0010
+/* flag to be used when adding cloud filter: Dest MAC + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT		0x0011
+/* flag to be used when adding cloud filter: Dest MAC + VLAN + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT		0x0012
 
 #define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
 #define I40E_AQC_ADD_CLOUD_VNK_SHIFT			6
@@ -1442,6 +1451,49 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 	u8	response_reserved[7];
 };
 
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_cloud_filters_element_data);
+
+/* i40e_aqc_cloud_filters_element_bb is used when
+ * I40E_AQC_ADD_CLOUD_CMD_BB flag is set.
+ */
+struct i40e_aqc_cloud_filters_element_bb {
+	struct i40e_aqc_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_cloud_filters_element_bb);
+
 struct i40e_aqc_remove_cloud_filters_completion {
 	__le16 perfect_ovlan_used;
 	__le16 perfect_ovlan_free;
@@ -1453,6 +1505,60 @@ struct i40e_aqc_remove_cloud_filters_completion {
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion);
 
+/* Replace filter Command 0x025F
+ * uses struct i40e_aqc_replace_cloud_filters_cmd,
+ * and the generic indirect completion structure
+ */
+struct i40e_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+I40E_CHECK_STRUCT_LEN(4, i40e_filter_data);
+
+struct i40e_aqc_replace_cloud_filters_cmd {
+	u8      valid_flags;
+#define I40E_AQC_REPLACE_L1_FILTER		0x0
+#define I40E_AQC_REPLACE_CLOUD_FILTER		0x1
+#define I40E_AQC_GET_CLOUD_FILTERS		0x2
+#define I40E_AQC_MIRROR_CLOUD_FILTER		0x4
+#define I40E_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8      old_filter_type;
+	u8      new_filter_type;
+	u8      tr_bit;
+	u8      reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_replace_cloud_filters_cmd);
+
+struct i40e_aqc_replace_cloud_filters_cmd_buf {
+	u8      data[32];
+/* Filter type INPUT codes */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	BIT(7)
+
+/* Field Vector offsets */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA	0
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH	6
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG	7
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN	8
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN	9
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN	10
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY	11
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC	12
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA	14
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA	15
+
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN	37
+	struct i40e_filter_data filters[8];
+};
+
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_replace_cloud_filters_cmd_buf);
+
 /* Add Mirror Rule (indirect or direct 0x0260)
  * Delete Mirror Rule (indirect or direct 0x0261)
  * note: some rule types (4,5) do not use an external buffer.
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
index 60c892f..b8c78bf 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
@@ -1339,14 +1339,16 @@ struct i40e_aqc_add_remove_cloud_filters {
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
 					I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
-	u8	reserved2[4];
+	u8	big_buffer_flag;
+#define I40E_AQC_ADD_CLOUD_CMD_BB	1
+	u8	reserved2[3];
 	__le32	addr_high;
 	__le32	addr_low;
 };
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters);
 
-struct i40e_aqc_add_remove_cloud_filters_element_data {
+struct i40e_aqc_cloud_filters_element_data {
 	u8	outer_mac[6];
 	u8	inner_mac[6];
 	__le16	inner_vlan;
@@ -1376,6 +1378,13 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 #define I40E_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
 #define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
 #define I40E_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+/* flag to be used when adding cloud filter: IP + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_IP_PORT		0x0010
+/* flag to be used when adding cloud filter: Dest MAC + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT		0x0011
+/* flag to be used when adding cloud filter: Dest MAC + VLAN + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT		0x0012
 
 #define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
 #define I40E_AQC_ADD_CLOUD_VNK_SHIFT			6
@@ -1410,6 +1419,49 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 	u8	response_reserved[7];
 };
 
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_cloud_filters_element_data);
+
+/* i40e_aqc_cloud_filters_element_bb is used when
+ * I40E_AQC_ADD_CLOUD_CMD_BB flag is set.
+ */
+struct i40e_aqc_cloud_filters_element_bb {
+	struct i40e_aqc_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_cloud_filters_element_bb);
+
 struct i40e_aqc_remove_cloud_filters_completion {
 	__le16 perfect_ovlan_used;
 	__le16 perfect_ovlan_free;
@@ -1421,6 +1473,60 @@ struct i40e_aqc_remove_cloud_filters_completion {
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion);
 
+/* Replace filter Command 0x025F
+ * uses struct i40e_aqc_replace_cloud_filters_cmd,
+ * and the generic indirect completion structure
+ */
+struct i40e_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+I40E_CHECK_STRUCT_LEN(4, i40e_filter_data);
+
+struct i40e_aqc_replace_cloud_filters_cmd {
+	u8      valid_flags;
+#define I40E_AQC_REPLACE_L1_FILTER		0x0
+#define I40E_AQC_REPLACE_CLOUD_FILTER		0x1
+#define I40E_AQC_GET_CLOUD_FILTERS		0x2
+#define I40E_AQC_MIRROR_CLOUD_FILTER		0x4
+#define I40E_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8      old_filter_type;
+	u8      new_filter_type;
+	u8      tr_bit;
+	u8      reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_replace_cloud_filters_cmd);
+
+struct i40e_aqc_replace_cloud_filters_cmd_buf {
+	u8      data[32];
+/* Filter type INPUT codes */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	BIT(7)
+
+/* Field Vector offsets */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA	0
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH	6
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG	7
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN	8
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN	9
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN	10
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY	11
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC	12
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA	14
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA	15
+
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN	37
+	struct i40e_filter_data filters[8];
+};
+
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_replace_cloud_filters_cmd_buf);
+
 /* Add Mirror Rule (indirect or direct 0x0260)
  * Delete Mirror Rule (indirect or direct 0x0261)
  * note: some rule types (4,5) do not use an external buffer.


* [Intel-wired-lan] [RFC PATCH v3 5/7] i40e: Admin queue definitions for cloud filters
@ 2017-09-13  9:59   ` Amritha Nambiar
  0 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan

Add new admin queue definitions and extended fields for cloud
filter support. Define big buffer for extended general fields
in Add/Remove Cloud filters command.

v3: Shortened some lengthy struct names.
v2: Added I40E_CHECK_STRUCT_LEN check to AQ command structs and
added AQ definitions to i40evf for consistency based on Shannon's
feedback.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |  110 ++++++++++++++++++++
 .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |  110 ++++++++++++++++++++
 2 files changed, 216 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index e41050a..2e567c2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1371,14 +1371,16 @@ struct i40e_aqc_add_remove_cloud_filters {
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
 					I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
-	u8	reserved2[4];
+	u8	big_buffer_flag;
+#define I40E_AQC_ADD_CLOUD_CMD_BB	1
+	u8	reserved2[3];
 	__le32	addr_high;
 	__le32	addr_low;
 };
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters);
 
-struct i40e_aqc_add_remove_cloud_filters_element_data {
+struct i40e_aqc_cloud_filters_element_data {
 	u8	outer_mac[6];
 	u8	inner_mac[6];
 	__le16	inner_vlan;
@@ -1408,6 +1410,13 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 #define I40E_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
 #define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
 #define I40E_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+/* flag to be used when adding cloud filter: IP + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_IP_PORT		0x0010
+/* flag to be used when adding cloud filter: Dest MAC + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT		0x0011
+/* flag to be used when adding cloud filter: Dest MAC + VLAN + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT		0x0012
 
 #define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
 #define I40E_AQC_ADD_CLOUD_VNK_SHIFT			6
@@ -1442,6 +1451,49 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 	u8	response_reserved[7];
 };
 
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_cloud_filters_element_data);
+
+/* i40e_aqc_cloud_filters_element_bb is used when
+ * I40E_AQC_CLOUD_CMD_BB flag is set.
+ */
+struct i40e_aqc_cloud_filters_element_bb {
+	struct i40e_aqc_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_cloud_filters_element_bb);
+
 struct i40e_aqc_remove_cloud_filters_completion {
 	__le16 perfect_ovlan_used;
 	__le16 perfect_ovlan_free;
@@ -1453,6 +1505,60 @@ struct i40e_aqc_remove_cloud_filters_completion {
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion);
 
+/* Replace filter Command 0x025F
+ * uses the i40e_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct i40e_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+I40E_CHECK_STRUCT_LEN(4, i40e_filter_data);
+
+struct i40e_aqc_replace_cloud_filters_cmd {
+	u8      valid_flags;
+#define I40E_AQC_REPLACE_L1_FILTER		0x0
+#define I40E_AQC_REPLACE_CLOUD_FILTER		0x1
+#define I40E_AQC_GET_CLOUD_FILTERS		0x2
+#define I40E_AQC_MIRROR_CLOUD_FILTER		0x4
+#define I40E_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8      old_filter_type;
+	u8      new_filter_type;
+	u8      tr_bit;
+	u8      reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_replace_cloud_filters_cmd);
+
+struct i40e_aqc_replace_cloud_filters_cmd_buf {
+	u8      data[32];
+/* Filter type INPUT codes */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	BIT(7)
+
+/* Field Vector offsets */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA	0
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH	6
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG	7
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN	8
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN	9
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN	10
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY	11
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC	12
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA	14
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA	15
+
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN	37
+	struct i40e_filter_data filters[8];
+};
+
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_replace_cloud_filters_cmd_buf);
+
 /* Add Mirror Rule (indirect or direct 0x0260)
  * Delete Mirror Rule (indirect or direct 0x0261)
  * note: some rule types (4,5) do not use an external buffer.
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
index 60c892f..b8c78bf 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
@@ -1339,14 +1339,16 @@ struct i40e_aqc_add_remove_cloud_filters {
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
 #define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
 					I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
-	u8	reserved2[4];
+	u8	big_buffer_flag;
+#define I40E_AQC_ADD_CLOUD_CMD_BB	1
+	u8	reserved2[3];
 	__le32	addr_high;
 	__le32	addr_low;
 };
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters);
 
-struct i40e_aqc_add_remove_cloud_filters_element_data {
+struct i40e_aqc_cloud_filters_element_data {
 	u8	outer_mac[6];
 	u8	inner_mac[6];
 	__le16	inner_vlan;
@@ -1376,6 +1378,13 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 #define I40E_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
 #define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
 #define I40E_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+/* flag to be used when adding cloud filter: IP + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_IP_PORT		0x0010
+/* flag to be used when adding cloud filter: Dest MAC + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT		0x0011
+/* flag to be used when adding cloud filter: Dest MAC + VLAN + L4 Port */
+#define I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT		0x0012
 
 #define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
 #define I40E_AQC_ADD_CLOUD_VNK_SHIFT			6
@@ -1410,6 +1419,49 @@ struct i40e_aqc_add_remove_cloud_filters_element_data {
 	u8	response_reserved[7];
 };
 
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_cloud_filters_element_data);
+
+/* i40e_aqc_cloud_filters_element_bb is used when
+ * I40E_AQC_ADD_CLOUD_CMD_BB flag is set.
+ */
+struct i40e_aqc_cloud_filters_element_bb {
+	struct i40e_aqc_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define I40E_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_cloud_filters_element_bb);
+
 struct i40e_aqc_remove_cloud_filters_completion {
 	__le16 perfect_ovlan_used;
 	__le16 perfect_ovlan_free;
@@ -1421,6 +1473,60 @@ struct i40e_aqc_remove_cloud_filters_completion {
 
 I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion);
 
+/* Replace filter Command 0x025F
+ * uses the i40e_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct i40e_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+I40E_CHECK_STRUCT_LEN(4, i40e_filter_data);
+
+struct i40e_aqc_replace_cloud_filters_cmd {
+	u8      valid_flags;
+#define I40E_AQC_REPLACE_L1_FILTER		0x0
+#define I40E_AQC_REPLACE_CLOUD_FILTER		0x1
+#define I40E_AQC_GET_CLOUD_FILTERS		0x2
+#define I40E_AQC_MIRROR_CLOUD_FILTER		0x4
+#define I40E_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8      old_filter_type;
+	u8      new_filter_type;
+	u8      tr_bit;
+	u8      reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_replace_cloud_filters_cmd);
+
+struct i40e_aqc_replace_cloud_filters_cmd_buf {
+	u8      data[32];
+/* Filter type INPUT codes */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	BIT(7)
+
+/* Field Vector offsets */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA	0
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH	6
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG	7
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN	8
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN	9
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN	10
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY	11
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC	12
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA	14
+/* big FLU */
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA	15
+
+#define I40E_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN	37
+	struct i40e_filter_data filters[8];
+};
+
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_replace_cloud_filters_cmd_buf);
+
 /* Add Mirror Rule (indirect or direct 0x0260)
  * Delete Mirror Rule (indirect or direct 0x0261)
  * note: some rule types (4,5) do not use an external buffer.



* [RFC PATCH v3 6/7] i40e: Clean up of cloud filters
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

Introduce the cloud filter data structure and clean up the cloud
filters associated with the device.

v2: Moved field comments in struct i40e_cloud_filter to the right.
Removed hlist_empty check from i40e_cloud_filter_exit()

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h      |    9 +++++++++
 drivers/net/ethernet/intel/i40e/i40e_main.c |   24 ++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index d846da9..6018fb6 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -252,6 +252,12 @@ struct i40e_fdir_filter {
 	u32 fd_id;
 };
 
+struct i40e_cloud_filter {
+	struct hlist_node cloud_node;
+	unsigned long cookie;
+	u16 seid;	/* filter control */
+};
+
 #define I40E_ETH_P_LLDP			0x88cc
 
 #define I40E_DCB_PRIO_TYPE_STRICT	0
@@ -419,6 +425,9 @@ struct i40e_pf {
 	struct i40e_udp_port_config udp_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
 	u16 pending_udp_bitmap;
 
+	struct hlist_head cloud_filter_list;
+	u16 num_cloud_filters;
+
 	enum i40e_interrupt_policy int_policy;
 	u16 rx_itr_default;
 	u16 tx_itr_default;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 60c689a..afcf08a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -6922,6 +6922,26 @@ static void i40e_fdir_filter_exit(struct i40e_pf *pf)
 }
 
 /**
+ * i40e_cloud_filter_exit - Cleans up the Cloud Filters
+ * @pf: Pointer to PF
+ *
+ * This function destroys the hlist in which all the cloud
+ * filters were saved.
+ **/
+static void i40e_cloud_filter_exit(struct i40e_pf *pf)
+{
+	struct i40e_cloud_filter *cfilter;
+	struct hlist_node *node;
+
+	hlist_for_each_entry_safe(cfilter, node,
+				  &pf->cloud_filter_list, cloud_node) {
+		hlist_del(&cfilter->cloud_node);
+		kfree(cfilter);
+	}
+	pf->num_cloud_filters = 0;
+}
+
+/**
  * i40e_close - Disables a network interface
  * @netdev: network interface device structure
  *
@@ -12176,6 +12196,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 			vsi = i40e_vsi_reinit_setup(pf->vsi[pf->lan_vsi]);
 		if (!vsi) {
 			dev_info(&pf->pdev->dev, "setup of MAIN VSI failed\n");
+			i40e_cloud_filter_exit(pf);
 			i40e_fdir_teardown(pf);
 			return -EAGAIN;
 		}
@@ -13010,6 +13031,8 @@ static void i40e_remove(struct pci_dev *pdev)
 	if (pf->vsi[pf->lan_vsi])
 		i40e_vsi_release(pf->vsi[pf->lan_vsi]);
 
+	i40e_cloud_filter_exit(pf);
+
 	/* remove attached clients */
 	if (pf->flags & I40E_FLAG_IWARP_ENABLED) {
 		ret_code = i40e_lan_del_device(pf);
@@ -13241,6 +13264,7 @@ static void i40e_shutdown(struct pci_dev *pdev)
 
 	del_timer_sync(&pf->service_timer);
 	cancel_work_sync(&pf->service_task);
+	i40e_cloud_filter_exit(pf);
 	i40e_fdir_teardown(pf);
 
 	/* Client close must be called explicitly here because the timer



* [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13  9:59   ` Amritha Nambiar
  -1 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan, jeffrey.t.kirsher
  Cc: alexander.h.duyck, netdev, amritha.nambiar

This patch enables tc-flower based hardware offloads. A tc flower
filter provided by the kernel is configured as a driver-specific
cloud filter. The patch implements the functions and admin queue
commands needed to support cloud filters in the driver and adds
cloud filters to configure these tc-flower filters.

The only action supported is to redirect packets to a traffic class
on the same device.

# tc qdisc add dev eth0 ingress
# ethtool -K eth0 hw-tc-offload on

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
  action mirred ingress redirect dev eth0 tclass 0

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw\
  action mirred ingress redirect dev eth0 tclass 1

# tc filter add dev eth0 protocol ipv6 parent ffff:\
  prio 3 flower dst_ip fe8::200:1\
  ip_proto udp dst_port 66 skip_sw\
  action mirred ingress redirect dev eth0 tclass 1

Delete tc flower filter:
Example:

# tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
# tc filter del dev eth0 parent ffff:

Flow Director Sideband is disabled while cloud filters are being
configured via tc-flower and remains disabled as long as any cloud
filter exists.

Matches that are unsupported when cloud filters are added using the
enhanced big buffer cloud filter mode of the underlying switch:
1. Source port and source IP.
2. Combined MAC address and IP fields.
3. Not specifying an L4 port.

These filter matches can, however, be used to redirect traffic to
the main VSI (tc 0), which does not require the enhanced big buffer
cloud filter support.
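
As a concrete sketch of that fallback (same hypothetical eth0 and the tclass extension from patch 2/7 as in the examples above), a combined MAC + IP match is accepted when it redirects to the main VSI:

```shell
# Combined dst MAC + dst IP match: not supported by the big buffer
# cloud filter mode, but usable when redirecting to traffic class 0
# (the main VSI), which takes the non-big-buffer path.
tc filter add dev eth0 protocol ip parent ffff: \
  prio 4 flower dst_mac 3c:fd:fe:a0:d6:70 dst_ip 192.168.3.5/32 skip_sw \
  action mirred ingress redirect dev eth0 tclass 0
```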

v3: Cleaned up some lengthy function names. Changed ipv6 address to
__be32 array instead of u8 array. Used macro for IP version. Minor
formatting changes.
v2:
1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
2. Moved dev_info for add/deleting cloud filters in else condition
3. Fixed some format specifier in dev_err logs
4. Refactored i40e_get_capabilities to take an additional
   list_type parameter and use it to query device and function
   level capabilities.
5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
   to verify if redirect to a traffic class is supported.
6. Added comments for Geneve fix in cloud filter big buffer AQ
   function definitions.
7. Cleaned up setup_tc interface to rebase and work with Jiri's
   updates, separate function to process tc cls flower offloads.
8. Changes to make Flow Director Sideband and Cloud filters mutually
   exclusive.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
 drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
 drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
 drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
 drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
 .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
 7 files changed, 1202 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 6018fb6..b110519 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -55,6 +55,8 @@
 #include <linux/net_tstamp.h>
 #include <linux/ptp_clock_kernel.h>
 #include <net/pkt_cls.h>
+#include <net/tc_act/tc_gact.h>
+#include <net/tc_act/tc_mirred.h>
 #include "i40e_type.h"
 #include "i40e_prototype.h"
 #include "i40e_client.h"
@@ -252,9 +254,52 @@ struct i40e_fdir_filter {
 	u32 fd_id;
 };
 
+#define IPV4_VERSION 4
+#define IPV6_VERSION 6
+
+#define I40E_CLOUD_FIELD_OMAC	0x01
+#define I40E_CLOUD_FIELD_IMAC	0x02
+#define I40E_CLOUD_FIELD_IVLAN	0x04
+#define I40E_CLOUD_FIELD_TEN_ID	0x08
+#define I40E_CLOUD_FIELD_IIP	0x10
+
+#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
+#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
+						 I40E_CLOUD_FIELD_IVLAN)
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
+						 I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
+						  I40E_CLOUD_FIELD_IMAC | \
+						  I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
+						   I40E_CLOUD_FIELD_IVLAN | \
+						   I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
+
 struct i40e_cloud_filter {
 	struct hlist_node cloud_node;
 	unsigned long cookie;
+	/* cloud filter input set follows */
+	u8 dst_mac[ETH_ALEN];
+	u8 src_mac[ETH_ALEN];
+	__be16 vlan_id;
+	__be32 dst_ip;
+	__be32 src_ip;
+	__be32 dst_ipv6[4];
+	__be32 src_ipv6[4];
+	__be16 dst_port;
+	__be16 src_port;
+	u32 ip_version;
+	u8 ip_proto;	/* IPPROTO value */
+	/* L4 port type: src or destination port */
+#define I40E_CLOUD_FILTER_PORT_SRC	0x01
+#define I40E_CLOUD_FILTER_PORT_DEST	0x02
+	u8 port_type;
+	u32 tenant_id;
+	u8 flags;
+#define I40E_CLOUD_TNL_TYPE_NONE	0xff
+	u8 tunnel_type;
 	u16 seid;	/* filter control */
 };
 
@@ -491,6 +536,8 @@ struct i40e_pf {
 #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
 #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
 #define I40E_FLAG_TC_MQPRIO			BIT(29)
+#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
+#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
 
 	struct i40e_client_instance *cinst;
 	bool stat_offsets_loaded;
@@ -573,6 +620,8 @@ struct i40e_pf {
 	u16 phy_led_val;
 
 	u16 override_q_count;
+	u16 last_sw_conf_flags;
+	u16 last_sw_conf_valid_flags;
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index 2e567c2..feb3d42 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
 		struct {
 			u8 data[16];
 		} v6;
+		struct {
+			__le16 data[8];
+		} raw_v6;
 	} ipaddr;
 	__le16	flags;
 #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index 9567702..d9c9665 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
 
 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
 				   track_id, &offset, &info, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_add_cloud_filters
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to add cloud filters to
+ * @filters: Buffer which contains the filters to be added
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Set the cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
+ *
+ **/
+enum i40e_status_code
+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	enum i40e_status_code status;
+	u16 buff_len;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_add_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_add_cloud_filters_bb
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to add cloud filters to
+ * @filters: Buffer which contains the filters in big buffer to be added
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Set the big buffer cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
+ *
+ **/
+i40e_status
+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	i40e_status status;
+	u16 buff_len;
+	int i;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_add_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
+
+	for (i = 0; i < filter_count; i++) {
+		u16 tnl_type;
+		u32 ti;
+
+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
+
+		/* For Geneve, the VNI sits one byte higher in the
+		 * tenant_id field than the Tenant ID does for the rest
+		 * of the tunnel types.
+		 */
+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+			ti = le32_to_cpu(filters[i].element.tenant_id);
+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
+		}
+	}
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_rem_cloud_filters
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
+ *
+ **/
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	enum i40e_status_code status;
+	u16 buff_len;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_remove_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_rem_cloud_filters_bb
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters in big buffer to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the big buffer cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
+ *
+ **/
+i40e_status
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	i40e_status status;
+	u16 buff_len;
+	int i;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_remove_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
+
+	for (i = 0; i < filter_count; i++) {
+		u16 tnl_type;
+		u32 ti;
+
+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
+
+		/* For Geneve, the VNI sits one byte higher in the
+		 * tenant_id field than the Tenant ID does for the rest
+		 * of the tunnel types.
+		 */
+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+			ti = le32_to_cpu(filters[i].element.tenant_id);
+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
+		}
+	}
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
 	return status;
 }
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index afcf08a..96ee608 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
 static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
 static void i40e_fdir_sb_setup(struct i40e_pf *pf);
 static int i40e_veb_get_bw_info(struct i40e_veb *veb);
+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+				     struct i40e_cloud_filter *filter,
+				     bool add);
+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+					     struct i40e_cloud_filter *filter,
+					     bool add);
+static int i40e_get_capabilities(struct i40e_pf *pf,
+				 enum i40e_admin_queue_opc list_type);
+
 
 /* i40e_pci_tbl - PCI Device ID Table
  *
@@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
  **/
 static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 {
+	enum i40e_admin_queue_err last_aq_status;
+	struct i40e_cloud_filter *cfilter;
 	struct i40e_channel *ch, *ch_tmp;
+	struct i40e_pf *pf = vsi->back;
+	struct hlist_node *node;
 	int ret, i;
 
 	/* Reset rss size that was stored when reconfiguring rss for
@@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 				 "Failed to reset tx rate for ch->seid %u\n",
 				 ch->seid);
 
+		/* delete cloud filters associated with this channel */
+		hlist_for_each_entry_safe(cfilter, node,
+					  &pf->cloud_filter_list, cloud_node) {
+			if (cfilter->seid != ch->seid)
+				continue;
+
+			hash_del(&cfilter->cloud_node);
+			if (cfilter->dst_port)
+				ret = i40e_add_del_cloud_filter_big_buf(vsi,
+									cfilter,
+									false);
+			else
+				ret = i40e_add_del_cloud_filter(vsi, cfilter,
+								false);
+			last_aq_status = pf->hw.aq.asq_last_status;
+			if (ret)
+				dev_info(&pf->pdev->dev,
+					 "Failed to delete cloud filter, err %s aq_err %s\n",
+					 i40e_stat_str(&pf->hw, ret),
+					 i40e_aq_str(&pf->hw, last_aq_status));
+			kfree(cfilter);
+		}
+
 		/* delete VSI from FW */
 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
 					     NULL);
@@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
 }
 
 /**
+ * i40e_validate_and_set_switch_mode - sets up switch mode correctly
+ * @vsi: ptr to VSI which has PF backing
+ * @l4type: true for TCP and false for UDP
+ * @port_type: true if port is destination and false if port is source
+ *
+ * Sets up the switch mode correctly if it needs to be changed, and
+ * verifies that only allowed modes are requested.
+ **/
+static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
+					     bool port_type)
+{
+	u8 mode;
+	struct i40e_pf *pf = vsi->back;
+	struct i40e_hw *hw = &pf->hw;
+	int ret;
+
+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
+	if (ret)
+		return -EINVAL;
+
+	if (hw->dev_caps.switch_mode) {
+		/* if switch mode is set, support mode2 (non-tunneled for
+		 * cloud filter) for now
+		 */
+		u32 switch_mode = hw->dev_caps.switch_mode &
+							I40E_SWITCH_MODE_MASK;
+		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
+			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
+				return 0;
+			dev_err(&pf->pdev->dev,
+				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
+				hw->dev_caps.switch_mode);
+			return -EINVAL;
+		}
+	}
+
+	/* port_type: true for destination port and false for source port
+	 * For now, supports only destination port type
+	 */
+	if (!port_type) {
+		dev_err(&pf->pdev->dev, "src port type not supported\n");
+		return -EINVAL;
+	}
+
+	/* Set Bit 7 to be valid */
+	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
+
+	/* Set L4type to both TCP and UDP support */
+	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
+
+	/* Set cloud filter mode */
+	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
+
+	/* Prep mode field for set_switch_config */
+	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
+					pf->last_sw_conf_valid_flags,
+					mode, NULL);
+	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
+		dev_err(&pf->pdev->dev,
+			"couldn't set switch config bits, err %s aq_err %s\n",
+			i40e_stat_str(hw, ret),
+			i40e_aq_str(hw,
+				    hw->aq.asq_last_status));
+
+	return ret;
+}
+
+/**
  * i40e_create_queue_channel - function to create channel
  * @vsi: VSI to be configured
  * @ch: ptr to channel (it contains channel specific params)
@@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
 	return ret;
 }
 
+/**
+ * i40e_set_cld_element - sets cloud filter element data
+ * @filter: cloud filter rule
+ * @cld: ptr to cloud filter element data
+ *
+ * This is a helper function to copy data into the cloud filter element
+ **/
+static inline void
+i40e_set_cld_element(struct i40e_cloud_filter *filter,
+		     struct i40e_aqc_cloud_filters_element_data *cld)
+{
+	int i, j;
+	u32 ipa;
+
+	memset(cld, 0, sizeof(*cld));
+	ether_addr_copy(cld->outer_mac, filter->dst_mac);
+	ether_addr_copy(cld->inner_mac, filter->src_mac);
+
+	if (filter->ip_version == IPV6_VERSION) {
+#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
+		for (i = 0, j = 0; i < 4; i++, j += 2) {
+			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
+			ipa = cpu_to_le32(ipa);
+			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
+		}
+	} else {
+		ipa = be32_to_cpu(filter->dst_ip);
+		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
+	}
+
+	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
+
+	/* tenant_id is not supported by FW now, once the support is enabled
+	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
+	 */
+	if (filter->tenant_id)
+		return;
+}
+
+/**
+ * i40e_add_del_cloud_filter - Add/del cloud filter
+ * @vsi: pointer to VSI
+ * @filter: cloud filter rule
+ * @add: if true, add, if false, delete
+ *
+ * Add or delete a cloud filter for a specific flow spec.
+ * Returns 0 if the filter was successfully added or deleted.
+ **/
+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+				     struct i40e_cloud_filter *filter, bool add)
+{
+	struct i40e_aqc_cloud_filters_element_data cld_filter;
+	struct i40e_pf *pf = vsi->back;
+	int ret;
+	static const u16 flag_table[128] = {
+		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
+			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
+		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
+			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
+		[I40E_CLOUD_FILTER_FLAGS_IIP] =
+			I40E_AQC_ADD_CLOUD_FILTER_IIP,
+	};
+
+	if (filter->flags >= ARRAY_SIZE(flag_table))
+		return I40E_ERR_CONFIG;
+
+	/* copy element needed to add cloud filter from filter */
+	i40e_set_cld_element(filter, &cld_filter);
+
+	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
+		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
+					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
+
+	if (filter->ip_version == IPV6_VERSION)
+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
+						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
+	else
+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
+						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
+
+	if (add)
+		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
+						&cld_filter, 1);
+	else
+		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
+						&cld_filter, 1);
+	if (ret)
+		dev_dbg(&pf->pdev->dev,
+			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
+			add ? "add" : "delete", ntohs(filter->dst_port), ret,
+			pf->hw.aq.asq_last_status);
+	else
+		dev_info(&pf->pdev->dev,
+			 "%s cloud filter for VSI: %d\n",
+			 add ? "Added" : "Deleted", filter->seid);
+	return ret;
+}
+
+/**
+ * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
+ * @vsi: pointer to VSI
+ * @filter: cloud filter rule
+ * @add: if true, add, if false, delete
+ *
+ * Add or delete a cloud filter for a specific flow spec using big buffer.
+ * Returns 0 if the filter was successfully added or deleted.
+ **/
+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+					     struct i40e_cloud_filter *filter,
+					     bool add)
+{
+	struct i40e_aqc_cloud_filters_element_bb cld_filter;
+	struct i40e_pf *pf = vsi->back;
+	int ret;
+
+	/* Both (Outer/Inner) valid mac_addr are not supported */
+	if (is_valid_ether_addr(filter->dst_mac) &&
+	    is_valid_ether_addr(filter->src_mac))
+		return -EINVAL;
+
+	/* Make sure the L4 port is specified, otherwise bail out; a
+	 * channel-specific cloud filter needs a non-zero 'L4 port'
+	 */
+	if (!filter->dst_port)
+		return -EINVAL;
+
+	/* adding filter using src_port/src_ip is not supported at this stage */
+	if (filter->src_port || filter->src_ip ||
+	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
+		return -EINVAL;
+
+	/* copy element needed to add cloud filter from filter */
+	i40e_set_cld_element(filter, &cld_filter.element);
+
+	if (is_valid_ether_addr(filter->dst_mac) ||
+	    is_valid_ether_addr(filter->src_mac) ||
+	    is_multicast_ether_addr(filter->dst_mac) ||
+	    is_multicast_ether_addr(filter->src_mac)) {
+		/* MAC + IP : unsupported mode */
+		if (filter->dst_ip)
+			return -EINVAL;
+
+		/* since we validated that L4 port must be valid before
+		 * we get here, start with respective "flags" value
+		 * and update if vlan is present or not
+		 */
+		cld_filter.element.flags =
+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
+
+		if (filter->vlan_id) {
+			cld_filter.element.flags =
+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
+		}
+
+	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
+		cld_filter.element.flags =
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
+		if (filter->ip_version == IPV6_VERSION)
+			cld_filter.element.flags |=
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
+		else
+			cld_filter.element.flags |=
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
+	} else {
+		dev_err(&pf->pdev->dev,
+			"either mac or ip has to be valid for cloud filter\n");
+		return -EINVAL;
+	}
+
+	/* Now copy L4 port in Byte 6..7 in general fields */
+	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
+						be16_to_cpu(filter->dst_port);
+
+	if (add) {
+		bool proto_type, port_type;
+
+		proto_type = filter->ip_proto == IPPROTO_TCP;
+		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
+
+		/* For now, src port based cloud filter for channel is not
+		 * supported
+		 */
+		if (!port_type) {
+			dev_err(&pf->pdev->dev,
+				"unsupported port type (src port)\n");
+			return -EOPNOTSUPP;
+		}
+
+		/* Validate current device switch mode, change if necessary */
+		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
+							port_type);
+		if (ret) {
+			dev_err(&pf->pdev->dev,
+				"failed to set switch mode, ret %d\n",
+				ret);
+			return ret;
+		}
+
+		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
+						   &cld_filter, 1);
+	} else {
+		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
+						   &cld_filter, 1);
+	}
+
+	if (ret)
+		dev_dbg(&pf->pdev->dev,
+			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
+			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
+	else
+		dev_info(&pf->pdev->dev,
+			 "%s cloud filter for VSI: %d, L4 port: %d\n",
+			 add ? "Added" : "Deleted", filter->seid,
+			 ntohs(filter->dst_port));
+	return ret;
+}
+
+/**
+ * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
+ * @vsi: Pointer to VSI
+ * @f: Pointer to struct tc_cls_flower_offload
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
+				 struct tc_cls_flower_offload *f,
+				 struct i40e_cloud_filter *filter)
+{
+	struct i40e_pf *pf = vsi->back;
+	u16 addr_type = 0;
+	u8 field_flags = 0;
+
+	if (f->dissector->used_keys &
+	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
+	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
+	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
+		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
+			f->dissector->used_keys);
+		return -EOPNOTSUPP;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_dissector_key_keyid *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
+						  f->key);
+
+		struct flow_dissector_key_keyid *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
+						  f->mask);
+
+		if (mask->keyid != 0)
+			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
+
+		filter->tenant_id = be32_to_cpu(key->keyid);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_dissector_key_basic *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_BASIC,
+						  f->key);
+
+		filter->ip_proto = key->ip_proto;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_dissector_key_eth_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->key);
+
+		struct flow_dissector_key_eth_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->mask);
+
+		/* use is_broadcast and is_zero to check for all 0xff or all 0 */
+		if (!is_zero_ether_addr(mask->dst)) {
+			if (is_broadcast_ether_addr(mask->dst)) {
+				field_flags |= I40E_CLOUD_FIELD_OMAC;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
+					mask->dst);
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (!is_zero_ether_addr(mask->src)) {
+			if (is_broadcast_ether_addr(mask->src)) {
+				field_flags |= I40E_CLOUD_FIELD_IMAC;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
+					mask->src);
+				return I40E_ERR_CONFIG;
+			}
+		}
+		ether_addr_copy(filter->dst_mac, key->dst);
+		ether_addr_copy(filter->src_mac, key->src);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_dissector_key_vlan *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_VLAN,
+						  f->key);
+		struct flow_dissector_key_vlan *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_VLAN,
+						  f->mask);
+
+		if (mask->vlan_id) {
+			if (mask->vlan_id == VLAN_VID_MASK) {
+				field_flags |= I40E_CLOUD_FIELD_IVLAN;
+
+			} else {
+				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
+					mask->vlan_id);
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		filter->vlan_id = cpu_to_be16(key->vlan_id);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_dissector_key_control *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_CONTROL,
+						  f->key);
+
+		addr_type = key->addr_type;
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+		struct flow_dissector_key_ipv4_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv4_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->mask);
+
+		if (mask->dst) {
+			if (mask->dst == cpu_to_be32(0xffffffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
+					be32_to_cpu(mask->dst));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (mask->src) {
+			if (mask->src == cpu_to_be32(0xffffffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
+					be32_to_cpu(mask->src));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
+			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
+			return I40E_ERR_CONFIG;
+		}
+		filter->dst_ip = key->dst;
+		filter->src_ip = key->src;
+		filter->ip_version = IPV4_VERSION;
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
+		struct flow_dissector_key_ipv6_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv6_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->mask);
+
+		/* src and dest IPV6 address should not be LOOPBACK
+		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
+		 */
+		if (ipv6_addr_loopback(&key->dst) ||
+		    ipv6_addr_loopback(&key->src)) {
+			dev_err(&pf->pdev->dev,
+				"Bad ipv6, addr is LOOPBACK\n");
+			return I40E_ERR_CONFIG;
+		}
+		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
+			field_flags |= I40E_CLOUD_FIELD_IIP;
+
+		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
+		       sizeof(filter->src_ipv6));
+		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
+		       sizeof(filter->dst_ipv6));
+
+		/* mark it as IPv6 filter, to be used later */
+		filter->ip_version = IPV6_VERSION;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_dissector_key_ports *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->key);
+		struct flow_dissector_key_ports *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->mask);
+
+		if (mask->src) {
+			if (mask->src == cpu_to_be16(0xffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
+					be16_to_cpu(mask->src));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (mask->dst) {
+			if (mask->dst == cpu_to_be16(0xffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
+					be16_to_cpu(mask->dst));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		filter->dst_port = key->dst;
+		filter->src_port = key->src;
+
+		/* For now, only the destination port is supported */
+		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
+
+		switch (filter->ip_proto) {
+		case IPPROTO_TCP:
+		case IPPROTO_UDP:
+			break;
+		default:
+			dev_err(&pf->pdev->dev,
+				"Only UDP and TCP transport are supported\n");
+			return -EINVAL;
+		}
+	}
+	filter->flags = field_flags;
+	return 0;
+}
+
+/**
+ * i40e_handle_redirect_action - Forward to a traffic class on the device
+ * @vsi: Pointer to VSI
+ * @ifindex: ifindex of the device to forward to
+ * @tc: traffic class index on the device
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
+				       struct i40e_cloud_filter *filter)
+{
+	struct i40e_channel *ch, *ch_tmp;
+
+	/* redirect to a traffic class on the same device */
+	if (vsi->netdev->ifindex == ifindex) {
+		if (tc == 0) {
+			filter->seid = vsi->seid;
+			return 0;
+		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
+			if (!filter->dst_port) {
+				dev_err(&vsi->back->pdev->dev,
+					"Specify destination port to redirect to traffic class that is not default\n");
+				return -EINVAL;
+			}
+			if (list_empty(&vsi->ch_list))
+				return -EINVAL;
+			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
+						 list) {
+				if (ch->seid == vsi->tc_seid_map[tc])
+					filter->seid = ch->seid;
+			}
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
+/**
+ * i40e_parse_tc_actions - Parse tc actions
+ * @vsi: Pointer to VSI
+ * @exts: Pointer to tc actions (struct tcf_exts)
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
+				 struct i40e_cloud_filter *filter)
+{
+	const struct tc_action *a;
+	LIST_HEAD(actions);
+	int err;
+
+	if (!tcf_exts_has_actions(exts))
+		return -EINVAL;
+
+	tcf_exts_to_list(exts, &actions);
+	list_for_each_entry(a, &actions, list) {
+		/* Drop action */
+		if (is_tcf_gact_shot(a)) {
+			dev_err(&vsi->back->pdev->dev,
+				"Cloud filters do not support the drop action.\n");
+			return -EOPNOTSUPP;
+		}
+
+		/* Redirect to a traffic class on the same device */
+		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
+			int ifindex = tcf_mirred_ifindex(a);
+			u8 tc = tcf_mirred_tc(a);
+
+			err = i40e_handle_redirect_action(vsi, ifindex, tc,
+							  filter);
+			if (err == 0)
+				return err;
+		}
+	}
+	return -EINVAL;
+}
+
+/**
+ * i40e_configure_clsflower - Configure tc flower filters
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ *
+ **/
+static int i40e_configure_clsflower(struct i40e_vsi *vsi,
+				    struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct i40e_pf *pf = vsi->back;
+	int err = 0;
+
+	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
+	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
+		return -EBUSY;
+
+	if (pf->fdir_pf_active_filters ||
+	    (!hlist_empty(&pf->fdir_filter_list))) {
+		dev_err(&vsi->back->pdev->dev,
+			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
+		return -EINVAL;
+	}
+
+	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
+		dev_err(&vsi->back->pdev->dev,
+			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
+		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+	}
+
+	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
+	if (!filter)
+		return -ENOMEM;
+
+	filter->cookie = cls_flower->cookie;
+
+	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
+	if (err < 0)
+		goto err;
+
+	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
+	if (err < 0)
+		goto err;
+
+	/* Add cloud filter */
+	if (filter->dst_port)
+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
+	else
+		err = i40e_add_del_cloud_filter(vsi, filter, true);
+
+	if (err) {
+		dev_err(&pf->pdev->dev,
+			"Failed to add cloud filter, err %s\n",
+			i40e_stat_str(&pf->hw, err));
+		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
+		goto err;
+	}
+
+	/* add filter to the ordered list */
+	INIT_HLIST_NODE(&filter->cloud_node);
+
+	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
+
+	pf->num_cloud_filters++;
+
+	return err;
+err:
+	kfree(filter);
+	return err;
+}
+
+/**
+ * i40e_find_cloud_filter - Find the cloud filter in the list
+ * @vsi: Pointer to VSI
+ * @cookie: filter specific cookie
+ *
+ **/
+static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
+							unsigned long *cookie)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct hlist_node *node2;
+
+	hlist_for_each_entry_safe(filter, node2,
+				  &vsi->back->cloud_filter_list, cloud_node)
+		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
+			return filter;
+	return NULL;
+}
+
+/**
+ * i40e_delete_clsflower - Remove tc flower filters
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ *
+ **/
+static int i40e_delete_clsflower(struct i40e_vsi *vsi,
+				 struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct i40e_pf *pf = vsi->back;
+	int err = 0;
+
+	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
+
+	if (!filter)
+		return -EINVAL;
+
+	hash_del(&filter->cloud_node);
+
+	if (filter->dst_port)
+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
+	else
+		err = i40e_add_del_cloud_filter(vsi, filter, false);
+	if (err) {
+		kfree(filter);
+		dev_err(&pf->pdev->dev,
+			"Failed to delete cloud filter, err %s\n",
+			i40e_stat_str(&pf->hw, err));
+		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
+	}
+
+	kfree(filter);
+	pf->num_cloud_filters--;
+
+	if (!pf->num_cloud_filters)
+		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
+		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
+			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+		}
+	return 0;
+}
+
+/**
+ * i40e_setup_tc_cls_flower - flower classifier offloads
+ * @netdev: net device to configure
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ **/
+static int i40e_setup_tc_cls_flower(struct net_device *netdev,
+				    struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_netdev_priv *np = netdev_priv(netdev);
+	struct i40e_vsi *vsi = np->vsi;
+
+	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
+	    cls_flower->common.chain_index)
+		return -EOPNOTSUPP;
+
+	switch (cls_flower->command) {
+	case TC_CLSFLOWER_REPLACE:
+		return i40e_configure_clsflower(vsi, cls_flower);
+	case TC_CLSFLOWER_DESTROY:
+		return i40e_delete_clsflower(vsi, cls_flower);
+	case TC_CLSFLOWER_STATS:
+		return -EOPNOTSUPP;
+	default:
+		return -EINVAL;
+	}
+}
+
 static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 			   void *type_data)
 {
-	if (type != TC_SETUP_MQPRIO)
+	switch (type) {
+	case TC_SETUP_MQPRIO:
+		return i40e_setup_tc(netdev, type_data);
+	case TC_SETUP_CLSFLOWER:
+		return i40e_setup_tc_cls_flower(netdev, type_data);
+	default:
 		return -EOPNOTSUPP;
-
-	return i40e_setup_tc(netdev, type_data);
+	}
 }
 
 /**
@@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
 		kfree(cfilter);
 	}
 	pf->num_cloud_filters = 0;
+
+	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
+	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
+		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+	}
 }
 
 /**
@@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
  * i40e_get_capabilities - get info about the HW
  * @pf: the PF struct
  **/
-static int i40e_get_capabilities(struct i40e_pf *pf)
+static int i40e_get_capabilities(struct i40e_pf *pf,
+				 enum i40e_admin_queue_opc list_type)
 {
 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
 	u16 data_size;
@@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
 
 		/* this loads the data into the hw struct for us */
 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
-					    &data_size,
-					    i40e_aqc_opc_list_func_capabilities,
-					    NULL);
+						    &data_size, list_type,
+						    NULL);
 		/* data loaded, buffer no longer needed */
 		kfree(cap_buf);
 
@@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
 		}
 	} while (err);
 
-	if (pf->hw.debug_mask & I40E_DEBUG_USER)
-		dev_info(&pf->pdev->dev,
-			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
-			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
-			 pf->hw.func_caps.num_msix_vectors,
-			 pf->hw.func_caps.num_msix_vectors_vf,
-			 pf->hw.func_caps.fd_filters_guaranteed,
-			 pf->hw.func_caps.fd_filters_best_effort,
-			 pf->hw.func_caps.num_tx_qp,
-			 pf->hw.func_caps.num_vsis);
-
+	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
+		if (list_type == i40e_aqc_opc_list_func_capabilities) {
+			dev_info(&pf->pdev->dev,
+				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
+				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
+				 pf->hw.func_caps.num_msix_vectors,
+				 pf->hw.func_caps.num_msix_vectors_vf,
+				 pf->hw.func_caps.fd_filters_guaranteed,
+				 pf->hw.func_caps.fd_filters_best_effort,
+				 pf->hw.func_caps.num_tx_qp,
+				 pf->hw.func_caps.num_vsis);
+		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
+			dev_info(&pf->pdev->dev,
+				 "switch_mode=0x%04x, function_valid=0x%08x\n",
+				 pf->hw.dev_caps.switch_mode,
+				 pf->hw.dev_caps.valid_functions);
+			dev_info(&pf->pdev->dev,
+				 "SR-IOV=%d, num_vfs for all function=%u\n",
+				 pf->hw.dev_caps.sr_iov_1_1,
+				 pf->hw.dev_caps.num_vfs);
+			dev_info(&pf->pdev->dev,
+				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
+				 pf->hw.dev_caps.num_vsis,
+				 pf->hw.dev_caps.num_rx_qp,
+				 pf->hw.dev_caps.num_tx_qp);
+		}
+	}
+	if (list_type == i40e_aqc_opc_list_func_capabilities) {
 #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
 		       + pf->hw.func_caps.num_vfs)
-	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
-		dev_info(&pf->pdev->dev,
-			 "got num_vsis %d, setting num_vsis to %d\n",
-			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
-		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
+		if (pf->hw.revision_id == 0 &&
+		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
+			dev_info(&pf->pdev->dev,
+				 "got num_vsis %d, setting num_vsis to %d\n",
+				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
+			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
+		}
 	}
-
 	return 0;
 }
 
@@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
 		if (!vsi) {
 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 			return;
 		}
 	}
@@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
 }
 
 /**
+ * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
+ * @vsi: PF main vsi
+ * @seid: seid of main or channel VSIs
+ *
+ * Rebuilds cloud filters associated with main VSI and channel VSIs if they
+ * existed before reset
+ **/
+static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
+{
+	struct i40e_cloud_filter *cfilter;
+	struct i40e_pf *pf = vsi->back;
+	struct hlist_node *node;
+	i40e_status ret;
+
+	/* Add cloud filters back if they exist */
+	if (hlist_empty(&pf->cloud_filter_list))
+		return 0;
+
+	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
+				  cloud_node) {
+		if (cfilter->seid != seid)
+			continue;
+
+		if (cfilter->dst_port)
+			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
+								true);
+		else
+			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
+
+		if (ret) {
+			dev_dbg(&pf->pdev->dev,
+				"Failed to rebuild cloud filter, err %s aq_err %s\n",
+				i40e_stat_str(&pf->hw, ret),
+				i40e_aq_str(&pf->hw,
+					    pf->hw.aq.asq_last_status));
+			return ret;
+		}
+	}
+	return 0;
+}
+
+/**
  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
  * @vsi: PF main vsi
  *
@@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
 						I40E_BW_CREDIT_DIVISOR,
 				ch->seid);
 		}
+		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
+		if (ret) {
+			dev_dbg(&vsi->back->pdev->dev,
+				"Failed to rebuild cloud filters for channel VSI %u\n",
+				ch->seid);
+			return ret;
+		}
 	}
 	return 0;
 }
@@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 		i40e_verify_eeprom(pf);
 
 	i40e_clear_pxe_mode(hw);
-	ret = i40e_get_capabilities(pf);
+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
 	if (ret)
 		goto end_core_reset;
 
@@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 			goto end_unlock;
 	}
 
+	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
+	if (ret)
+		goto end_unlock;
+
 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
 	 * for this main VSI if they exist
 	 */
@@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
 	    (pf->num_fdsb_msix == 0)) {
 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	}
 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
 	    (pf->num_vmdq_msix == 0)) {
@@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
 				       I40E_FLAG_FD_SB_ENABLED	|
 				       I40E_FLAG_FD_ATR_ENABLED	|
 				       I40E_FLAG_VMDQ_ENABLED);
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 
 			/* rework the queue expectations without MSIX */
 			i40e_determine_queue_usage(pf);
@@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
 		/* Enable filters and mark for reset */
 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
 			need_reset = true;
-		/* enable FD_SB only if there is MSI-X vector */
-		if (pf->num_fdsb_msix > 0)
+		/* enable FD_SB only if there is MSI-X vector and no cloud
+		 * filters exist
+		 */
+		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+		}
 	} else {
 		/* turn off filters, mark for reset and clear SW filter list */
 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
@@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
 		}
 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
+
 		/* reset fd counters */
 		pf->fd_add_err = 0;
 		pf->fd_atr_cnt = 0;
@@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
 		netdev->hw_features |= NETIF_F_NTUPLE;
 	hw_features = hw_enc_features		|
 		      NETIF_F_HW_VLAN_CTAG_TX	|
-		      NETIF_F_HW_VLAN_CTAG_RX;
+		      NETIF_F_HW_VLAN_CTAG_RX	|
+		      NETIF_F_HW_TC;
 
 	netdev->hw_features |= hw_features;
 
@@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 	*/
 
 	if ((pf->hw.pf_id == 0) &&
-	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
+	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+		pf->last_sw_conf_flags = flags;
+	}
 
 	if (pf->hw.pf_id == 0) {
 		u16 valid_flags;
@@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 					     pf->hw.aq.asq_last_status));
 			/* not a fatal problem, just keep going */
 		}
+		pf->last_sw_conf_valid_flags = valid_flags;
 	}
 
 	/* first time setup */
@@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			       I40E_FLAG_DCB_ENABLED	|
 			       I40E_FLAG_SRIOV_ENABLED	|
 			       I40E_FLAG_VMDQ_ENABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
 				  I40E_FLAG_FD_SB_ENABLED |
 				  I40E_FLAG_FD_ATR_ENABLED |
@@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			       I40E_FLAG_FD_ATR_ENABLED	|
 			       I40E_FLAG_DCB_ENABLED	|
 			       I40E_FLAG_VMDQ_ENABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	} else {
 		/* Not enough queues for all TCs */
 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
@@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			queues_left -= 1; /* save 1 queue for FD */
 		} else {
 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
 		}
 	}
@@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
 
 	i40e_clear_pxe_mode(hw);
-	err = i40e_get_capabilities(pf);
+	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
 	if (err)
 		goto err_adminq_setup;
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 92869f5..3bb6659 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
 		struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
 				   struct i40e_asq_cmd_details *cmd_details);
+i40e_status
+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count);
+enum i40e_status_code
+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count);
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count);
+i40e_status
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count);
 i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
 			       struct i40e_lldp_variables *lldp_cfg);
 /* i40e_common */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index c019f46..af38881 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
 #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
 #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
 #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
+#define I40E_SWITCH_MODE_MASK		0xF
 
 	u32  management_mode;
 	u32  mng_protocols_over_mctp;
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
index b8c78bf..4fe27f0 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
@@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
 		struct {
 			u8 data[16];
 		} v6;
+		struct {
+			__le16 data[8];
+		} raw_v6;
 	} ipaddr;
 	__le16	flags;
 #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0


* [Intel-wired-lan] [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
@ 2017-09-13  9:59   ` Amritha Nambiar
  0 siblings, 0 replies; 30+ messages in thread
From: Amritha Nambiar @ 2017-09-13  9:59 UTC (permalink / raw)
  To: intel-wired-lan

This patch enables tc-flower based hardware offloads. A tc flower
filter provided by the kernel is configured as a driver-specific
cloud filter. The patch implements the functions and admin queue
commands needed to support cloud filters in the driver and adds
cloud filters to configure these tc-flower filters.

The only action supported is to redirect packets to a traffic class
on the same device.

# tc qdisc add dev eth0 ingress
# ethtool -K eth0 hw-tc-offload on

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
  action mirred ingress redirect dev eth0 tclass 0

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw\
  action mirred ingress redirect dev eth0 tclass 1

# tc filter add dev eth0 protocol ipv6 parent ffff:\
  prio 3 flower dst_ip fe8::200:1\
  ip_proto udp dst_port 66 skip_sw\
  action mirred ingress redirect dev eth0 tclass 1

Delete tc flower filter:
Example:

# tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
# tc filter del dev eth0 parent ffff:

Flow Director Sideband is disabled while cloud filters are configured
via tc-flower and remains disabled as long as any cloud filter exists.

When cloud filters are added using the enhanced big buffer cloud
filter mode of the underlying switch, the following matches are
unsupported:
1. Source port and source IP
2. Combined MAC address and IP fields
3. Filters that do not specify an L4 port

These filter matches can, however, be used to redirect traffic to
the main VSI (tc 0), which does not require the enhanced big buffer
cloud filter support.
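For illustration, a combined MAC and IP match (unsupported in big
buffer mode) could still be offloaded by steering to tc 0; the device
name and addresses below are illustrative, not taken from the patch:

```shell
# Hypothetical example: combined dst_mac + dst_ip match redirected to
# tc 0 (the main VSI), which does not need big buffer cloud filters.
tc filter add dev eth0 protocol ip parent ffff: prio 4 flower \
    dst_mac 3c:fd:fe:a0:d6:70 dst_ip 192.168.3.5/32 skip_sw \
    action mirred ingress redirect dev eth0 tclass 0
```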

v3: Cleaned up some lengthy function names. Changed ipv6 address to
__be32 array instead of u8 array. Used macro for IP version. Minor
formatting changes.
v2:
1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
2. Moved dev_info for add/deleting cloud filters in else condition
3. Fixed some format specifier in dev_err logs
4. Refactored i40e_get_capabilities to take an additional
   list_type parameter and use it to query device and function
   level capabilities.
5. Fixed parsing of the tc redirect action to use is_tcf_mirred_tc()
   to verify that redirect to a traffic class is supported.
6. Added comments for Geneve fix in cloud filter big buffer AQ
   function definitions.
7. Cleaned up setup_tc interface to rebase and work with Jiri's
   updates, separate function to process tc cls flower offloads.
8. Changes to make Flow Director Sideband and Cloud filters mutually
   exclusive.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
 drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
 drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
 drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
 drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
 drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
 .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
 7 files changed, 1202 insertions(+), 30 deletions(-)
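
The big buffer admin queue helpers in this patch shift the Geneve
tenant ID left by one byte before sending the command; a minimal
sketch of that transform (hypothetical helper name, not driver code):

```python
def geneve_adjust_tenant_id(tenant_id: int) -> int:
    """Place the Geneve VNI one byte higher in the 32-bit tenant_id
    field, as the cloud filter command expects for Geneve tunnels;
    the mask keeps the result within 32 bits."""
    return (tenant_id << 8) & 0xFFFFFFFF

print(hex(geneve_adjust_tenant_id(0x123456)))  # 0x12345600
```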

diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 6018fb6..b110519 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -55,6 +55,8 @@
 #include <linux/net_tstamp.h>
 #include <linux/ptp_clock_kernel.h>
 #include <net/pkt_cls.h>
+#include <net/tc_act/tc_gact.h>
+#include <net/tc_act/tc_mirred.h>
 #include "i40e_type.h"
 #include "i40e_prototype.h"
 #include "i40e_client.h"
@@ -252,9 +254,52 @@ struct i40e_fdir_filter {
 	u32 fd_id;
 };
 
+#define IPV4_VERSION 4
+#define IPV6_VERSION 6
+
+#define I40E_CLOUD_FIELD_OMAC	0x01
+#define I40E_CLOUD_FIELD_IMAC	0x02
+#define I40E_CLOUD_FIELD_IVLAN	0x04
+#define I40E_CLOUD_FIELD_TEN_ID	0x08
+#define I40E_CLOUD_FIELD_IIP	0x10
+
+#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
+#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
+						 I40E_CLOUD_FIELD_IVLAN)
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
+						 I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
+						  I40E_CLOUD_FIELD_IMAC | \
+						  I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
+						   I40E_CLOUD_FIELD_IVLAN | \
+						   I40E_CLOUD_FIELD_TEN_ID)
+#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
+
 struct i40e_cloud_filter {
 	struct hlist_node cloud_node;
 	unsigned long cookie;
+	/* cloud filter input set follows */
+	u8 dst_mac[ETH_ALEN];
+	u8 src_mac[ETH_ALEN];
+	__be16 vlan_id;
+	__be32 dst_ip;
+	__be32 src_ip;
+	__be32 dst_ipv6[4];
+	__be32 src_ipv6[4];
+	__be16 dst_port;
+	__be16 src_port;
+	u32 ip_version;
+	u8 ip_proto;	/* IPPROTO value */
+	/* L4 port type: src or destination port */
+#define I40E_CLOUD_FILTER_PORT_SRC	0x01
+#define I40E_CLOUD_FILTER_PORT_DEST	0x02
+	u8 port_type;
+	u32 tenant_id;
+	u8 flags;
+#define I40E_CLOUD_TNL_TYPE_NONE	0xff
+	u8 tunnel_type;
 	u16 seid;	/* filter control */
 };
 
@@ -491,6 +536,8 @@ struct i40e_pf {
 #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
 #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
 #define I40E_FLAG_TC_MQPRIO			BIT(29)
+#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
+#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
 
 	struct i40e_client_instance *cinst;
 	bool stat_offsets_loaded;
@@ -573,6 +620,8 @@ struct i40e_pf {
 	u16 phy_led_val;
 
 	u16 override_q_count;
+	u16 last_sw_conf_flags;
+	u16 last_sw_conf_valid_flags;
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index 2e567c2..feb3d42 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
 		struct {
 			u8 data[16];
 		} v6;
+		struct {
+			__le16 data[8];
+		} raw_v6;
 	} ipaddr;
 	__le16	flags;
 #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index 9567702..d9c9665 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
 
 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
 				   track_id, &offset, &info, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_add_cloud_filters
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to add cloud filters to
+ * @filters: Buffer which contains the filters to be added
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Set the cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
+ *
+ **/
+enum i40e_status_code
+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	enum i40e_status_code status;
+	u16 buff_len;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_add_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_add_cloud_filters_bb
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to add cloud filters to
+ * @filters: Buffer which contains the filters in big buffer to be added
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Set the big buffer cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
+ *
+ **/
+i40e_status
+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	i40e_status status;
+	u16 buff_len;
+	int i;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_add_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
+
+	for (i = 0; i < filter_count; i++) {
+		u16 tnl_type;
+		u32 ti;
+
+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
+
+		/* For Geneve, the VNI must sit one byte higher in the
+		 * tenant_id field than it does for the rest of the
+		 * tunnel types.
+		 */
+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+			ti = le32_to_cpu(filters[i].element.tenant_id);
+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
+		}
+	}
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_rem_cloud_filters
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
+ * of the function.
+ *
+ **/
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	enum i40e_status_code status;
+	u16 buff_len;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_remove_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
+	return status;
+}
+
+/**
+ * i40e_aq_rem_cloud_filters_bb
+ * @hw: pointer to the hardware structure
+ * @seid: VSI seid to remove cloud filters from
+ * @filters: Buffer which contains the filters in big buffer to be removed
+ * @filter_count: number of filters contained in the buffer
+ *
+ * Remove the big buffer cloud filters for a given VSI.  The contents of the
+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
+ * function.
+ *
+ **/
+i40e_status
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count)
+{
+	struct i40e_aq_desc desc;
+	struct i40e_aqc_add_remove_cloud_filters *cmd =
+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
+	i40e_status status;
+	u16 buff_len;
+	int i;
+
+	i40e_fill_default_direct_cmd_desc(&desc,
+					  i40e_aqc_opc_remove_cloud_filters);
+
+	buff_len = filter_count * sizeof(*filters);
+	desc.datalen = cpu_to_le16(buff_len);
+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+	cmd->num_filters = filter_count;
+	cmd->seid = cpu_to_le16(seid);
+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
+
+	for (i = 0; i < filter_count; i++) {
+		u16 tnl_type;
+		u32 ti;
+
+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
+
+		/* For Geneve, the VNI must sit one byte higher in the
+		 * tenant_id field than it does for the rest of the
+		 * tunnel types.
+		 */
+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
+			ti = le32_to_cpu(filters[i].element.tenant_id);
+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
+		}
+	}
+
+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
+
 	return status;
 }
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index afcf08a..96ee608 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
 static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
 static void i40e_fdir_sb_setup(struct i40e_pf *pf);
 static int i40e_veb_get_bw_info(struct i40e_veb *veb);
+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+				     struct i40e_cloud_filter *filter,
+				     bool add);
+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+					     struct i40e_cloud_filter *filter,
+					     bool add);
+static int i40e_get_capabilities(struct i40e_pf *pf,
+				 enum i40e_admin_queue_opc list_type);
+
 
 /* i40e_pci_tbl - PCI Device ID Table
  *
@@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
  **/
 static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 {
+	enum i40e_admin_queue_err last_aq_status;
+	struct i40e_cloud_filter *cfilter;
 	struct i40e_channel *ch, *ch_tmp;
+	struct i40e_pf *pf = vsi->back;
+	struct hlist_node *node;
 	int ret, i;
 
 	/* Reset rss size that was stored when reconfiguring rss for
@@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 				 "Failed to reset tx rate for ch->seid %u\n",
 				 ch->seid);
 
+		/* delete cloud filters associated with this channel */
+		hlist_for_each_entry_safe(cfilter, node,
+					  &pf->cloud_filter_list, cloud_node) {
+			if (cfilter->seid != ch->seid)
+				continue;
+
+			hash_del(&cfilter->cloud_node);
+			if (cfilter->dst_port)
+				ret = i40e_add_del_cloud_filter_big_buf(vsi,
+									cfilter,
+									false);
+			else
+				ret = i40e_add_del_cloud_filter(vsi, cfilter,
+								false);
+			last_aq_status = pf->hw.aq.asq_last_status;
+			if (ret)
+				dev_info(&pf->pdev->dev,
+					 "Failed to delete cloud filter, err %s aq_err %s\n",
+					 i40e_stat_str(&pf->hw, ret),
+					 i40e_aq_str(&pf->hw, last_aq_status));
+			kfree(cfilter);
+		}
+
 		/* delete VSI from FW */
 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
 					     NULL);
@@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
 }
 
 /**
+ * i40e_validate_and_set_switch_mode - sets up switch mode correctly
+ * @vsi: ptr to VSI which has PF backing
+ * @l4type: true for TCP and false for UDP
+ * @port_type: true if port is destination and false if port is source
+ *
+ * Sets up the switch mode correctly if it needs to be changed,
+ * restricting the change to the allowed modes.
+ **/
+static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
+					     bool port_type)
+{
+	u8 mode;
+	struct i40e_pf *pf = vsi->back;
+	struct i40e_hw *hw = &pf->hw;
+	int ret;
+
+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
+	if (ret)
+		return -EINVAL;
+
+	if (hw->dev_caps.switch_mode) {
+		/* if switch mode is set, support mode2 (non-tunneled for
+		 * cloud filter) for now
+		 */
+		u32 switch_mode = hw->dev_caps.switch_mode &
+							I40E_SWITCH_MODE_MASK;
+		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
+			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
+				return 0;
+			dev_err(&pf->pdev->dev,
+				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
+				hw->dev_caps.switch_mode);
+			return -EINVAL;
+		}
+	}
+
+	/* port_type: true for destination port and false for source port
+	 * For now, supports only destination port type
+	 */
+	if (!port_type) {
+		dev_err(&pf->pdev->dev, "src port type not supported\n");
+		return -EINVAL;
+	}
+
+	/* Set Bit 7 to be valid */
+	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
+
+	/* Set L4type to both TCP and UDP support */
+	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
+
+	/* Set cloud filter mode */
+	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
+
+	/* Prep mode field for set_switch_config */
+	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
+					pf->last_sw_conf_valid_flags,
+					mode, NULL);
+	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
+		dev_err(&pf->pdev->dev,
+			"couldn't set switch config bits, err %s aq_err %s\n",
+			i40e_stat_str(hw, ret),
+			i40e_aq_str(hw,
+				    hw->aq.asq_last_status));
+
+	return ret;
+}
+
+/**
  * i40e_create_queue_channel - function to create channel
  * @vsi: VSI to be configured
  * @ch: ptr to channel (it contains channel specific params)
@@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
 	return ret;
 }
 
+/**
+ * i40e_set_cld_element - sets cloud filter element data
+ * @filter: cloud filter rule
+ * @cld: ptr to cloud filter element data
+ *
+ * This is helper function to copy data into cloud filter element
+ **/
+static inline void
+i40e_set_cld_element(struct i40e_cloud_filter *filter,
+		     struct i40e_aqc_cloud_filters_element_data *cld)
+{
+	int i, j;
+	u32 ipa;
+
+	memset(cld, 0, sizeof(*cld));
+	ether_addr_copy(cld->outer_mac, filter->dst_mac);
+	ether_addr_copy(cld->inner_mac, filter->src_mac);
+
+	if (filter->ip_version == IPV6_VERSION) {
+#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
+		for (i = 0, j = 0; i < 4; i++, j += 2) {
+			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
+			ipa = cpu_to_le32(ipa);
+			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
+		}
+	} else {
+		ipa = be32_to_cpu(filter->dst_ip);
+		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
+	}
+
+	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
+
+	/* tenant_id is not supported by FW now, once the support is enabled
+	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
+	 */
+	if (filter->tenant_id)
+		return;
+}
+
+/**
+ * i40e_add_del_cloud_filter - Add/del cloud filter
+ * @vsi: pointer to VSI
+ * @filter: cloud filter rule
+ * @add: if true, add, if false, delete
+ *
+ * Add or delete a cloud filter for a specific flow spec.
+ * Returns 0 if the filter was successfully added or deleted.
+ **/
+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
+				     struct i40e_cloud_filter *filter, bool add)
+{
+	struct i40e_aqc_cloud_filters_element_data cld_filter;
+	struct i40e_pf *pf = vsi->back;
+	int ret;
+	static const u16 flag_table[128] = {
+		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
+			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
+		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
+			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
+		[I40E_CLOUD_FILTER_FLAGS_IIP] =
+			I40E_AQC_ADD_CLOUD_FILTER_IIP,
+	};
+
+	if (filter->flags >= ARRAY_SIZE(flag_table))
+		return I40E_ERR_CONFIG;
+
+	/* copy element needed to add cloud filter from filter */
+	i40e_set_cld_element(filter, &cld_filter);
+
+	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
+		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
+					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
+
+	if (filter->ip_version == IPV6_VERSION)
+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
+						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
+	else
+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
+						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
+
+	if (add)
+		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
+						&cld_filter, 1);
+	else
+		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
+						&cld_filter, 1);
+	if (ret)
+		dev_dbg(&pf->pdev->dev,
+			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
+			add ? "add" : "delete", filter->dst_port, ret,
+			pf->hw.aq.asq_last_status);
+	else
+		dev_info(&pf->pdev->dev,
+			 "%s cloud filter for VSI: %d\n",
+			 add ? "Added" : "Deleted", filter->seid);
+	return ret;
+}
+
+/**
+ * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
+ * @vsi: pointer to VSI
+ * @filter: cloud filter rule
+ * @add: if true, add, if false, delete
+ *
+ * Add or delete a cloud filter for a specific flow spec using big buffer.
+ * Returns 0 if the filter was successfully added or deleted.
+ **/
+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
+					     struct i40e_cloud_filter *filter,
+					     bool add)
+{
+	struct i40e_aqc_cloud_filters_element_bb cld_filter;
+	struct i40e_pf *pf = vsi->back;
+	int ret;
+
+	/* Both (Outer/Inner) valid mac_addr are not supported */
+	if (is_valid_ether_addr(filter->dst_mac) &&
+	    is_valid_ether_addr(filter->src_mac))
+		return -EINVAL;
+
+	/* Make sure a port is specified, otherwise bail out; a channel
+	 * specific cloud filter needs the 'L4 port' to be non-zero
+	 */
+	if (!filter->dst_port)
+		return -EINVAL;
+
+	/* adding filter using src_port/src_ip is not supported at this stage */
+	if (filter->src_port || filter->src_ip ||
+	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
+		return -EINVAL;
+
+	/* copy element needed to add cloud filter from filter */
+	i40e_set_cld_element(filter, &cld_filter.element);
+
+	if (is_valid_ether_addr(filter->dst_mac) ||
+	    is_valid_ether_addr(filter->src_mac) ||
+	    is_multicast_ether_addr(filter->dst_mac) ||
+	    is_multicast_ether_addr(filter->src_mac)) {
+		/* MAC + IP : unsupported mode */
+		if (filter->dst_ip)
+			return -EINVAL;
+
+		/* since we validated that L4 port must be valid before
+		 * we get here, start with respective "flags" value
+		 * and update if vlan is present or not
+		 */
+		cld_filter.element.flags =
+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
+
+		if (filter->vlan_id) {
+			cld_filter.element.flags =
+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
+		}
+
+	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
+		cld_filter.element.flags =
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
+		if (filter->ip_version == IPV6_VERSION)
+			cld_filter.element.flags |=
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
+		else
+			cld_filter.element.flags |=
+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
+	} else {
+		dev_err(&pf->pdev->dev,
+			"either mac or ip has to be valid for cloud filter\n");
+		return -EINVAL;
+	}
+
+	/* Now copy L4 port in Byte 6..7 in general fields */
+	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
+						be16_to_cpu(filter->dst_port);
+
+	if (add) {
+		bool proto_type, port_type;
+
+		proto_type = (filter->ip_proto == IPPROTO_TCP);
+		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
+
+		/* For now, src port based cloud filter for channel is not
+		 * supported
+		 */
+		if (!port_type) {
+			dev_err(&pf->pdev->dev,
+				"unsupported port type (src port)\n");
+			return -EOPNOTSUPP;
+		}
+
+		/* Validate current device switch mode, change if necessary */
+		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
+							port_type);
+		if (ret) {
+			dev_err(&pf->pdev->dev,
+				"failed to set switch mode, ret %d\n",
+				ret);
+			return ret;
+		}
+
+		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
+						   &cld_filter, 1);
+	} else {
+		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
+						   &cld_filter, 1);
+	}
+
+	if (ret)
+		dev_dbg(&pf->pdev->dev,
+			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
+			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
+	else
+		dev_info(&pf->pdev->dev,
+			 "%s cloud filter for VSI: %d, L4 port: %d\n",
+			 add ? "add" : "delete", filter->seid,
+			 ntohs(filter->dst_port));
+	return ret;
+}
+
+/**
+ * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
+				 struct tc_cls_flower_offload *f,
+				 struct i40e_cloud_filter *filter)
+{
+	struct i40e_pf *pf = vsi->back;
+	u16 addr_type = 0;
+	u8 field_flags = 0;
+
+	if (f->dissector->used_keys &
+	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
+	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
+	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
+	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
+		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
+			f->dissector->used_keys);
+		return -EOPNOTSUPP;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+		struct flow_dissector_key_keyid *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
+						  f->key);
+
+		struct flow_dissector_key_keyid *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
+						  f->mask);
+
+		if (mask->keyid != 0)
+			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
+
+		filter->tenant_id = be32_to_cpu(key->keyid);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+		struct flow_dissector_key_basic *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_BASIC,
+						  f->key);
+
+		filter->ip_proto = key->ip_proto;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+		struct flow_dissector_key_eth_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->key);
+
+		struct flow_dissector_key_eth_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
+						  f->mask);
+
+		/* use is_broadcast_ether_addr() and is_zero_ether_addr() to
+		 * check for an all-ones or all-zero mask
+		 */
+		if (!is_zero_ether_addr(mask->dst)) {
+			if (is_broadcast_ether_addr(mask->dst)) {
+				field_flags |= I40E_CLOUD_FIELD_OMAC;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
+					mask->dst);
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (!is_zero_ether_addr(mask->src)) {
+			if (is_broadcast_ether_addr(mask->src)) {
+				field_flags |= I40E_CLOUD_FIELD_IMAC;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
+					mask->src);
+				return I40E_ERR_CONFIG;
+			}
+		}
+		ether_addr_copy(filter->dst_mac, key->dst);
+		ether_addr_copy(filter->src_mac, key->src);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
+		struct flow_dissector_key_vlan *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_VLAN,
+						  f->key);
+		struct flow_dissector_key_vlan *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_VLAN,
+						  f->mask);
+
+		if (mask->vlan_id) {
+			if (mask->vlan_id == VLAN_VID_MASK) {
+				field_flags |= I40E_CLOUD_FIELD_IVLAN;
+
+			} else {
+				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
+					mask->vlan_id);
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		filter->vlan_id = cpu_to_be16(key->vlan_id);
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
+		struct flow_dissector_key_control *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_CONTROL,
+						  f->key);
+
+		addr_type = key->addr_type;
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+		struct flow_dissector_key_ipv4_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv4_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+						  f->mask);
+
+		if (mask->dst) {
+			if (mask->dst == cpu_to_be32(0xffffffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
+					be32_to_cpu(mask->dst));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (mask->src) {
+			if (mask->src == cpu_to_be32(0xffffffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
+					be32_to_cpu(mask->src));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
+			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
+			return I40E_ERR_CONFIG;
+		}
+		filter->dst_ip = key->dst;
+		filter->src_ip = key->src;
+		filter->ip_version = IPV4_VERSION;
+	}
+
+	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
+		struct flow_dissector_key_ipv6_addrs *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->key);
+		struct flow_dissector_key_ipv6_addrs *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+						  f->mask);
+
+		/* src and dest IPV6 address should not be LOOPBACK
+		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
+		 */
+		if (ipv6_addr_loopback(&key->dst) ||
+		    ipv6_addr_loopback(&key->src)) {
+			dev_err(&pf->pdev->dev,
+				"Bad ipv6, addr is LOOPBACK\n");
+			return I40E_ERR_CONFIG;
+		}
+		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
+			field_flags |= I40E_CLOUD_FIELD_IIP;
+
+		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
+		       sizeof(filter->src_ipv6));
+		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
+		       sizeof(filter->dst_ipv6));
+
+		/* mark it as IPv6 filter, to be used later */
+		filter->ip_version = IPV6_VERSION;
+	}
+
+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+		struct flow_dissector_key_ports *key =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->key);
+		struct flow_dissector_key_ports *mask =
+			skb_flow_dissector_target(f->dissector,
+						  FLOW_DISSECTOR_KEY_PORTS,
+						  f->mask);
+
+		if (mask->src) {
+			if (mask->src == cpu_to_be16(0xffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
+					be16_to_cpu(mask->src));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		if (mask->dst) {
+			if (mask->dst == cpu_to_be16(0xffff)) {
+				field_flags |= I40E_CLOUD_FIELD_IIP;
+			} else {
+				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
+					be16_to_cpu(mask->dst));
+				return I40E_ERR_CONFIG;
+			}
+		}
+
+		filter->dst_port = key->dst;
+		filter->src_port = key->src;
+
+		/* For now, only the destination port is supported */
+		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
+
+		switch (filter->ip_proto) {
+		case IPPROTO_TCP:
+		case IPPROTO_UDP:
+			break;
+		default:
+			dev_err(&pf->pdev->dev,
+				"Only UDP and TCP transports are supported\n");
+			return -EINVAL;
+		}
+	}
+	filter->flags = field_flags;
+	return 0;
+}
+
+/**
+ * i40e_handle_redirect_action: Forward to a traffic class on the device
+ * @vsi: Pointer to VSI
+ * @ifindex: ifindex of the device to forward to
+ * @tc: traffic class index on the device
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
+				       struct i40e_cloud_filter *filter)
+{
+	struct i40e_channel *ch, *ch_tmp;
+
+	/* redirect to a traffic class on the same device */
+	if (vsi->netdev->ifindex == ifindex) {
+		if (tc == 0) {
+			filter->seid = vsi->seid;
+			return 0;
+		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
+			if (!filter->dst_port) {
+				dev_err(&vsi->back->pdev->dev,
+					"Specify a destination port to redirect to a non-default traffic class\n");
+				return -EINVAL;
+			}
+			if (list_empty(&vsi->ch_list))
+				return -EINVAL;
+			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
+						 list) {
+				if (ch->seid == vsi->tc_seid_map[tc])
+					filter->seid = ch->seid;
+			}
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
+/**
+ * i40e_parse_tc_actions - Parse tc actions
+ * @vsi: Pointer to VSI
+ * @exts: Pointer to the tc filter extensions (actions)
+ * @filter: Pointer to cloud filter structure
+ *
+ **/
+static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
+				 struct i40e_cloud_filter *filter)
+{
+	const struct tc_action *a;
+	LIST_HEAD(actions);
+	int err;
+
+	if (!tcf_exts_has_actions(exts))
+		return -EINVAL;
+
+	tcf_exts_to_list(exts, &actions);
+	list_for_each_entry(a, &actions, list) {
+		/* Drop action */
+		if (is_tcf_gact_shot(a)) {
+			dev_err(&vsi->back->pdev->dev,
+				"Cloud filters do not support the drop action.\n");
+			return -EOPNOTSUPP;
+		}
+
+		/* Redirect to a traffic class on the same device */
+		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
+			int ifindex = tcf_mirred_ifindex(a);
+			u8 tc = tcf_mirred_tc(a);
+
+			err = i40e_handle_redirect_action(vsi, ifindex, tc,
+							  filter);
+			if (err == 0)
+				return err;
+		}
+	}
+	return -EINVAL;
+}
+
+/**
+ * i40e_configure_clsflower - Configure tc flower filters
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ *
+ **/
+static int i40e_configure_clsflower(struct i40e_vsi *vsi,
+				    struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct i40e_pf *pf = vsi->back;
+	int err = 0;
+
+	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
+	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
+		return -EBUSY;
+
+	if (pf->fdir_pf_active_filters ||
+	    (!hlist_empty(&pf->fdir_filter_list))) {
+		dev_err(&vsi->back->pdev->dev,
+			"Flow Director Sideband filters exist; turn ntuple off to configure cloud filters\n");
+		return -EINVAL;
+	}
+
+	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
+		dev_err(&vsi->back->pdev->dev,
+			"Disabling Flow Director Sideband to configure cloud filters via tc-flower\n");
+		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+	}
+
+	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
+	if (!filter)
+		return -ENOMEM;
+
+	filter->cookie = cls_flower->cookie;
+
+	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
+	if (err < 0)
+		goto err;
+
+	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
+	if (err < 0)
+		goto err;
+
+	/* Add cloud filter */
+	if (filter->dst_port)
+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
+	else
+		err = i40e_add_del_cloud_filter(vsi, filter, true);
+
+	if (err) {
+		dev_err(&pf->pdev->dev,
+			"Failed to add cloud filter, err %s\n",
+			i40e_stat_str(&pf->hw, err));
+		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
+		goto err;
+	}
+
+	/* add filter to the ordered list */
+	INIT_HLIST_NODE(&filter->cloud_node);
+
+	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
+
+	pf->num_cloud_filters++;
+
+	return err;
+err:
+	kfree(filter);
+	return err;
+}
+
+/**
+ * i40e_find_cloud_filter - Find the cloud filter in the list
+ * @vsi: Pointer to VSI
+ * @cookie: filter specific cookie
+ *
+ **/
+static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
+							unsigned long *cookie)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct hlist_node *node2;
+
+	hlist_for_each_entry_safe(filter, node2,
+				  &vsi->back->cloud_filter_list, cloud_node)
+		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
+			return filter;
+	return NULL;
+}
+
+/**
+ * i40e_delete_clsflower - Remove tc flower filters
+ * @vsi: Pointer to VSI
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ *
+ **/
+static int i40e_delete_clsflower(struct i40e_vsi *vsi,
+				 struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_cloud_filter *filter = NULL;
+	struct i40e_pf *pf = vsi->back;
+	int err = 0;
+
+	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
+
+	if (!filter)
+		return -EINVAL;
+
+	hlist_del(&filter->cloud_node);
+
+	if (filter->dst_port)
+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
+	else
+		err = i40e_add_del_cloud_filter(vsi, filter, false);
+	if (err) {
+		kfree(filter);
+		dev_err(&pf->pdev->dev,
+			"Failed to delete cloud filter, err %s\n",
+			i40e_stat_str(&pf->hw, err));
+		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
+	}
+
+	kfree(filter);
+	pf->num_cloud_filters--;
+
+	if (!pf->num_cloud_filters)
+		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
+		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
+			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+		}
+	return 0;
+}
+
+/**
+ * i40e_setup_tc_cls_flower - flower classifier offloads
+ * @netdev: net device to configure
+ * @cls_flower: Pointer to struct tc_cls_flower_offload
+ **/
+static int i40e_setup_tc_cls_flower(struct net_device *netdev,
+				    struct tc_cls_flower_offload *cls_flower)
+{
+	struct i40e_netdev_priv *np = netdev_priv(netdev);
+	struct i40e_vsi *vsi = np->vsi;
+
+	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
+	    cls_flower->common.chain_index)
+		return -EOPNOTSUPP;
+
+	switch (cls_flower->command) {
+	case TC_CLSFLOWER_REPLACE:
+		return i40e_configure_clsflower(vsi, cls_flower);
+	case TC_CLSFLOWER_DESTROY:
+		return i40e_delete_clsflower(vsi, cls_flower);
+	case TC_CLSFLOWER_STATS:
+		return -EOPNOTSUPP;
+	default:
+		return -EINVAL;
+	}
+}
+
 static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
 			   void *type_data)
 {
-	if (type != TC_SETUP_MQPRIO)
+	switch (type) {
+	case TC_SETUP_MQPRIO:
+		return i40e_setup_tc(netdev, type_data);
+	case TC_SETUP_CLSFLOWER:
+		return i40e_setup_tc_cls_flower(netdev, type_data);
+	default:
 		return -EOPNOTSUPP;
-
-	return i40e_setup_tc(netdev, type_data);
+	}
 }
 
 /**
@@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
 		kfree(cfilter);
 	}
 	pf->num_cloud_filters = 0;
+
+	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
+	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
+		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
+		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+	}
 }
 
 /**
@@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
  * i40e_get_capabilities - get info about the HW
  * @pf: the PF struct
  **/
-static int i40e_get_capabilities(struct i40e_pf *pf)
+static int i40e_get_capabilities(struct i40e_pf *pf,
+				 enum i40e_admin_queue_opc list_type)
 {
 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
 	u16 data_size;
@@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
 
 		/* this loads the data into the hw struct for us */
 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
-					    &data_size,
-					    i40e_aqc_opc_list_func_capabilities,
-					    NULL);
+						    &data_size, list_type,
+						    NULL);
 		/* data loaded, buffer no longer needed */
 		kfree(cap_buf);
 
@@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
 		}
 	} while (err);
 
-	if (pf->hw.debug_mask & I40E_DEBUG_USER)
-		dev_info(&pf->pdev->dev,
-			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
-			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
-			 pf->hw.func_caps.num_msix_vectors,
-			 pf->hw.func_caps.num_msix_vectors_vf,
-			 pf->hw.func_caps.fd_filters_guaranteed,
-			 pf->hw.func_caps.fd_filters_best_effort,
-			 pf->hw.func_caps.num_tx_qp,
-			 pf->hw.func_caps.num_vsis);
-
+	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
+		if (list_type == i40e_aqc_opc_list_func_capabilities) {
+			dev_info(&pf->pdev->dev,
+				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
+				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
+				 pf->hw.func_caps.num_msix_vectors,
+				 pf->hw.func_caps.num_msix_vectors_vf,
+				 pf->hw.func_caps.fd_filters_guaranteed,
+				 pf->hw.func_caps.fd_filters_best_effort,
+				 pf->hw.func_caps.num_tx_qp,
+				 pf->hw.func_caps.num_vsis);
+		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
+			dev_info(&pf->pdev->dev,
+				 "switch_mode=0x%04x, function_valid=0x%08x\n",
+				 pf->hw.dev_caps.switch_mode,
+				 pf->hw.dev_caps.valid_functions);
+			dev_info(&pf->pdev->dev,
+				 "SR-IOV=%d, num_vfs for all function=%u\n",
+				 pf->hw.dev_caps.sr_iov_1_1,
+				 pf->hw.dev_caps.num_vfs);
+			dev_info(&pf->pdev->dev,
+				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
+				 pf->hw.dev_caps.num_vsis,
+				 pf->hw.dev_caps.num_rx_qp,
+				 pf->hw.dev_caps.num_tx_qp);
+		}
+	}
+	if (list_type == i40e_aqc_opc_list_func_capabilities) {
 #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
 		       + pf->hw.func_caps.num_vfs)
-	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
-		dev_info(&pf->pdev->dev,
-			 "got num_vsis %d, setting num_vsis to %d\n",
-			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
-		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
+		if (pf->hw.revision_id == 0 &&
+		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
+			dev_info(&pf->pdev->dev,
+				 "got num_vsis %d, setting num_vsis to %d\n",
+				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
+			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
+		}
 	}
-
 	return 0;
 }
 
@@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
 		if (!vsi) {
 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 			return;
 		}
 	}
@@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
 }
 
 /**
+ * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
+ * @vsi: PF main vsi
+ * @seid: seid of main or channel VSIs
+ *
+ * Rebuilds cloud filters associated with main VSI and channel VSIs if they
+ * existed before reset
+ **/
+static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
+{
+	struct i40e_cloud_filter *cfilter;
+	struct i40e_pf *pf = vsi->back;
+	struct hlist_node *node;
+	i40e_status ret;
+
+	/* Add cloud filters back if they exist */
+	if (hlist_empty(&pf->cloud_filter_list))
+		return 0;
+
+	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
+				  cloud_node) {
+		if (cfilter->seid != seid)
+			continue;
+
+		if (cfilter->dst_port)
+			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
+								true);
+		else
+			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
+
+		if (ret) {
+			dev_dbg(&pf->pdev->dev,
+				"Failed to rebuild cloud filter, err %s aq_err %s\n",
+				i40e_stat_str(&pf->hw, ret),
+				i40e_aq_str(&pf->hw,
+					    pf->hw.aq.asq_last_status));
+			return ret;
+		}
+	}
+	return 0;
+}
+
+/**
  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
  * @vsi: PF main vsi
  *
@@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
 						I40E_BW_CREDIT_DIVISOR,
 				ch->seid);
 		}
+		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
+		if (ret) {
+			dev_dbg(&vsi->back->pdev->dev,
+				"Failed to rebuild cloud filters for channel VSI %u\n",
+				ch->seid);
+			return ret;
+		}
 	}
 	return 0;
 }
@@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 		i40e_verify_eeprom(pf);
 
 	i40e_clear_pxe_mode(hw);
-	ret = i40e_get_capabilities(pf);
+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
 	if (ret)
 		goto end_core_reset;
 
@@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
 			goto end_unlock;
 	}
 
+	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
+	if (ret)
+		goto end_unlock;
+
 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
 	 * for this main VSI if they exist
 	 */
@@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
 	    (pf->num_fdsb_msix == 0)) {
 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	}
 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
 	    (pf->num_vmdq_msix == 0)) {
@@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
 				       I40E_FLAG_FD_SB_ENABLED	|
 				       I40E_FLAG_FD_ATR_ENABLED	|
 				       I40E_FLAG_VMDQ_ENABLED);
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 
 			/* rework the queue expectations without MSIX */
 			i40e_determine_queue_usage(pf);
@@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
 		/* Enable filters and mark for reset */
 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
 			need_reset = true;
-		/* enable FD_SB only if there is MSI-X vector */
-		if (pf->num_fdsb_msix > 0)
+		/* enable FD_SB only if there is MSI-X vector and no cloud
+		 * filters exist
+		 */
+		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
+		}
 	} else {
 		/* turn off filters, mark for reset and clear SW filter list */
 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
@@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
 		}
 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
+
 		/* reset fd counters */
 		pf->fd_add_err = 0;
 		pf->fd_atr_cnt = 0;
@@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
 		netdev->hw_features |= NETIF_F_NTUPLE;
 	hw_features = hw_enc_features		|
 		      NETIF_F_HW_VLAN_CTAG_TX	|
-		      NETIF_F_HW_VLAN_CTAG_RX;
+		      NETIF_F_HW_VLAN_CTAG_RX	|
+		      NETIF_F_HW_TC;
 
 	netdev->hw_features |= hw_features;
 
@@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 	*/
 
 	if ((pf->hw.pf_id == 0) &&
-	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
+	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+		pf->last_sw_conf_flags = flags;
+	}
 
 	if (pf->hw.pf_id == 0) {
 		u16 valid_flags;
@@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
 					     pf->hw.aq.asq_last_status));
 			/* not a fatal problem, just keep going */
 		}
+		pf->last_sw_conf_valid_flags = valid_flags;
 	}
 
 	/* first time setup */
@@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			       I40E_FLAG_DCB_ENABLED	|
 			       I40E_FLAG_SRIOV_ENABLED	|
 			       I40E_FLAG_VMDQ_ENABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
 				  I40E_FLAG_FD_SB_ENABLED |
 				  I40E_FLAG_FD_ATR_ENABLED |
@@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			       I40E_FLAG_FD_ATR_ENABLED	|
 			       I40E_FLAG_DCB_ENABLED	|
 			       I40E_FLAG_VMDQ_ENABLED);
+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 	} else {
 		/* Not enough queues for all TCs */
 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
@@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
 			queues_left -= 1; /* save 1 queue for FD */
 		} else {
 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
 		}
 	}
@@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
 
 	i40e_clear_pxe_mode(hw);
-	err = i40e_get_capabilities(pf);
+	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
 	if (err)
 		goto err_adminq_setup;
 
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 92869f5..3bb6659 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
 		struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
 				   struct i40e_asq_cmd_details *cmd_details);
+i40e_status
+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count);
+enum i40e_status_code
+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count);
+enum i40e_status_code
+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
+			  struct i40e_aqc_cloud_filters_element_data *filters,
+			  u8 filter_count);
+i40e_status
+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
+			     struct i40e_aqc_cloud_filters_element_bb *filters,
+			     u8 filter_count);
 i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
 			       struct i40e_lldp_variables *lldp_cfg);
 /* i40e_common */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index c019f46..af38881 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
 #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
 #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
 #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
+#define I40E_SWITCH_MODE_MASK		0xF
 
 	u32  management_mode;
 	u32  mng_protocols_over_mctp;
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
index b8c78bf..4fe27f0 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
@@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
 		struct {
 			u8 data[16];
 		} v6;
+		struct {
+			__le16 data[8];
+		} raw_v6;
 	} ipaddr;
 	__le16	flags;
 #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH v3 0/7] tc-flower based cloud filters in i40e
  2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13 10:12   ` Jiri Pirko
  -1 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-13 10:12 UTC (permalink / raw)
  To: Amritha Nambiar
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev

Wed, Sep 13, 2017 at 11:59:13AM CEST, amritha.nambiar@intel.com wrote:
>This patch series enables configuring cloud filters in i40e
>using the tc-flower classifier. The only tc-filter action
>supported is to redirect packets to a traffic class on the
>same device. The mirror/redirect action is extended to
>accept a traffic class to achieve this.
>
>The cloud filters are added for a VSI and are cleaned up when
>the VSI is deleted. The filters that match on L4 ports need
>enhanced admin queue functions with big buffer support for
>extended fields in cloud filter commands.
>
>Example:
># tc qdisc add dev eth0 ingress
>
># ethtool -K eth0 hw-tc-offload on
>
># tc filter add dev eth0 protocol ip parent ffff: prio 1 flower\
>  dst_ip 192.168.1.1/32 ip_proto udp dst_port 22\
>  skip_sw action mirred ingress redirect dev eth0 tclass 1
>
># tc filter show dev eth0 parent ffff:
>filter protocol ip pref 1 flower chain 0
>filter protocol ip pref 1 flower chain 0 handle 0x1
>  eth_type ipv4
>  ip_proto udp
>  dst_ip 192.168.1.1
>  dst_port 22
>  skip_sw
>  in_hw
>        action order 1: mirred (Ingress Redirect to device eth0) stolen tclass 1
>        index 7 ref 1 bind 1
>
>v3: Added an extra patch to clean up white-space noise. Cleaned up
>some lengthy function names. Used __be32 array for ipv6 address.
>Used macro for IP version. Minor formatting changes.
>
>---
>
>Amritha Nambiar (7):
>      tc_mirred: Clean up white-space noise
>      sched: act_mirred: Traffic class option for mirror/redirect action
>      i40e: Map TCs with the VSI seids
>      i40e: Cloud filter mode for set_switch_config command
>      i40e: Admin queue definitions for cloud filters
>      i40e: Clean up of cloud filters
>      i40e: Enable cloud filters via tc-flower

Would be good to use get_maintainers script and cc people if you want
comments.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH v3 2/7] sched: act_mirred: Traffic class option for mirror/redirect action
  2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13 13:18     ` Jiri Pirko
  -1 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-13 13:18 UTC (permalink / raw)
  To: Amritha Nambiar
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev, mlxsw

Wed, Sep 13, 2017 at 11:59:24AM CEST, amritha.nambiar@intel.com wrote:
>Adds optional traffic class parameter to the mirror/redirect action.
>The mirror/redirect action is extended to forward to a traffic
>class on the device if the traffic class index is provided in
>addition to the device's ifindex.

Do I understand it correctly that you just abuse mirred to pass tclass
index down to the driver, without actually doing anything with the value
inside mirred-code ? That is a bit confusing for me.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
  2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
@ 2017-09-13 13:26     ` Jiri Pirko
  -1 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-13 13:26 UTC (permalink / raw)
  To: Amritha Nambiar
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev, mlxsw

Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar@intel.com wrote:
>This patch enables tc-flower based hardware offloads. tc flower
>filter provided by the kernel is configured as driver specific
>cloud filter. The patch implements functions and admin queue
>commands needed to support cloud filters in the driver and
>adds cloud filters to configure these tc-flower filters.
>
>The only action supported is to redirect packets to a traffic class
>on the same device.

So basically you are not doing redirect, you are just setting tclass for
matched packets, right? Why you use mirred for this? I think that
you might consider extending g_act for that:

# tc filter add dev eth0 protocol ip ingress \
  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
  action tclass 0


>
># tc qdisc add dev eth0 ingress
># ethtool -K eth0 hw-tc-offload on
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 0
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 1
>
># tc filter add dev eth0 protocol ipv6 parent ffff:\
>  prio 3 flower dst_ip fe8::200:1\
>  ip_proto udp dst_port 66 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 1
>
>Delete tc flower filter:
>Example:
>
># tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
># tc filter del dev eth0 parent ffff:
>
>Flow Director Sideband is disabled while configuring cloud filters
>via tc-flower and until any cloud filter exists.
>
>Unsupported matches when cloud filters are added using enhanced
>big buffer cloud filter mode of underlying switch include:
>1. source port and source IP
>2. Combined MAC address and IP fields.
>3. Not specifying L4 port
>
>These filter matches can however be used to redirect traffic to
>the main VSI (tc 0) which does not require the enhanced big buffer
>cloud filter support.
>
>v3: Cleaned up some lengthy function names. Changed ipv6 address to
>__be32 array instead of u8 array. Used macro for IP version. Minor
>formatting changes.
>v2:
>1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>2. Moved dev_info for add/deleting cloud filters in else condition
>3. Fixed some format specifier in dev_err logs
>4. Refactored i40e_get_capabilities to take an additional
>   list_type parameter and use it to query device and function
>   level capabilities.
>5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>   to verify if redirect to a traffic class is supported.
>6. Added comments for Geneve fix in cloud filter big buffer AQ
>   function definitions.
>7. Cleaned up setup_tc interface to rebase and work with Jiri's
>   updates, separate function to process tc cls flower offloads.
>8. Changes to make Flow Director Sideband and Cloud filters mutually
>   exclusive.
>
>Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>---
> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
> 7 files changed, 1202 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>index 6018fb6..b110519 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e.h
>@@ -55,6 +55,8 @@
> #include <linux/net_tstamp.h>
> #include <linux/ptp_clock_kernel.h>
> #include <net/pkt_cls.h>
>+#include <net/tc_act/tc_gact.h>
>+#include <net/tc_act/tc_mirred.h>
> #include "i40e_type.h"
> #include "i40e_prototype.h"
> #include "i40e_client.h"
>@@ -252,9 +254,52 @@ struct i40e_fdir_filter {
> 	u32 fd_id;
> };
> 
>+#define IPV4_VERSION 4
>+#define IPV6_VERSION 6
>+
>+#define I40E_CLOUD_FIELD_OMAC	0x01
>+#define I40E_CLOUD_FIELD_IMAC	0x02
>+#define I40E_CLOUD_FIELD_IVLAN	0x04
>+#define I40E_CLOUD_FIELD_TEN_ID	0x08
>+#define I40E_CLOUD_FIELD_IIP	0x10
>+
>+#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>+						 I40E_CLOUD_FIELD_IVLAN)
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>+						 I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>+						  I40E_CLOUD_FIELD_IMAC | \
>+						  I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>+						   I40E_CLOUD_FIELD_IVLAN | \
>+						   I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>+
> struct i40e_cloud_filter {
> 	struct hlist_node cloud_node;
> 	unsigned long cookie;
>+	/* cloud filter input set follows */
>+	u8 dst_mac[ETH_ALEN];
>+	u8 src_mac[ETH_ALEN];
>+	__be16 vlan_id;
>+	__be32 dst_ip;
>+	__be32 src_ip;
>+	__be32 dst_ipv6[4];
>+	__be32 src_ipv6[4];
>+	__be16 dst_port;
>+	__be16 src_port;
>+	u32 ip_version;
>+	u8 ip_proto;	/* IPPROTO value */
>+	/* L4 port type: src or destination port */
>+#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>+#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>+	u8 port_type;
>+	u32 tenant_id;
>+	u8 flags;
>+#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>+	u8 tunnel_type;
> 	u16 seid;	/* filter control */
> };
> 
>@@ -491,6 +536,8 @@ struct i40e_pf {
> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>+#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>+#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
> 
> 	struct i40e_client_instance *cinst;
> 	bool stat_offsets_loaded;
>@@ -573,6 +620,8 @@ struct i40e_pf {
> 	u16 phy_led_val;
> 
> 	u16 override_q_count;
>+	u16 last_sw_conf_flags;
>+	u16 last_sw_conf_valid_flags;
> };
> 
> /**
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>index 2e567c2..feb3d42 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>@@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
> 		struct {
> 			u8 data[16];
> 		} v6;
>+		struct {
>+			__le16 data[8];
>+		} raw_v6;
> 	} ipaddr;
> 	__le16	flags;
> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>index 9567702..d9c9665 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>@@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
> 
> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
> 				   track_id, &offset, &info, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_add_cloud_filters
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to add cloud filters to
>+ * @filters: Buffer which contains the filters to be added
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Set the cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
>+ * of the function.
>+ *
>+ **/
>+enum i40e_status_code
>+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	enum i40e_status_code status;
>+	u16 buff_len;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_add_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_add_cloud_filters_bb
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to add cloud filters to
>+ * @filters: Buffer which contains the filters in big buffer to be added
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Set the big buffer cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>+ * function.
>+ *
>+ **/
>+i40e_status
>+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	i40e_status status;
>+	u16 buff_len;
>+	int i;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_add_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>+
>+	for (i = 0; i < filter_count; i++) {
>+		u16 tnl_type;
>+		u32 ti;
>+
>+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>+
>+		/* For Geneve, the VNI sits one byte further into the
>+		 * tenant_id field than the Tenant ID does for the other
>+		 * tunnel types, so shift it into place.
>+		 */
>+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>+			ti = le32_to_cpu(filters[i].element.tenant_id);
>+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>+		}
>+	}
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_rem_cloud_filters
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to remove cloud filters from
>+ * @filters: Buffer which contains the filters to be removed
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Remove the cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
>+ * of the function.
>+ *
>+ **/
>+enum i40e_status_code
>+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	enum i40e_status_code status;
>+	u16 buff_len;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_remove_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_rem_cloud_filters_bb
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to remove cloud filters from
>+ * @filters: Buffer which contains the filters in big buffer to be removed
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Remove the big buffer cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>+ * function.
>+ *
>+ **/
>+i40e_status
>+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	i40e_status status;
>+	u16 buff_len;
>+	int i;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_remove_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>+
>+	for (i = 0; i < filter_count; i++) {
>+		u16 tnl_type;
>+		u32 ti;
>+
>+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>+
>+		/* For Geneve, the VNI sits one byte further into the
>+		 * tenant_id field than the Tenant ID does for the other
>+		 * tunnel types, so shift it into place.
>+		 */
>+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>+			ti = le32_to_cpu(filters[i].element.tenant_id);
>+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>+		}
>+	}
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
> 	return status;
> }
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>index afcf08a..96ee608 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>@@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>+				     struct i40e_cloud_filter *filter,
>+				     bool add);
>+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>+					     struct i40e_cloud_filter *filter,
>+					     bool add);
>+static int i40e_get_capabilities(struct i40e_pf *pf,
>+				 enum i40e_admin_queue_opc list_type);
>+
> 
> /* i40e_pci_tbl - PCI Device ID Table
>  *
>@@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>  **/
> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
> {
>+	enum i40e_admin_queue_err last_aq_status;
>+	struct i40e_cloud_filter *cfilter;
> 	struct i40e_channel *ch, *ch_tmp;
>+	struct i40e_pf *pf = vsi->back;
>+	struct hlist_node *node;
> 	int ret, i;
> 
> 	/* Reset rss size that was stored when reconfiguring rss for
>@@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
> 				 "Failed to reset tx rate for ch->seid %u\n",
> 				 ch->seid);
> 
>+		/* delete cloud filters associated with this channel */
>+		hlist_for_each_entry_safe(cfilter, node,
>+					  &pf->cloud_filter_list, cloud_node) {
>+			if (cfilter->seid != ch->seid)
>+				continue;
>+
>+			hash_del(&cfilter->cloud_node);
>+			if (cfilter->dst_port)
>+				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>+									cfilter,
>+									false);
>+			else
>+				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>+								false);
>+			last_aq_status = pf->hw.aq.asq_last_status;
>+			if (ret)
>+				dev_info(&pf->pdev->dev,
>+					 "Failed to delete cloud filter, err %s aq_err %s\n",
>+					 i40e_stat_str(&pf->hw, ret),
>+					 i40e_aq_str(&pf->hw, last_aq_status));
>+			kfree(cfilter);
>+		}
>+
> 		/* delete VSI from FW */
> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
> 					     NULL);
>@@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
> }
> 
> /**
>+ * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>+ * @vsi: ptr to VSI which has PF backing
>+ * @l4type: true for TCP and false for UDP
>+ * @port_type: true if port is destination and false if port is source
>+ *
>+ * Sets up the switch mode if it needs to be changed, and validates that
>+ * the requested mode is one of the allowed modes.
>+ **/
>+static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>+					     bool port_type)
>+{
>+	u8 mode;
>+	struct i40e_pf *pf = vsi->back;
>+	struct i40e_hw *hw = &pf->hw;
>+	int ret;
>+
>+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>+	if (ret)
>+		return -EINVAL;
>+
>+	if (hw->dev_caps.switch_mode) {
>+		/* if switch mode is set, support mode2 (non-tunneled for
>+		 * cloud filter) for now
>+		 */
>+		u32 switch_mode = hw->dev_caps.switch_mode &
>+							I40E_SWITCH_MODE_MASK;
>+		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>+			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>+				return 0;
>+			dev_err(&pf->pdev->dev,
>+				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>+				hw->dev_caps.switch_mode);
>+			return -EINVAL;
>+		}
>+	}
>+
>+	/* port_type: true for destination port and false for source port
>+	 * For now, supports only destination port type
>+	 */
>+	if (!port_type) {
>+		dev_err(&pf->pdev->dev, "src port type not supported\n");
>+		return -EINVAL;
>+	}
>+
>+	/* Set Bit 7 to be valid */
>+	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>+
>+	/* Set L4type to both TCP and UDP support */
>+	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>+
>+	/* Set cloud filter mode */
>+	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>+
>+	/* Prep mode field for set_switch_config */
>+	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>+					pf->last_sw_conf_valid_flags,
>+					mode, NULL);
>+	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>+		dev_err(&pf->pdev->dev,
>+			"couldn't set switch config bits, err %s aq_err %s\n",
>+			i40e_stat_str(hw, ret),
>+			i40e_aq_str(hw,
>+				    hw->aq.asq_last_status));
>+
>+	return ret;
>+}
>+
>+/**
>  * i40e_create_queue_channel - function to create channel
>  * @vsi: VSI to be configured
>  * @ch: ptr to channel (it contains channel specific params)
>@@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
> 	return ret;
> }
> 
>+/**
>+ * i40e_set_cld_element - sets cloud filter element data
>+ * @filter: cloud filter rule
>+ * @cld: ptr to cloud filter element data
>+ *
>+ * Helper function to copy data into the cloud filter element.
>+ **/
>+static inline void
>+i40e_set_cld_element(struct i40e_cloud_filter *filter,
>+		     struct i40e_aqc_cloud_filters_element_data *cld)
>+{
>+	int i, j;
>+	u32 ipa;
>+
>+	memset(cld, 0, sizeof(*cld));
>+	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>+	ether_addr_copy(cld->inner_mac, filter->src_mac);
>+
>+	if (filter->ip_version == IPV6_VERSION) {
>+#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>+		for (i = 0, j = 0; i < 4; i++, j += 2) {
>+			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>+			ipa = cpu_to_le32(ipa);
>+			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>+		}
>+	} else {
>+		ipa = be32_to_cpu(filter->dst_ip);
>+		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>+	}
>+
>+	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>+
>+	/* tenant_id is not supported by FW now, once the support is enabled
>+	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>+	 */
>+	if (filter->tenant_id)
>+		return;
>+}
>+
>+/**
>+ * i40e_add_del_cloud_filter - Add/del cloud filter
>+ * @vsi: pointer to VSI
>+ * @filter: cloud filter rule
>+ * @add: if true, add, if false, delete
>+ *
>+ * Add or delete a cloud filter for a specific flow spec.
>+ * Returns 0 if the filter was successfully added or deleted.
>+ **/
>+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>+				     struct i40e_cloud_filter *filter, bool add)
>+{
>+	struct i40e_aqc_cloud_filters_element_data cld_filter;
>+	struct i40e_pf *pf = vsi->back;
>+	int ret;
>+	static const u16 flag_table[128] = {
>+		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>+		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>+			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>+		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>+	};
>+
>+	if (filter->flags >= ARRAY_SIZE(flag_table))
>+		return I40E_ERR_CONFIG;
>+
>+	/* copy element needed to add cloud filter from filter */
>+	i40e_set_cld_element(filter, &cld_filter);
>+
>+	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>+		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>+					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>+
>+	if (filter->ip_version == IPV6_VERSION)
>+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>+						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>+	else
>+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>+						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>+
>+	if (add)
>+		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>+						&cld_filter, 1);
>+	else
>+		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>+						&cld_filter, 1);
>+	if (ret)
>+		dev_dbg(&pf->pdev->dev,
>+			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>+			add ? "add" : "delete", filter->dst_port, ret,
>+			pf->hw.aq.asq_last_status);
>+	else
>+		dev_info(&pf->pdev->dev,
>+			 "%s cloud filter for VSI: %d\n",
>+			 add ? "Added" : "Deleted", filter->seid);
>+	return ret;
>+}
>+
>+/**
>+ * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>+ * @vsi: pointer to VSI
>+ * @filter: cloud filter rule
>+ * @add: if true, add, if false, delete
>+ *
>+ * Add or delete a cloud filter for a specific flow spec using big buffer.
>+ * Returns 0 if the filter was successfully added or deleted.
>+ **/
>+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>+					     struct i40e_cloud_filter *filter,
>+					     bool add)
>+{
>+	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>+	struct i40e_pf *pf = vsi->back;
>+	int ret;
>+
>+	/* Filters with both a valid outer and inner MAC address are not
>+	 * supported
>+	 */
>+	if (is_valid_ether_addr(filter->dst_mac) &&
>+	    is_valid_ether_addr(filter->src_mac))
>+		return -EINVAL;
>+
>+	/* Make sure the L4 port is specified and bail out otherwise; a
>+	 * channel-specific cloud filter needs a non-zero L4 port.
>+	 */
>+	if (!filter->dst_port)
>+		return -EINVAL;
>+
>+	/* adding filter using src_port/src_ip is not supported at this stage */
>+	if (filter->src_port || filter->src_ip ||
>+	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>+		return -EINVAL;
>+
>+	/* copy element needed to add cloud filter from filter */
>+	i40e_set_cld_element(filter, &cld_filter.element);
>+
>+	if (is_valid_ether_addr(filter->dst_mac) ||
>+	    is_valid_ether_addr(filter->src_mac) ||
>+	    is_multicast_ether_addr(filter->dst_mac) ||
>+	    is_multicast_ether_addr(filter->src_mac)) {
>+		/* MAC + IP : unsupported mode */
>+		if (filter->dst_ip)
>+			return -EINVAL;
>+
>+		/* since we validated that L4 port must be valid before
>+		 * we get here, start with respective "flags" value
>+		 * and update if vlan is present or not
>+		 */
>+		cld_filter.element.flags =
>+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>+
>+		if (filter->vlan_id) {
>+			cld_filter.element.flags =
>+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>+		}
>+
>+	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>+		cld_filter.element.flags =
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>+		if (filter->ip_version == IPV6_VERSION)
>+			cld_filter.element.flags |=
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>+		else
>+			cld_filter.element.flags |=
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>+	} else {
>+		dev_err(&pf->pdev->dev,
>+			"either mac or ip has to be valid for cloud filter\n");
>+		return -EINVAL;
>+	}
>+
>+	/* Now copy L4 port in Byte 6..7 in general fields */
>+	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>+						be16_to_cpu(filter->dst_port);
>+
>+	if (add) {
>+		bool proto_type, port_type;
>+
>+		proto_type = (filter->ip_proto == IPPROTO_TCP);
>+		port_type = (filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
>+
>+		/* For now, src port based cloud filter for channel is not
>+		 * supported
>+		 */
>+		if (!port_type) {
>+			dev_err(&pf->pdev->dev,
>+				"unsupported port type (src port)\n");
>+			return -EOPNOTSUPP;
>+		}
>+
>+		/* Validate current device switch mode, change if necessary */
>+		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>+							port_type);
>+		if (ret) {
>+			dev_err(&pf->pdev->dev,
>+				"failed to set switch mode, ret %d\n",
>+				ret);
>+			return ret;
>+		}
>+
>+		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>+						   &cld_filter, 1);
>+	} else {
>+		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>+						   &cld_filter, 1);
>+	}
>+
>+	if (ret)
>+		dev_dbg(&pf->pdev->dev,
>+			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>+			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>+	else
>+		dev_info(&pf->pdev->dev,
>+			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>+			 add ? "Added" : "Deleted", filter->seid,
>+			 ntohs(filter->dst_port));
>+	return ret;
>+}
>+
>+/**
>+ * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>+ * @vsi: Pointer to VSI
>+ * @f: Pointer to struct tc_cls_flower_offload
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>+				 struct tc_cls_flower_offload *f,
>+				 struct i40e_cloud_filter *filter)
>+{
>+	struct i40e_pf *pf = vsi->back;
>+	u16 addr_type = 0;
>+	u8 field_flags = 0;
>+
>+	if (f->dissector->used_keys &
>+	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>+	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>+	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>+	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>+	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>+		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>+			f->dissector->used_keys);
>+		return -EOPNOTSUPP;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>+		struct flow_dissector_key_keyid *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>+						  f->key);
>+
>+		struct flow_dissector_key_keyid *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>+						  f->mask);
>+
>+		if (mask->keyid != 0)
>+			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>+
>+		filter->tenant_id = be32_to_cpu(key->keyid);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>+		struct flow_dissector_key_basic *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_BASIC,
>+						  f->key);
>+
>+		filter->ip_proto = key->ip_proto;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>+		struct flow_dissector_key_eth_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>+						  f->key);
>+
>+		struct flow_dissector_key_eth_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>+						  f->mask);
>+
>+		/* use is_broadcast and is_zero to check for all 0xf or 0 */
>+		if (!is_zero_ether_addr(mask->dst)) {
>+			if (is_broadcast_ether_addr(mask->dst)) {
>+				field_flags |= I40E_CLOUD_FIELD_OMAC;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>+					mask->dst);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (!is_zero_ether_addr(mask->src)) {
>+			if (is_broadcast_ether_addr(mask->src)) {
>+				field_flags |= I40E_CLOUD_FIELD_IMAC;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>+					mask->src);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+		ether_addr_copy(filter->dst_mac, key->dst);
>+		ether_addr_copy(filter->src_mac, key->src);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>+		struct flow_dissector_key_vlan *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_VLAN,
>+						  f->key);
>+		struct flow_dissector_key_vlan *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_VLAN,
>+						  f->mask);
>+
>+		if (mask->vlan_id) {
>+			if (mask->vlan_id == VLAN_VID_MASK) {
>+				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>+
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>+					mask->vlan_id);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		filter->vlan_id = cpu_to_be16(key->vlan_id);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>+		struct flow_dissector_key_control *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_CONTROL,
>+						  f->key);
>+
>+		addr_type = key->addr_type;
>+	}
>+
>+	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>+		struct flow_dissector_key_ipv4_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>+						  f->key);
>+		struct flow_dissector_key_ipv4_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>+						  f->mask);
>+
>+		if (mask->dst) {
>+			if (mask->dst == cpu_to_be32(0xffffffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>+					be32_to_cpu(mask->dst));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (mask->src) {
>+			if (mask->src == cpu_to_be32(0xffffffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>+					be32_to_cpu(mask->src));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>+			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>+			return I40E_ERR_CONFIG;
>+		}
>+		filter->dst_ip = key->dst;
>+		filter->src_ip = key->src;
>+		filter->ip_version = IPV4_VERSION;
>+	}
>+
>+	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>+		struct flow_dissector_key_ipv6_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>+						  f->key);
>+		struct flow_dissector_key_ipv6_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>+						  f->mask);
>+
>+		/* src and dest IPV6 address should not be LOOPBACK
>+		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
>+		 */
>+		if (ipv6_addr_loopback(&key->dst) ||
>+		    ipv6_addr_loopback(&key->src)) {
>+			dev_err(&pf->pdev->dev,
>+				"Bad ipv6, addr is LOOPBACK\n");
>+			return I40E_ERR_CONFIG;
>+		}
>+		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>+			field_flags |= I40E_CLOUD_FIELD_IIP;
>+
>+		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>+		       sizeof(filter->src_ipv6));
>+		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>+		       sizeof(filter->dst_ipv6));
>+
>+		/* mark it as IPv6 filter, to be used later */
>+		filter->ip_version = IPV6_VERSION;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>+		struct flow_dissector_key_ports *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_PORTS,
>+						  f->key);
>+		struct flow_dissector_key_ports *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_PORTS,
>+						  f->mask);
>+
>+		if (mask->src) {
>+			if (mask->src == cpu_to_be16(0xffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>+					be16_to_cpu(mask->src));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (mask->dst) {
>+			if (mask->dst == cpu_to_be16(0xffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>+					be16_to_cpu(mask->dst));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		filter->dst_port = key->dst;
>+		filter->src_port = key->src;
>+
>+		/* For now, only the destination port is supported */
>+		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>+
>+		switch (filter->ip_proto) {
>+		case IPPROTO_TCP:
>+		case IPPROTO_UDP:
>+			break;
>+		default:
>+			dev_err(&pf->pdev->dev,
>+				"Only UDP and TCP transport are supported\n");
>+			return -EINVAL;
>+		}
>+	}
>+	filter->flags = field_flags;
>+	return 0;
>+}
>+
>+/**
>+ * i40e_handle_redirect_action - Forward to a traffic class on the device
>+ * @vsi: Pointer to VSI
>+ * @ifindex: ifindex of the device to forward to
>+ * @tc: traffic class index on the device
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>+				       struct i40e_cloud_filter *filter)
>+{
>+	struct i40e_channel *ch, *ch_tmp;
>+
>+	/* redirect to a traffic class on the same device */
>+	if (vsi->netdev->ifindex == ifindex) {
>+		if (tc == 0) {
>+			filter->seid = vsi->seid;
>+			return 0;
>+		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>+			if (!filter->dst_port) {
>+				dev_err(&vsi->back->pdev->dev,
>+					"Specify a destination port to redirect to a non-default traffic class\n");
>+				return -EINVAL;
>+			}
>+			if (list_empty(&vsi->ch_list))
>+				return -EINVAL;
>+			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>+						 list) {
>+				if (ch->seid == vsi->tc_seid_map[tc])
>+					filter->seid = ch->seid;
>+			}
>+			return 0;
>+		}
>+	}
>+	return -EINVAL;
>+}
>+
>+/**
>+ * i40e_parse_tc_actions - Parse tc actions
>+ * @vsi: Pointer to VSI
>+ * @exts: Pointer to struct tcf_exts
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>+				 struct i40e_cloud_filter *filter)
>+{
>+	const struct tc_action *a;
>+	LIST_HEAD(actions);
>+	int err;
>+
>+	if (!tcf_exts_has_actions(exts))
>+		return -EINVAL;
>+
>+	tcf_exts_to_list(exts, &actions);
>+	list_for_each_entry(a, &actions, list) {
>+		/* Drop action */
>+		if (is_tcf_gact_shot(a)) {
>+			dev_err(&vsi->back->pdev->dev,
>+				"Cloud filters do not support the drop action.\n");
>+			return -EOPNOTSUPP;
>+		}
>+
>+		/* Redirect to a traffic class on the same device */
>+		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>+			int ifindex = tcf_mirred_ifindex(a);
>+			u8 tc = tcf_mirred_tc(a);
>+
>+			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>+							  filter);
>+			if (err == 0)
>+				return err;
>+		}
>+	}
>+	return -EINVAL;
>+}
>+
>+/**
>+ * i40e_configure_clsflower - Configure tc flower filters
>+ * @vsi: Pointer to VSI
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ *
>+ **/
>+static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>+				    struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct i40e_pf *pf = vsi->back;
>+	int err = 0;
>+
>+	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>+	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>+		return -EBUSY;
>+
>+	if (pf->fdir_pf_active_filters ||
>+	    (!hlist_empty(&pf->fdir_filter_list))) {
>+		dev_err(&vsi->back->pdev->dev,
>+			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
>+		return -EINVAL;
>+	}
>+
>+	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>+		dev_err(&vsi->back->pdev->dev,
>+			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>+		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+	}
>+
>+	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>+	if (!filter)
>+		return -ENOMEM;
>+
>+	filter->cookie = cls_flower->cookie;
>+
>+	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>+	if (err < 0)
>+		goto err;
>+
>+	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>+	if (err < 0)
>+		goto err;
>+
>+	/* Add cloud filter */
>+	if (filter->dst_port)
>+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>+	else
>+		err = i40e_add_del_cloud_filter(vsi, filter, true);
>+
>+	if (err) {
>+		dev_err(&pf->pdev->dev,
>+			"Failed to add cloud filter, err %s\n",
>+			i40e_stat_str(&pf->hw, err));
>+		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>+		goto err;
>+	}
>+
>+	/* add filter to the ordered list */
>+	INIT_HLIST_NODE(&filter->cloud_node);
>+
>+	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>+
>+	pf->num_cloud_filters++;
>+
>+	return err;
>+err:
>+	kfree(filter);
>+	return err;
>+}
>+
>+/**
>+ * i40e_find_cloud_filter - Find the cloud filter in the list
>+ * @vsi: Pointer to VSI
>+ * @cookie: filter specific cookie
>+ *
>+ **/
>+static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>+							unsigned long *cookie)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct hlist_node *node2;
>+
>+	hlist_for_each_entry_safe(filter, node2,
>+				  &vsi->back->cloud_filter_list, cloud_node)
>+		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>+			return filter;
>+	return NULL;
>+}
>+
>+/**
>+ * i40e_delete_clsflower - Remove tc flower filters
>+ * @vsi: Pointer to VSI
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ *
>+ **/
>+static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>+				 struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct i40e_pf *pf = vsi->back;
>+	int err = 0;
>+
>+	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>+
>+	if (!filter)
>+		return -EINVAL;
>+
>+	hash_del(&filter->cloud_node);
>+
>+	if (filter->dst_port)
>+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>+	else
>+		err = i40e_add_del_cloud_filter(vsi, filter, false);
>+	if (err) {
>+		kfree(filter);
>+		dev_err(&pf->pdev->dev,
>+			"Failed to delete cloud filter, err %s\n",
>+			i40e_stat_str(&pf->hw, err));
>+		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>+	}
>+
>+	kfree(filter);
>+	pf->num_cloud_filters--;
>+
>+	if (!pf->num_cloud_filters)
>+		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>+		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>+			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+		}
>+	return 0;
>+}
>+
>+/**
>+ * i40e_setup_tc_cls_flower - flower classifier offloads
>+ * @netdev: net device to configure
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ **/
>+static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>+				    struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_netdev_priv *np = netdev_priv(netdev);
>+	struct i40e_vsi *vsi = np->vsi;
>+
>+	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>+	    cls_flower->common.chain_index)
>+		return -EOPNOTSUPP;
>+
>+	switch (cls_flower->command) {
>+	case TC_CLSFLOWER_REPLACE:
>+		return i40e_configure_clsflower(vsi, cls_flower);
>+	case TC_CLSFLOWER_DESTROY:
>+		return i40e_delete_clsflower(vsi, cls_flower);
>+	case TC_CLSFLOWER_STATS:
>+		return -EOPNOTSUPP;
>+	default:
>+		return -EINVAL;
>+	}
>+}
>+
> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
> 			   void *type_data)
> {
>-	if (type != TC_SETUP_MQPRIO)
>+	switch (type) {
>+	case TC_SETUP_MQPRIO:
>+		return i40e_setup_tc(netdev, type_data);
>+	case TC_SETUP_CLSFLOWER:
>+		return i40e_setup_tc_cls_flower(netdev, type_data);
>+	default:
> 		return -EOPNOTSUPP;
>-
>-	return i40e_setup_tc(netdev, type_data);
>+	}
> }
> 
> /**
>@@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
> 		kfree(cfilter);
> 	}
> 	pf->num_cloud_filters = 0;
>+
>+	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>+	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>+		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+	}
> }
> 
> /**
>@@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>  * i40e_get_capabilities - get info about the HW
>  * @pf: the PF struct
>  **/
>-static int i40e_get_capabilities(struct i40e_pf *pf)
>+static int i40e_get_capabilities(struct i40e_pf *pf,
>+				 enum i40e_admin_queue_opc list_type)
> {
> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
> 	u16 data_size;
>@@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
> 
> 		/* this loads the data into the hw struct for us */
> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>-					    &data_size,
>-					    i40e_aqc_opc_list_func_capabilities,
>-					    NULL);
>+						    &data_size, list_type,
>+						    NULL);
> 		/* data loaded, buffer no longer needed */
> 		kfree(cap_buf);
> 
>@@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
> 		}
> 	} while (err);
> 
>-	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>-		dev_info(&pf->pdev->dev,
>-			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>-			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>-			 pf->hw.func_caps.num_msix_vectors,
>-			 pf->hw.func_caps.num_msix_vectors_vf,
>-			 pf->hw.func_caps.fd_filters_guaranteed,
>-			 pf->hw.func_caps.fd_filters_best_effort,
>-			 pf->hw.func_caps.num_tx_qp,
>-			 pf->hw.func_caps.num_vsis);
>-
>+	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>+		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>+			dev_info(&pf->pdev->dev,
>+				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>+				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>+				 pf->hw.func_caps.num_msix_vectors,
>+				 pf->hw.func_caps.num_msix_vectors_vf,
>+				 pf->hw.func_caps.fd_filters_guaranteed,
>+				 pf->hw.func_caps.fd_filters_best_effort,
>+				 pf->hw.func_caps.num_tx_qp,
>+				 pf->hw.func_caps.num_vsis);
>+		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>+			dev_info(&pf->pdev->dev,
>+				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>+				 pf->hw.dev_caps.switch_mode,
>+				 pf->hw.dev_caps.valid_functions);
>+			dev_info(&pf->pdev->dev,
>+				 "SR-IOV=%d, num_vfs for all function=%u\n",
>+				 pf->hw.dev_caps.sr_iov_1_1,
>+				 pf->hw.dev_caps.num_vfs);
>+			dev_info(&pf->pdev->dev,
>+				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>+				 pf->hw.dev_caps.num_vsis,
>+				 pf->hw.dev_caps.num_rx_qp,
>+				 pf->hw.dev_caps.num_tx_qp);
>+		}
>+	}
>+	if (list_type == i40e_aqc_opc_list_func_capabilities) {
> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
> 		       + pf->hw.func_caps.num_vfs)
>-	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>-		dev_info(&pf->pdev->dev,
>-			 "got num_vsis %d, setting num_vsis to %d\n",
>-			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>-		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>+		if (pf->hw.revision_id == 0 &&
>+		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>+			dev_info(&pf->pdev->dev,
>+				 "got num_vsis %d, setting num_vsis to %d\n",
>+				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>+			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>+		}
> 	}
>-
> 	return 0;
> }
> 
>@@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
> 		if (!vsi) {
> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 			return;
> 		}
> 	}
>@@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
> }
> 
> /**
>+ * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>+ * @vsi: PF main vsi
>+ * @seid: seid of main or channel VSIs
>+ *
>+ * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>+ * existed before reset
>+ **/
>+static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>+{
>+	struct i40e_cloud_filter *cfilter;
>+	struct i40e_pf *pf = vsi->back;
>+	struct hlist_node *node;
>+	i40e_status ret;
>+
>+	/* Add cloud filters back if they exist */
>+	if (hlist_empty(&pf->cloud_filter_list))
>+		return 0;
>+
>+	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>+				  cloud_node) {
>+		if (cfilter->seid != seid)
>+			continue;
>+
>+		if (cfilter->dst_port)
>+			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>+								true);
>+		else
>+			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>+
>+		if (ret) {
>+			dev_dbg(&pf->pdev->dev,
>+				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>+				i40e_stat_str(&pf->hw, ret),
>+				i40e_aq_str(&pf->hw,
>+					    pf->hw.aq.asq_last_status));
>+			return ret;
>+		}
>+	}
>+	return 0;
>+}
>+
>+/**
>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>  * @vsi: PF main vsi
>  *
>@@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
> 						I40E_BW_CREDIT_DIVISOR,
> 				ch->seid);
> 		}
>+		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>+		if (ret) {
>+			dev_dbg(&vsi->back->pdev->dev,
>+				"Failed to rebuild cloud filters for channel VSI %u\n",
>+				ch->seid);
>+			return ret;
>+		}
> 	}
> 	return 0;
> }
>@@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
> 		i40e_verify_eeprom(pf);
> 
> 	i40e_clear_pxe_mode(hw);
>-	ret = i40e_get_capabilities(pf);
>+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
> 	if (ret)
> 		goto end_core_reset;
> 
>@@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
> 			goto end_unlock;
> 	}
> 
>+	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>+	if (ret)
>+		goto end_unlock;
>+
> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
> 	 * for this main VSI if they exist
> 	 */
>@@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
> 	    (pf->num_fdsb_msix == 0)) {
> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	}
> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
> 	    (pf->num_vmdq_msix == 0)) {
>@@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
> 				       I40E_FLAG_FD_SB_ENABLED	|
> 				       I40E_FLAG_FD_ATR_ENABLED	|
> 				       I40E_FLAG_VMDQ_ENABLED);
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 
> 			/* rework the queue expectations without MSIX */
> 			i40e_determine_queue_usage(pf);
>@@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
> 		/* Enable filters and mark for reset */
> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
> 			need_reset = true;
>-		/* enable FD_SB only if there is MSI-X vector */
>-		if (pf->num_fdsb_msix > 0)
>+		/* enable FD_SB only if there is MSI-X vector and no cloud
>+		 * filters exist
>+		 */
>+		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+		}
> 	} else {
> 		/* turn off filters, mark for reset and clear SW filter list */
> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>@@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
> 		}
> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>+
> 		/* reset fd counters */
> 		pf->fd_add_err = 0;
> 		pf->fd_atr_cnt = 0;
>@@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
> 		netdev->hw_features |= NETIF_F_NTUPLE;
> 	hw_features = hw_enc_features		|
> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>-		      NETIF_F_HW_VLAN_CTAG_RX;
>+		      NETIF_F_HW_VLAN_CTAG_RX	|
>+		      NETIF_F_HW_TC;
> 
> 	netdev->hw_features |= hw_features;
> 
>@@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
> 	*/
> 
> 	if ((pf->hw.pf_id == 0) &&
>-	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>+	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>+		pf->last_sw_conf_flags = flags;
>+	}
> 
> 	if (pf->hw.pf_id == 0) {
> 		u16 valid_flags;
>@@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
> 					     pf->hw.aq.asq_last_status));
> 			/* not a fatal problem, just keep going */
> 		}
>+		pf->last_sw_conf_valid_flags = valid_flags;
> 	}
> 
> 	/* first time setup */
>@@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			       I40E_FLAG_DCB_ENABLED	|
> 			       I40E_FLAG_SRIOV_ENABLED	|
> 			       I40E_FLAG_VMDQ_ENABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
> 				  I40E_FLAG_FD_SB_ENABLED |
> 				  I40E_FLAG_FD_ATR_ENABLED |
>@@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			       I40E_FLAG_FD_ATR_ENABLED	|
> 			       I40E_FLAG_DCB_ENABLED	|
> 			       I40E_FLAG_VMDQ_ENABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	} else {
> 		/* Not enough queues for all TCs */
> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>@@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			queues_left -= 1; /* save 1 queue for FD */
> 		} else {
> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
> 		}
> 	}
>@@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
> 
> 	i40e_clear_pxe_mode(hw);
>-	err = i40e_get_capabilities(pf);
>+	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
> 	if (err)
> 		goto err_adminq_setup;
> 
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>index 92869f5..3bb6659 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>@@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
> 		struct i40e_asq_cmd_details *cmd_details);
> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
> 				   struct i40e_asq_cmd_details *cmd_details);
>+i40e_status
>+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count);
>+enum i40e_status_code
>+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count);
>+enum i40e_status_code
>+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count);
>+i40e_status
>+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count);
> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
> 			       struct i40e_lldp_variables *lldp_cfg);
> /* i40e_common */
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>index c019f46..af38881 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>@@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>+#define I40E_SWITCH_MODE_MASK		0xF
> 
> 	u32  management_mode;
> 	u32  mng_protocols_over_mctp;
>diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>index b8c78bf..4fe27f0 100644
>--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>@@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
> 		struct {
> 			u8 data[16];
> 		} v6;
>+		struct {
>+			__le16 data[8];
>+		} raw_v6;
> 	} ipaddr;
> 	__le16	flags;
> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>


* [Intel-wired-lan] [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
@ 2017-09-13 13:26     ` Jiri Pirko
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-13 13:26 UTC (permalink / raw)
  To: intel-wired-lan

Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar at intel.com wrote:
>This patch enables tc-flower based hardware offloads. tc flower
>filter provided by the kernel is configured as driver specific
>cloud filter. The patch implements functions and admin queue
>commands needed to support cloud filters in the driver and
>adds cloud filters to configure these tc-flower filters.
>
>The only action supported is to redirect packets to a traffic class
>on the same device.

So basically you are not doing a redirect, you are just setting the tclass
for matched packets, right? Why do you use mirred for this? I think
you might consider extending g_act for that:

# tc filter add dev eth0 protocol ip ingress \
  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
  action tclass 0


>
># tc qdisc add dev eth0 ingress
># ethtool -K eth0 hw-tc-offload on
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 0
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 1
>
># tc filter add dev eth0 protocol ipv6 parent ffff:\
>  prio 3 flower dst_ip fe8::200:1\
>  ip_proto udp dst_port 66 skip_sw\
>  action mirred ingress redirect dev eth0 tclass 1
>
>Delete tc flower filter:
>Example:
>
># tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
># tc filter del dev eth0 parent ffff:
>
>Flow Director Sideband is disabled while configuring cloud filters
>via tc-flower and until any cloud filter exists.
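
The mutual exclusion above implies an ordering when configuring from
userspace; a minimal sketch, assuming an i40e interface named eth0 and
illustrative filter values (neither is taken from the patch):

```shell
# ntuple (Flow Director Sideband) and tc-flower cloud filters are
# mutually exclusive, so turn ntuple off before adding cloud filters.
ethtool -K eth0 ntuple off
ethtool -K eth0 hw-tc-offload on

# Now the ingress qdisc and tc-flower cloud filters can be installed.
tc qdisc add dev eth0 ingress
tc filter add dev eth0 protocol ip parent ffff: prio 1 flower \
  dst_ip 192.168.3.5/32 ip_proto udp dst_port 25 skip_sw \
  action mirred ingress redirect dev eth0 tclass 1
```

While any cloud filter exists, re-enabling ntuple would be rejected;
deleting the last cloud filter restores Sideband eligibility.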
>
>Unsupported matches when cloud filters are added using enhanced
>big buffer cloud filter mode of underlying switch include:
>1. Source port and source IP.
>2. Combined MAC address and IP fields.
>3. Not specifying an L4 port.
>
>These filter matches can however be used to redirect traffic to
>the main VSI (tc 0) which does not require the enhanced big buffer
>cloud filter support.
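
For instance, a match without an L4 port — unsupported in the big
buffer mode — can still be steered to the main VSI; a hedged sketch
(the MAC address and priority here are illustrative, not from the
patch):

```shell
# No dst_port match, so the big buffer cloud filter path is not
# usable; redirecting to tc 0 (main VSI) uses the plain cloud
# filter command instead, which accepts this match.
tc filter add dev eth0 protocol ip parent ffff: prio 4 flower \
  dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
  action mirred ingress redirect dev eth0 tclass 0
```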
>
>v3: Cleaned up some lengthy function names. Changed ipv6 address to
>__be32 array instead of u8 array. Used macro for IP version. Minor
>formatting changes.
>v2:
>1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>2. Moved dev_info for add/deleting cloud filters in else condition
>3. Fixed some format specifier in dev_err logs
>4. Refactored i40e_get_capabilities to take an additional
>   list_type parameter and use it to query device and function
>   level capabilities.
>5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>   to verify if redirect to a traffic class is supported.
>6. Added comments for Geneve fix in cloud filter big buffer AQ
>   function definitions.
>7. Cleaned up setup_tc interface to rebase and work with Jiri's
>   updates, separate function to process tc cls flower offloads.
>8. Changes to make Flow Director Sideband and Cloud filters mutually
>   exclusive.
>
>Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>---
> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
> 7 files changed, 1202 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>index 6018fb6..b110519 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e.h
>@@ -55,6 +55,8 @@
> #include <linux/net_tstamp.h>
> #include <linux/ptp_clock_kernel.h>
> #include <net/pkt_cls.h>
>+#include <net/tc_act/tc_gact.h>
>+#include <net/tc_act/tc_mirred.h>
> #include "i40e_type.h"
> #include "i40e_prototype.h"
> #include "i40e_client.h"
>@@ -252,9 +254,52 @@ struct i40e_fdir_filter {
> 	u32 fd_id;
> };
> 
>+#define IPV4_VERSION 4
>+#define IPV6_VERSION 6
>+
>+#define I40E_CLOUD_FIELD_OMAC	0x01
>+#define I40E_CLOUD_FIELD_IMAC	0x02
>+#define I40E_CLOUD_FIELD_IVLAN	0x04
>+#define I40E_CLOUD_FIELD_TEN_ID	0x08
>+#define I40E_CLOUD_FIELD_IIP	0x10
>+
>+#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>+						 I40E_CLOUD_FIELD_IVLAN)
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>+						 I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>+						  I40E_CLOUD_FIELD_IMAC | \
>+						  I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>+						   I40E_CLOUD_FIELD_IVLAN | \
>+						   I40E_CLOUD_FIELD_TEN_ID)
>+#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>+
> struct i40e_cloud_filter {
> 	struct hlist_node cloud_node;
> 	unsigned long cookie;
>+	/* cloud filter input set follows */
>+	u8 dst_mac[ETH_ALEN];
>+	u8 src_mac[ETH_ALEN];
>+	__be16 vlan_id;
>+	__be32 dst_ip;
>+	__be32 src_ip;
>+	__be32 dst_ipv6[4];
>+	__be32 src_ipv6[4];
>+	__be16 dst_port;
>+	__be16 src_port;
>+	u32 ip_version;
>+	u8 ip_proto;	/* IPPROTO value */
>+	/* L4 port type: src or destination port */
>+#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>+#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>+	u8 port_type;
>+	u32 tenant_id;
>+	u8 flags;
>+#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>+	u8 tunnel_type;
> 	u16 seid;	/* filter control */
> };
> 
>@@ -491,6 +536,8 @@ struct i40e_pf {
> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>+#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>+#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
> 
> 	struct i40e_client_instance *cinst;
> 	bool stat_offsets_loaded;
>@@ -573,6 +620,8 @@ struct i40e_pf {
> 	u16 phy_led_val;
> 
> 	u16 override_q_count;
>+	u16 last_sw_conf_flags;
>+	u16 last_sw_conf_valid_flags;
> };
> 
> /**
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>index 2e567c2..feb3d42 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>@@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
> 		struct {
> 			u8 data[16];
> 		} v6;
>+		struct {
>+			__le16 data[8];
>+		} raw_v6;
> 	} ipaddr;
> 	__le16	flags;
> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>index 9567702..d9c9665 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>@@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
> 
> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
> 				   track_id, &offset, &info, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_add_cloud_filters
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to add cloud filters from
>+ * @filters: Buffer which contains the filters to be added
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Set the cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
>+ * of the function.
>+ *
>+ **/
>+enum i40e_status_code
>+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	enum i40e_status_code status;
>+	u16 buff_len;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_add_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_add_cloud_filters_bb
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to add cloud filters from
>+ * @filters: Buffer which contains the filters in big buffer to be added
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Set the big buffer cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>+ * function.
>+ *
>+ **/
>+i40e_status
>+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	i40e_status status;
>+	u16 buff_len;
>+	int i;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_add_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>+
>+	for (i = 0; i < filter_count; i++) {
>+		u16 tnl_type;
>+		u32 ti;
>+
>+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>+
>+		/* For Geneve, the VNI is placed one byte beyond the
>+		 * Tenant ID offset used by the other tunnel types, so
>+		 * shift it into place here.
>+		 */
>+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>+			ti = le32_to_cpu(filters[i].element.tenant_id);
>+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>+		}
>+	}
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_rem_cloud_filters
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to remove cloud filters from
>+ * @filters: Buffer which contains the filters to be removed
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Remove the cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_data are filled in by the caller
>+ * of the function.
>+ *
>+ **/
>+enum i40e_status_code
>+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	enum i40e_status_code status;
>+	u16 buff_len;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_remove_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
>+	return status;
>+}
>+
>+/**
>+ * i40e_aq_rem_cloud_filters_bb
>+ * @hw: pointer to the hardware structure
>+ * @seid: VSI seid to remove cloud filters from
>+ * @filters: Buffer which contains the filters in big buffer to be removed
>+ * @filter_count: number of filters contained in the buffer
>+ *
>+ * Remove the big buffer cloud filters for a given VSI.  The contents of the
>+ * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>+ * function.
>+ *
>+ **/
>+i40e_status
>+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count)
>+{
>+	struct i40e_aq_desc desc;
>+	struct i40e_aqc_add_remove_cloud_filters *cmd =
>+	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>+	i40e_status status;
>+	u16 buff_len;
>+	int i;
>+
>+	i40e_fill_default_direct_cmd_desc(&desc,
>+					  i40e_aqc_opc_remove_cloud_filters);
>+
>+	buff_len = filter_count * sizeof(*filters);
>+	desc.datalen = cpu_to_le16(buff_len);
>+	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>+	cmd->num_filters = filter_count;
>+	cmd->seid = cpu_to_le16(seid);
>+	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>+
>+	for (i = 0; i < filter_count; i++) {
>+		u16 tnl_type;
>+		u32 ti;
>+
>+		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>+			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>+
>+		/* For Geneve, the VNI is placed one byte beyond the
>+		 * Tenant ID offset used by the other tunnel types, so
>+		 * shift it into place here.
>+		 */
>+		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>+			ti = le32_to_cpu(filters[i].element.tenant_id);
>+			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>+		}
>+	}
>+
>+	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>+
> 	return status;
> }
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>index afcf08a..96ee608 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>@@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>+				     struct i40e_cloud_filter *filter,
>+				     bool add);
>+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>+					     struct i40e_cloud_filter *filter,
>+					     bool add);
>+static int i40e_get_capabilities(struct i40e_pf *pf,
>+				 enum i40e_admin_queue_opc list_type);
>+
> 
> /* i40e_pci_tbl - PCI Device ID Table
>  *
>@@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>  **/
> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
> {
>+	enum i40e_admin_queue_err last_aq_status;
>+	struct i40e_cloud_filter *cfilter;
> 	struct i40e_channel *ch, *ch_tmp;
>+	struct i40e_pf *pf = vsi->back;
>+	struct hlist_node *node;
> 	int ret, i;
> 
> 	/* Reset rss size that was stored when reconfiguring rss for
>@@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
> 				 "Failed to reset tx rate for ch->seid %u\n",
> 				 ch->seid);
> 
>+		/* delete cloud filters associated with this channel */
>+		hlist_for_each_entry_safe(cfilter, node,
>+					  &pf->cloud_filter_list, cloud_node) {
>+			if (cfilter->seid != ch->seid)
>+				continue;
>+
>+			hash_del(&cfilter->cloud_node);
>+			if (cfilter->dst_port)
>+				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>+									cfilter,
>+									false);
>+			else
>+				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>+								false);
>+			last_aq_status = pf->hw.aq.asq_last_status;
>+			if (ret)
>+				dev_info(&pf->pdev->dev,
>+					 "Failed to delete cloud filter, err %s aq_err %s\n",
>+					 i40e_stat_str(&pf->hw, ret),
>+					 i40e_aq_str(&pf->hw, last_aq_status));
>+			kfree(cfilter);
>+		}
>+
> 		/* delete VSI from FW */
> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
> 					     NULL);
>@@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
> }
> 
> /**
>+ * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>+ * @vsi: ptr to VSI which has PF backing
>+ * @l4type: true for TCP and false for UDP
>+ * @port_type: true if port is destination and false if port is source
>+ *
>+ * Sets up the switch mode correctly if it needs to be changed, validating
>+ * that only allowed modes are used.
>+ **/
>+static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>+					     bool port_type)
>+{
>+	u8 mode;
>+	struct i40e_pf *pf = vsi->back;
>+	struct i40e_hw *hw = &pf->hw;
>+	int ret;
>+
>+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>+	if (ret)
>+		return -EINVAL;
>+
>+	if (hw->dev_caps.switch_mode) {
>+		/* if switch mode is set, support mode2 (non-tunneled for
>+		 * cloud filter) for now
>+		 */
>+		u32 switch_mode = hw->dev_caps.switch_mode &
>+							I40E_SWITCH_MODE_MASK;
>+		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>+			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>+				return 0;
>+			dev_err(&pf->pdev->dev,
>+				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>+				hw->dev_caps.switch_mode);
>+			return -EINVAL;
>+		}
>+	}
>+
>+	/* port_type: true for destination port and false for source port
>+	 * For now, supports only destination port type
>+	 */
>+	if (!port_type) {
>+		dev_err(&pf->pdev->dev, "src port type not supported\n");
>+		return -EINVAL;
>+	}
>+
>+	/* Set Bit 7 to be valid */
>+	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>+
>+	/* Set L4type to both TCP and UDP support */
>+	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>+
>+	/* Set cloud filter mode */
>+	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>+
>+	/* Prep mode field for set_switch_config */
>+	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>+					pf->last_sw_conf_valid_flags,
>+					mode, NULL);
>+	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>+		dev_err(&pf->pdev->dev,
>+			"couldn't set switch config bits, err %s aq_err %s\n",
>+			i40e_stat_str(hw, ret),
>+			i40e_aq_str(hw,
>+				    hw->aq.asq_last_status));
>+
>+	return ret;
>+}
>+
>+/**
>  * i40e_create_queue_channel - function to create channel
>  * @vsi: VSI to be configured
>  * @ch: ptr to channel (it contains channel specific params)
>@@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
> 	return ret;
> }
> 
>+/**
>+ * i40e_set_cld_element - sets cloud filter element data
>+ * @filter: cloud filter rule
>+ * @cld: ptr to cloud filter element data
>+ *
>+ * This is a helper function to copy data into the cloud filter element.
>+ **/
>+static inline void
>+i40e_set_cld_element(struct i40e_cloud_filter *filter,
>+		     struct i40e_aqc_cloud_filters_element_data *cld)
>+{
>+	int i, j;
>+	u32 ipa;
>+
>+	memset(cld, 0, sizeof(*cld));
>+	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>+	ether_addr_copy(cld->inner_mac, filter->src_mac);
>+
>+	if (filter->ip_version == IPV6_VERSION) {
>+#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>+		for (i = 0, j = 0; i < 4; i++, j += 2) {
>+			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>+			ipa = cpu_to_le32(ipa);
>+			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>+		}
>+	} else {
>+		ipa = be32_to_cpu(filter->dst_ip);
>+		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>+	}
>+
>+	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>+
>+	/* tenant_id is not supported by the FW yet; once support is enabled,
>+	 * fill cld->tenant_id with cpu_to_le32(filter->tenant_id)
>+	 */
>+	if (filter->tenant_id)
>+		return;
>+}
>+
>+/**
>+ * i40e_add_del_cloud_filter - Add/del cloud filter
>+ * @vsi: pointer to VSI
>+ * @filter: cloud filter rule
>+ * @add: if true, add, if false, delete
>+ *
>+ * Add or delete a cloud filter for a specific flow spec.
>+ * Returns 0 if the filter was successfully added or deleted.
>+ **/
>+static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>+				     struct i40e_cloud_filter *filter, bool add)
>+{
>+	struct i40e_aqc_cloud_filters_element_data cld_filter;
>+	struct i40e_pf *pf = vsi->back;
>+	int ret;
>+	static const u16 flag_table[128] = {
>+		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>+		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>+			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>+		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>+		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>+			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>+	};
>+
>+	if (filter->flags >= ARRAY_SIZE(flag_table))
>+		return I40E_ERR_CONFIG;
>+
>+	/* copy element needed to add cloud filter from filter */
>+	i40e_set_cld_element(filter, &cld_filter);
>+
>+	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>+		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>+					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>+
>+	if (filter->ip_version == IPV6_VERSION)
>+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>+						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>+	else
>+		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>+						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>+
>+	if (add)
>+		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>+						&cld_filter, 1);
>+	else
>+		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>+						&cld_filter, 1);
>+	if (ret)
>+		dev_dbg(&pf->pdev->dev,
>+			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>+			add ? "add" : "delete", filter->dst_port, ret,
>+			pf->hw.aq.asq_last_status);
>+	else
>+		dev_info(&pf->pdev->dev,
>+			 "%s cloud filter for VSI: %d\n",
>+			 add ? "Added" : "Deleted", filter->seid);
>+	return ret;
>+}
>+
>+/**
>+ * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>+ * @vsi: pointer to VSI
>+ * @filter: cloud filter rule
>+ * @add: if true, add, if false, delete
>+ *
>+ * Add or delete a cloud filter for a specific flow spec using big buffer.
>+ * Returns 0 if the filter was successfully added or deleted.
>+ **/
>+static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>+					     struct i40e_cloud_filter *filter,
>+					     bool add)
>+{
>+	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>+	struct i40e_pf *pf = vsi->back;
>+	int ret;
>+
>+	/* Both (Outer/Inner) valid mac_addr are not supported */
>+	if (is_valid_ether_addr(filter->dst_mac) &&
>+	    is_valid_ether_addr(filter->src_mac))
>+		return -EINVAL;
>+
>+	/* Make sure a port is specified, otherwise bail out; a channel
>+	 * specific cloud filter needs the L4 port to be non-zero
>+	 */
>+	if (!filter->dst_port)
>+		return -EINVAL;
>+
>+	/* adding filter using src_port/src_ip is not supported at this stage */
>+	if (filter->src_port || filter->src_ip ||
>+	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>+		return -EINVAL;
>+
>+	/* copy element needed to add cloud filter from filter */
>+	i40e_set_cld_element(filter, &cld_filter.element);
>+
>+	if (is_valid_ether_addr(filter->dst_mac) ||
>+	    is_valid_ether_addr(filter->src_mac) ||
>+	    is_multicast_ether_addr(filter->dst_mac) ||
>+	    is_multicast_ether_addr(filter->src_mac)) {
>+		/* MAC + IP : unsupported mode */
>+		if (filter->dst_ip)
>+			return -EINVAL;
>+
>+		/* since we validated that L4 port must be valid before
>+		 * we get here, start with respective "flags" value
>+		 * and update if vlan is present or not
>+		 */
>+		cld_filter.element.flags =
>+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>+
>+		if (filter->vlan_id) {
>+			cld_filter.element.flags =
>+			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>+		}
>+
>+	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>+		cld_filter.element.flags =
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>+		if (filter->ip_version == IPV6_VERSION)
>+			cld_filter.element.flags |=
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>+		else
>+			cld_filter.element.flags |=
>+				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>+	} else {
>+		dev_err(&pf->pdev->dev,
>+			"either mac or ip has to be valid for cloud filter\n");
>+		return -EINVAL;
>+	}
>+
>+	/* Now copy L4 port in Byte 6..7 in general fields */
>+	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>+						be16_to_cpu(filter->dst_port);
>+
>+	if (add) {
>+		bool proto_type, port_type;
>+
>+		proto_type = filter->ip_proto == IPPROTO_TCP;
>+		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
>+
>+		/* For now, src port based cloud filter for channel is not
>+		 * supported
>+		 */
>+		if (!port_type) {
>+			dev_err(&pf->pdev->dev,
>+				"unsupported port type (src port)\n");
>+			return -EOPNOTSUPP;
>+		}
>+
>+		/* Validate current device switch mode, change if necessary */
>+		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>+							port_type);
>+		if (ret) {
>+			dev_err(&pf->pdev->dev,
>+				"failed to set switch mode, ret %d\n",
>+				ret);
>+			return ret;
>+		}
>+
>+		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>+						   &cld_filter, 1);
>+	} else {
>+		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>+						   &cld_filter, 1);
>+	}
>+
>+	if (ret)
>+		dev_dbg(&pf->pdev->dev,
>+			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>+			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>+	else
>+		dev_info(&pf->pdev->dev,
>+			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>+			 add ? "Added" : "Deleted", filter->seid,
>+			 ntohs(filter->dst_port));
>+	return ret;
>+}
>+
>+/**
>+ * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>+ * @vsi: Pointer to VSI
>+ * @f: Pointer to struct tc_cls_flower_offload
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>+				 struct tc_cls_flower_offload *f,
>+				 struct i40e_cloud_filter *filter)
>+{
>+	struct i40e_pf *pf = vsi->back;
>+	u16 addr_type = 0;
>+	u8 field_flags = 0;
>+
>+	if (f->dissector->used_keys &
>+	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>+	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>+	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>+	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>+	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>+	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>+		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>+			f->dissector->used_keys);
>+		return -EOPNOTSUPP;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>+		struct flow_dissector_key_keyid *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>+						  f->key);
>+
>+		struct flow_dissector_key_keyid *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>+						  f->mask);
>+
>+		if (mask->keyid != 0)
>+			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>+
>+		filter->tenant_id = be32_to_cpu(key->keyid);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>+		struct flow_dissector_key_basic *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_BASIC,
>+						  f->key);
>+
>+		filter->ip_proto = key->ip_proto;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>+		struct flow_dissector_key_eth_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>+						  f->key);
>+
>+		struct flow_dissector_key_eth_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>+						  f->mask);
>+
>+		/* use is_broadcast and is_zero to check for all 0xf or 0 */
>+		if (!is_zero_ether_addr(mask->dst)) {
>+			if (is_broadcast_ether_addr(mask->dst)) {
>+				field_flags |= I40E_CLOUD_FIELD_OMAC;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>+					mask->dst);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (!is_zero_ether_addr(mask->src)) {
>+			if (is_broadcast_ether_addr(mask->src)) {
>+				field_flags |= I40E_CLOUD_FIELD_IMAC;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>+					mask->src);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+		ether_addr_copy(filter->dst_mac, key->dst);
>+		ether_addr_copy(filter->src_mac, key->src);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>+		struct flow_dissector_key_vlan *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_VLAN,
>+						  f->key);
>+		struct flow_dissector_key_vlan *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_VLAN,
>+						  f->mask);
>+
>+		if (mask->vlan_id) {
>+			if (mask->vlan_id == VLAN_VID_MASK) {
>+				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>+
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>+					mask->vlan_id);
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		filter->vlan_id = cpu_to_be16(key->vlan_id);
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>+		struct flow_dissector_key_control *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_CONTROL,
>+						  f->key);
>+
>+		addr_type = key->addr_type;
>+	}
>+
>+	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>+		struct flow_dissector_key_ipv4_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>+						  f->key);
>+		struct flow_dissector_key_ipv4_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>+						  f->mask);
>+
>+		if (mask->dst) {
>+			if (mask->dst == cpu_to_be32(0xffffffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>+					be32_to_cpu(mask->dst));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (mask->src) {
>+			if (mask->src == cpu_to_be32(0xffffffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>+					be32_to_cpu(mask->src));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>+			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>+			return I40E_ERR_CONFIG;
>+		}
>+		filter->dst_ip = key->dst;
>+		filter->src_ip = key->src;
>+		filter->ip_version = IPV4_VERSION;
>+	}
>+
>+	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>+		struct flow_dissector_key_ipv6_addrs *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>+						  f->key);
>+		struct flow_dissector_key_ipv6_addrs *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>+						  f->mask);
>+
>+		/* src and dest IPV6 address should not be LOOPBACK
>+		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
>+		 */
>+		if (ipv6_addr_loopback(&key->dst) ||
>+		    ipv6_addr_loopback(&key->src)) {
>+			dev_err(&pf->pdev->dev,
>+				"Bad ipv6, addr is LOOPBACK\n");
>+			return I40E_ERR_CONFIG;
>+		}
>+		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>+			field_flags |= I40E_CLOUD_FIELD_IIP;
>+
>+		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>+		       sizeof(filter->src_ipv6));
>+		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>+		       sizeof(filter->dst_ipv6));
>+
>+		/* mark it as IPv6 filter, to be used later */
>+		filter->ip_version = IPV6_VERSION;
>+	}
>+
>+	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>+		struct flow_dissector_key_ports *key =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_PORTS,
>+						  f->key);
>+		struct flow_dissector_key_ports *mask =
>+			skb_flow_dissector_target(f->dissector,
>+						  FLOW_DISSECTOR_KEY_PORTS,
>+						  f->mask);
>+
>+		if (mask->src) {
>+			if (mask->src == cpu_to_be16(0xffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>+					be16_to_cpu(mask->src));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		if (mask->dst) {
>+			if (mask->dst == cpu_to_be16(0xffff)) {
>+				field_flags |= I40E_CLOUD_FIELD_IIP;
>+			} else {
>+				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>+					be16_to_cpu(mask->dst));
>+				return I40E_ERR_CONFIG;
>+			}
>+		}
>+
>+		filter->dst_port = key->dst;
>+		filter->src_port = key->src;
>+
>+		/* For now, only supports destination port */
>+		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>+
>+		switch (filter->ip_proto) {
>+		case IPPROTO_TCP:
>+		case IPPROTO_UDP:
>+			break;
>+		default:
>+			dev_err(&pf->pdev->dev,
>+				"Only UDP and TCP transport are supported\n");
>+			return -EINVAL;
>+		}
>+	}
>+	filter->flags = field_flags;
>+	return 0;
>+}
>+
>+/**
>+ * i40e_handle_redirect_action: Forward to a traffic class on the device
>+ * @vsi: Pointer to VSI
>+ * @ifindex: ifindex of the device to forward to
>+ * @tc: traffic class index on the device
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>+				       struct i40e_cloud_filter *filter)
>+{
>+	struct i40e_channel *ch, *ch_tmp;
>+
>+	/* redirect to a traffic class on the same device */
>+	if (vsi->netdev->ifindex == ifindex) {
>+		if (tc == 0) {
>+			filter->seid = vsi->seid;
>+			return 0;
>+		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>+			if (!filter->dst_port) {
>+				dev_err(&vsi->back->pdev->dev,
>+					"Specify destination port to redirect to traffic class that is not default\n");
>+				return -EINVAL;
>+			}
>+			if (list_empty(&vsi->ch_list))
>+				return -EINVAL;
>+			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>+						 list) {
>+				if (ch->seid == vsi->tc_seid_map[tc])
>+					filter->seid = ch->seid;
>+			}
>+			return 0;
>+		}
>+	}
>+	return -EINVAL;
>+}
>+
>+/**
>+ * i40e_parse_tc_actions - Parse tc actions
>+ * @vsi: Pointer to VSI
>+ * @exts: Pointer to the tc extended actions to parse
>+ * @filter: Pointer to cloud filter structure
>+ *
>+ **/
>+static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>+				 struct i40e_cloud_filter *filter)
>+{
>+	const struct tc_action *a;
>+	LIST_HEAD(actions);
>+	int err;
>+
>+	if (!tcf_exts_has_actions(exts))
>+		return -EINVAL;
>+
>+	tcf_exts_to_list(exts, &actions);
>+	list_for_each_entry(a, &actions, list) {
>+		/* Drop action */
>+		if (is_tcf_gact_shot(a)) {
>+			dev_err(&vsi->back->pdev->dev,
>+				"Cloud filters do not support the drop action.\n");
>+			return -EOPNOTSUPP;
>+		}
>+
>+		/* Redirect to a traffic class on the same device */
>+		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>+			int ifindex = tcf_mirred_ifindex(a);
>+			u8 tc = tcf_mirred_tc(a);
>+
>+			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>+							  filter);
>+			if (err == 0)
>+				return err;
>+		}
>+	}
>+	return -EINVAL;
>+}
>+
>+/**
>+ * i40e_configure_clsflower - Configure tc flower filters
>+ * @vsi: Pointer to VSI
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ *
>+ **/
>+static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>+				    struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct i40e_pf *pf = vsi->back;
>+	int err = 0;
>+
>+	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>+	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>+		return -EBUSY;
>+
>+	if (pf->fdir_pf_active_filters ||
>+	    (!hlist_empty(&pf->fdir_filter_list))) {
>+		dev_err(&vsi->back->pdev->dev,
>+			"Flow Director Sideband filters exists, turn ntuple off to configure cloud filters\n");
>+		return -EINVAL;
>+	}
>+
>+	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>+		dev_err(&vsi->back->pdev->dev,
>+			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>+		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+	}
>+
>+	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>+	if (!filter)
>+		return -ENOMEM;
>+
>+	filter->cookie = cls_flower->cookie;
>+
>+	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>+	if (err < 0)
>+		goto err;
>+
>+	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>+	if (err < 0)
>+		goto err;
>+
>+	/* Add cloud filter */
>+	if (filter->dst_port)
>+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>+	else
>+		err = i40e_add_del_cloud_filter(vsi, filter, true);
>+
>+	if (err) {
>+		dev_err(&pf->pdev->dev,
>+			"Failed to add cloud filter, err %s\n",
>+			i40e_stat_str(&pf->hw, err));
>+		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>+		goto err;
>+	}
>+
>+	/* add filter to the ordered list */
>+	INIT_HLIST_NODE(&filter->cloud_node);
>+
>+	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>+
>+	pf->num_cloud_filters++;
>+
>+	return err;
>+err:
>+	kfree(filter);
>+	return err;
>+}
>+
>+/**
>+ * i40e_find_cloud_filter - Find the cloud filter in the list
>+ * @vsi: Pointer to VSI
>+ * @cookie: filter specific cookie
>+ *
>+ **/
>+static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>+							unsigned long *cookie)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct hlist_node *node2;
>+
>+	hlist_for_each_entry_safe(filter, node2,
>+				  &vsi->back->cloud_filter_list, cloud_node)
>+		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>+			return filter;
>+	return NULL;
>+}
>+
>+/**
>+ * i40e_delete_clsflower - Remove tc flower filters
>+ * @vsi: Pointer to VSI
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ *
>+ **/
>+static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>+				 struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_cloud_filter *filter = NULL;
>+	struct i40e_pf *pf = vsi->back;
>+	int err = 0;
>+
>+	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>+
>+	if (!filter)
>+		return -EINVAL;
>+
>+	hash_del(&filter->cloud_node);
>+
>+	if (filter->dst_port)
>+		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>+	else
>+		err = i40e_add_del_cloud_filter(vsi, filter, false);
>+	if (err) {
>+		kfree(filter);
>+		dev_err(&pf->pdev->dev,
>+			"Failed to delete cloud filter, err %s\n",
>+			i40e_stat_str(&pf->hw, err));
>+		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>+	}
>+
>+	kfree(filter);
>+	pf->num_cloud_filters--;
>+
>+	if (!pf->num_cloud_filters)
>+		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>+		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>+			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+		}
>+	return 0;
>+}
>+
>+/**
>+ * i40e_setup_tc_cls_flower - flower classifier offloads
>+ * @netdev: net device to configure
>+ * @cls_flower: Pointer to struct tc_cls_flower_offload
>+ **/
>+static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>+				    struct tc_cls_flower_offload *cls_flower)
>+{
>+	struct i40e_netdev_priv *np = netdev_priv(netdev);
>+	struct i40e_vsi *vsi = np->vsi;
>+
>+	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>+	    cls_flower->common.chain_index)
>+		return -EOPNOTSUPP;
>+
>+	switch (cls_flower->command) {
>+	case TC_CLSFLOWER_REPLACE:
>+		return i40e_configure_clsflower(vsi, cls_flower);
>+	case TC_CLSFLOWER_DESTROY:
>+		return i40e_delete_clsflower(vsi, cls_flower);
>+	case TC_CLSFLOWER_STATS:
>+		return -EOPNOTSUPP;
>+	default:
>+		return -EINVAL;
>+	}
>+}
>+
> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
> 			   void *type_data)
> {
>-	if (type != TC_SETUP_MQPRIO)
>+	switch (type) {
>+	case TC_SETUP_MQPRIO:
>+		return i40e_setup_tc(netdev, type_data);
>+	case TC_SETUP_CLSFLOWER:
>+		return i40e_setup_tc_cls_flower(netdev, type_data);
>+	default:
> 		return -EOPNOTSUPP;
>-
>-	return i40e_setup_tc(netdev, type_data);
>+	}
> }
> 
> /**
>@@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
> 		kfree(cfilter);
> 	}
> 	pf->num_cloud_filters = 0;
>+
>+	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>+	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>+		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>+		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+	}
> }
> 
> /**
>@@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>  * i40e_get_capabilities - get info about the HW
>  * @pf: the PF struct
>  **/
>-static int i40e_get_capabilities(struct i40e_pf *pf)
>+static int i40e_get_capabilities(struct i40e_pf *pf,
>+				 enum i40e_admin_queue_opc list_type)
> {
> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
> 	u16 data_size;
>@@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
> 
> 		/* this loads the data into the hw struct for us */
> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>-					    &data_size,
>-					    i40e_aqc_opc_list_func_capabilities,
>-					    NULL);
>+						    &data_size, list_type,
>+						    NULL);
> 		/* data loaded, buffer no longer needed */
> 		kfree(cap_buf);
> 
>@@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
> 		}
> 	} while (err);
> 
>-	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>-		dev_info(&pf->pdev->dev,
>-			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>-			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>-			 pf->hw.func_caps.num_msix_vectors,
>-			 pf->hw.func_caps.num_msix_vectors_vf,
>-			 pf->hw.func_caps.fd_filters_guaranteed,
>-			 pf->hw.func_caps.fd_filters_best_effort,
>-			 pf->hw.func_caps.num_tx_qp,
>-			 pf->hw.func_caps.num_vsis);
>-
>+	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>+		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>+			dev_info(&pf->pdev->dev,
>+				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>+				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>+				 pf->hw.func_caps.num_msix_vectors,
>+				 pf->hw.func_caps.num_msix_vectors_vf,
>+				 pf->hw.func_caps.fd_filters_guaranteed,
>+				 pf->hw.func_caps.fd_filters_best_effort,
>+				 pf->hw.func_caps.num_tx_qp,
>+				 pf->hw.func_caps.num_vsis);
>+		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>+			dev_info(&pf->pdev->dev,
>+				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>+				 pf->hw.dev_caps.switch_mode,
>+				 pf->hw.dev_caps.valid_functions);
>+			dev_info(&pf->pdev->dev,
>+				 "SR-IOV=%d, num_vfs for all function=%u\n",
>+				 pf->hw.dev_caps.sr_iov_1_1,
>+				 pf->hw.dev_caps.num_vfs);
>+			dev_info(&pf->pdev->dev,
>+				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>+				 pf->hw.dev_caps.num_vsis,
>+				 pf->hw.dev_caps.num_rx_qp,
>+				 pf->hw.dev_caps.num_tx_qp);
>+		}
>+	}
>+	if (list_type == i40e_aqc_opc_list_func_capabilities) {
> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
> 		       + pf->hw.func_caps.num_vfs)
>-	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>-		dev_info(&pf->pdev->dev,
>-			 "got num_vsis %d, setting num_vsis to %d\n",
>-			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>-		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>+		if (pf->hw.revision_id == 0 &&
>+		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>+			dev_info(&pf->pdev->dev,
>+				 "got num_vsis %d, setting num_vsis to %d\n",
>+				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>+			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>+		}
> 	}
>-
> 	return 0;
> }
> 
>@@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
> 		if (!vsi) {
> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 			return;
> 		}
> 	}
>@@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
> }
> 
> /**
>+ * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>+ * @vsi: PF main vsi
>+ * @seid: seid of main or channel VSIs
>+ *
>+ * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>+ * existed before reset
>+ **/
>+static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>+{
>+	struct i40e_cloud_filter *cfilter;
>+	struct i40e_pf *pf = vsi->back;
>+	struct hlist_node *node;
>+	i40e_status ret;
>+
>+	/* Add cloud filters back if they exist */
>+	if (hlist_empty(&pf->cloud_filter_list))
>+		return 0;
>+
>+	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>+				  cloud_node) {
>+		if (cfilter->seid != seid)
>+			continue;
>+
>+		if (cfilter->dst_port)
>+			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>+								true);
>+		else
>+			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>+
>+		if (ret) {
>+			dev_dbg(&pf->pdev->dev,
>+				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>+				i40e_stat_str(&pf->hw, ret),
>+				i40e_aq_str(&pf->hw,
>+					    pf->hw.aq.asq_last_status));
>+			return ret;
>+		}
>+	}
>+	return 0;
>+}
>+
>+/**
>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>  * @vsi: PF main vsi
>  *
>@@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
> 						I40E_BW_CREDIT_DIVISOR,
> 				ch->seid);
> 		}
>+		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>+		if (ret) {
>+			dev_dbg(&vsi->back->pdev->dev,
>+				"Failed to rebuild cloud filters for channel VSI %u\n",
>+				ch->seid);
>+			return ret;
>+		}
> 	}
> 	return 0;
> }
>@@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
> 		i40e_verify_eeprom(pf);
> 
> 	i40e_clear_pxe_mode(hw);
>-	ret = i40e_get_capabilities(pf);
>+	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
> 	if (ret)
> 		goto end_core_reset;
> 
>@@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
> 			goto end_unlock;
> 	}
> 
>+	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>+	if (ret)
>+		goto end_unlock;
>+
> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
> 	 * for this main VSI if they exist
> 	 */
>@@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
> 	    (pf->num_fdsb_msix == 0)) {
> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	}
> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
> 	    (pf->num_vmdq_msix == 0)) {
>@@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
> 				       I40E_FLAG_FD_SB_ENABLED	|
> 				       I40E_FLAG_FD_ATR_ENABLED	|
> 				       I40E_FLAG_VMDQ_ENABLED);
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 
> 			/* rework the queue expectations without MSIX */
> 			i40e_determine_queue_usage(pf);
>@@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
> 		/* Enable filters and mark for reset */
> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
> 			need_reset = true;
>-		/* enable FD_SB only if there is MSI-X vector */
>-		if (pf->num_fdsb_msix > 0)
>+		/* enable FD_SB only if there is MSI-X vector and no cloud
>+		 * filters exist
>+		 */
>+		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>+		}
> 	} else {
> 		/* turn off filters, mark for reset and clear SW filter list */
> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>@@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
> 		}
> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>+
> 		/* reset fd counters */
> 		pf->fd_add_err = 0;
> 		pf->fd_atr_cnt = 0;
>@@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
> 		netdev->hw_features |= NETIF_F_NTUPLE;
> 	hw_features = hw_enc_features		|
> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>-		      NETIF_F_HW_VLAN_CTAG_RX;
>+		      NETIF_F_HW_VLAN_CTAG_RX	|
>+		      NETIF_F_HW_TC;
> 
> 	netdev->hw_features |= hw_features;
> 
>@@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
> 	*/
> 
> 	if ((pf->hw.pf_id == 0) &&
>-	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>+	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>+		pf->last_sw_conf_flags = flags;
>+	}
> 
> 	if (pf->hw.pf_id == 0) {
> 		u16 valid_flags;
>@@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
> 					     pf->hw.aq.asq_last_status));
> 			/* not a fatal problem, just keep going */
> 		}
>+		pf->last_sw_conf_valid_flags = valid_flags;
> 	}
> 
> 	/* first time setup */
>@@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			       I40E_FLAG_DCB_ENABLED	|
> 			       I40E_FLAG_SRIOV_ENABLED	|
> 			       I40E_FLAG_VMDQ_ENABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
> 				  I40E_FLAG_FD_SB_ENABLED |
> 				  I40E_FLAG_FD_ATR_ENABLED |
>@@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			       I40E_FLAG_FD_ATR_ENABLED	|
> 			       I40E_FLAG_DCB_ENABLED	|
> 			       I40E_FLAG_VMDQ_ENABLED);
>+		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 	} else {
> 		/* Not enough queues for all TCs */
> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>@@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
> 			queues_left -= 1; /* save 1 queue for FD */
> 		} else {
> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>+			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
> 		}
> 	}
>@@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
> 
> 	i40e_clear_pxe_mode(hw);
>-	err = i40e_get_capabilities(pf);
>+	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
> 	if (err)
> 		goto err_adminq_setup;
> 
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>index 92869f5..3bb6659 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>@@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
> 		struct i40e_asq_cmd_details *cmd_details);
> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
> 				   struct i40e_asq_cmd_details *cmd_details);
>+i40e_status
>+i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count);
>+enum i40e_status_code
>+i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count);
>+enum i40e_status_code
>+i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>+			  struct i40e_aqc_cloud_filters_element_data *filters,
>+			  u8 filter_count);
>+i40e_status
>+i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>+			     struct i40e_aqc_cloud_filters_element_bb *filters,
>+			     u8 filter_count);
> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
> 			       struct i40e_lldp_variables *lldp_cfg);
> /* i40e_common */
>diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>index c019f46..af38881 100644
>--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>@@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>+#define I40E_SWITCH_MODE_MASK		0xF
> 
> 	u32  management_mode;
> 	u32  mng_protocols_over_mctp;
>diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>index b8c78bf..4fe27f0 100644
>--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>@@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
> 		struct {
> 			u8 data[16];
> 		} v6;
>+		struct {
>+			__le16 data[8];
>+		} raw_v6;
> 	} ipaddr;
> 	__le16	flags;
> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH v3 2/7] sched: act_mirred: Traffic class option for mirror/redirect action
  2017-09-13 13:18     ` [Intel-wired-lan] " Jiri Pirko
@ 2017-09-14  7:58       ` Nambiar, Amritha
  -1 siblings, 0 replies; 30+ messages in thread
From: Nambiar, Amritha @ 2017-09-14  7:58 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev, mlxsw

On 9/13/2017 6:18 AM, Jiri Pirko wrote:
> Wed, Sep 13, 2017 at 11:59:24AM CEST, amritha.nambiar@intel.com wrote:
>> Adds optional traffic class parameter to the mirror/redirect action.
>> The mirror/redirect action is extended to forward to a traffic
>> class on the device if the traffic class index is provided in
>> addition to the device's ifindex.
> 
> Do I understand it correctly that you just abuse mirred to pass tcclass
> index down to the driver, without actually doing anything with the value
> inside mirred-code ? That is a bit confusing for me.
> 

I think I get your point; I was looking at it more from a hardware
angle, and the 'redirect' action looked quite close to how this actually
works in the hardware. I agree the tclass value in the mirred code is
not very useful beyond offloading it.

^ permalink raw reply	[flat|nested] 30+ messages in thread

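The 7/7 patch quoted in the following message includes a Geneve-specific fixup that shifts the cloud filter's tenant_id left by 8 bits before sending the admin queue command. As a minimal illustration (hypothetical `place_tenant_id` helper, not driver code; assumes the little-endian field layout that `cpu_to_le32` produces on a little-endian host), the shift simply moves the 24-bit VNI one byte further into the 4-byte field:

```python
import struct

def place_tenant_id(tenant_id: int, geneve: bool) -> bytes:
    """Mimic the driver's fixup: for Geneve the value is shifted left
    by 8 bits so the VNI lands one byte further into the LE field."""
    if geneve:
        tenant_id = (tenant_id << 8) & 0xFFFFFFFF
    # "<I" packs a 32-bit little-endian value, like cpu_to_le32()
    return struct.pack("<I", tenant_id)

vni = 0x123456  # a 24-bit VNI
# Non-Geneve tunnels: VNI occupies bytes 0..2 of the field
assert place_tenant_id(vni, geneve=False) == bytes([0x56, 0x34, 0x12, 0x00])
# Geneve: the same VNI occupies bytes 1..3 instead
assert place_tenant_id(vni, geneve=True) == bytes([0x00, 0x56, 0x34, 0x12])
```

This is only a byte-layout sketch of why the shift exists; the actual offset requirement comes from the firmware's cloud filter command format.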

* Re: [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
  2017-09-13 13:26     ` [Intel-wired-lan] " Jiri Pirko
@ 2017-09-14  8:00       ` Nambiar, Amritha
  -1 siblings, 0 replies; 30+ messages in thread
From: Nambiar, Amritha @ 2017-09-14  8:00 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev, mlxsw

On 9/13/2017 6:26 AM, Jiri Pirko wrote:
> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar@intel.com wrote:
>> This patch enables tc-flower based hardware offloads. tc flower
>> filter provided by the kernel is configured as driver specific
>> cloud filter. The patch implements functions and admin queue
>> commands needed to support cloud filters in the driver and
>> adds cloud filters to configure these tc-flower filters.
>>
>> The only action supported is to redirect packets to a traffic class
>> on the same device.
> 
> So basically you are not doing redirect, you are just setting tclass for
> matched packets, right? Why you use mirred for this? I think that
> you might consider extending g_act for that:
> 
> # tc filter add dev eth0 protocol ip ingress \
>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>   action tclass 0
> 
Yes, this doesn't work like a typical egress redirect, but is aimed at
forwarding the matched packets to a different queue-group/traffic class
on the same device, so it is a sort of ingress redirect in the hardware.
As you say, I may not need the mirred redirect; I'll look into doing
this with a new gact tc action.

> 
>>
>> # tc qdisc add dev eth0 ingress
>> # ethtool -K eth0 hw-tc-offload on
>>
>> # tc filter add dev eth0 protocol ip parent ffff:\
>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 0
>>
>> # tc filter add dev eth0 protocol ip parent ffff:\
>>  prio 2 flower dst_ip 192.168.3.5/32\
>>  ip_proto udp dst_port 25 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 1
>>
>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>  prio 3 flower dst_ip fe8::200:1\
>>  ip_proto udp dst_port 66 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 1
>>
>> Delete tc flower filter:
>> Example:
>>
>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>> # tc filter del dev eth0 parent ffff:
>>
>> Flow Director Sideband is disabled while configuring cloud filters
>> via tc-flower and until any cloud filter exists.
>>
>> Unsupported matches when cloud filters are added using enhanced
>> big buffer cloud filter mode of underlying switch include:
>> 1. source port and source IP
>> 2. Combined MAC address and IP fields.
>> 3. Not specifying L4 port
>>
>> These filter matches can however be used to redirect traffic to
>> the main VSI (tc 0) which does not require the enhanced big buffer
>> cloud filter support.
>>
>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>> __be32 array instead of u8 array. Used macro for IP version. Minor
>> formatting changes.
>> v2:
>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>> 2. Moved dev_info for add/deleting cloud filters in else condition
>> 3. Fixed some format specifier in dev_err logs
>> 4. Refactored i40e_get_capabilities to take an additional
>>   list_type parameter and use it to query device and function
>>   level capabilities.
>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>   to verify if redirect to a traffic class is supported.
>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>   function definitions.
>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>   updates, separate function to process tc cls flower offloads.
>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>   exclusive.
>>
>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>> ---
>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>> index 6018fb6..b110519 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>> @@ -55,6 +55,8 @@
>> #include <linux/net_tstamp.h>
>> #include <linux/ptp_clock_kernel.h>
>> #include <net/pkt_cls.h>
>> +#include <net/tc_act/tc_gact.h>
>> +#include <net/tc_act/tc_mirred.h>
>> #include "i40e_type.h"
>> #include "i40e_prototype.h"
>> #include "i40e_client.h"
>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>> 	u32 fd_id;
>> };
>>
>> +#define IPV4_VERSION 4
>> +#define IPV6_VERSION 6
>> +
>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>> +#define I40E_CLOUD_FIELD_IIP	0x10
>> +
>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>> +						 I40E_CLOUD_FIELD_IVLAN)
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>> +						 I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>> +						  I40E_CLOUD_FIELD_IMAC | \
>> +						  I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>> +						   I40E_CLOUD_FIELD_IVLAN | \
>> +						   I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>> +
>> struct i40e_cloud_filter {
>> 	struct hlist_node cloud_node;
>> 	unsigned long cookie;
>> +	/* cloud filter input set follows */
>> +	u8 dst_mac[ETH_ALEN];
>> +	u8 src_mac[ETH_ALEN];
>> +	__be16 vlan_id;
>> +	__be32 dst_ip;
>> +	__be32 src_ip;
>> +	__be32 dst_ipv6[4];
>> +	__be32 src_ipv6[4];
>> +	__be16 dst_port;
>> +	__be16 src_port;
>> +	u32 ip_version;
>> +	u8 ip_proto;	/* IPPROTO value */
>> +	/* L4 port type: src or destination port */
>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>> +	u8 port_type;
>> +	u32 tenant_id;
>> +	u8 flags;
>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>> +	u8 tunnel_type;
>> 	u16 seid;	/* filter control */
>> };
>>
>> @@ -491,6 +536,8 @@ struct i40e_pf {
>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>
>> 	struct i40e_client_instance *cinst;
>> 	bool stat_offsets_loaded;
>> @@ -573,6 +620,8 @@ struct i40e_pf {
>> 	u16 phy_led_val;
>>
>> 	u16 override_q_count;
>> +	u16 last_sw_conf_flags;
>> +	u16 last_sw_conf_valid_flags;
>> };
>>
>> /**
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> index 2e567c2..feb3d42 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>> 		struct {
>> 			u8 data[16];
>> 		} v6;
>> +		struct {
>> +			__le16 data[8];
>> +		} raw_v6;
>> 	} ipaddr;
>> 	__le16	flags;
>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>> index 9567702..d9c9665 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>
>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>> 				   track_id, &offset, &info, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_add_cloud_filters
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to add cloud filters from
>> + * @filters: Buffer which contains the filters to be added
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Set the cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>> + * of the function.
>> + *
>> + **/
>> +enum i40e_status_code
>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	enum i40e_status_code status;
>> +	u16 buff_len;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_add_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_add_cloud_filters_bb
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to add cloud filters from
>> + * @filters: Buffer which contains the filters in big buffer to be added
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>> + * function.
>> + *
>> + **/
>> +i40e_status
>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	i40e_status status;
>> +	u16 buff_len;
>> +	int i;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_add_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>> +
>> +	for (i = 0; i < filter_count; i++) {
>> +		u16 tnl_type;
>> +		u32 ti;
>> +
>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>> +
>> +		/* For Geneve, the VNI must be placed at an offset one byte
>> +		 * beyond the Tenant ID offset used for the other tunnel
>> +		 * types, hence the shift left by 8 bits.
>> +		 */
>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>> +		}
>> +	}
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_rem_cloud_filters
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to remove cloud filters from
>> + * @filters: Buffer which contains the filters to be removed
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Remove the cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>> + * of the function.
>> + *
>> + **/
>> +enum i40e_status_code
>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	enum i40e_status_code status;
>> +	u16 buff_len;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_remove_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_rem_cloud_filters_bb
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to remove cloud filters from
>> + * @filters: Buffer which contains the filters in big buffer to be removed
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>> + * function.
>> + *
>> + **/
>> +i40e_status
>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	i40e_status status;
>> +	u16 buff_len;
>> +	int i;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_remove_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>> +
>> +	for (i = 0; i < filter_count; i++) {
>> +		u16 tnl_type;
>> +		u32 ti;
>> +
>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>> +
>> +		/* For Geneve, the VNI must be placed at an offset one byte
>> +		 * beyond the Tenant ID offset used for the other tunnel
>> +		 * types, hence the shift left by 8 bits.
>> +		 */
>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>> +		}
>> +	}
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> 	return status;
>> }
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>> index afcf08a..96ee608 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>> +				     struct i40e_cloud_filter *filter,
>> +				     bool add);
>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>> +					     struct i40e_cloud_filter *filter,
>> +					     bool add);
>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>> +				 enum i40e_admin_queue_opc list_type);
>> +
>>
>> /* i40e_pci_tbl - PCI Device ID Table
>>  *
>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>  **/
>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>> {
>> +	enum i40e_admin_queue_err last_aq_status;
>> +	struct i40e_cloud_filter *cfilter;
>> 	struct i40e_channel *ch, *ch_tmp;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct hlist_node *node;
>> 	int ret, i;
>>
>> 	/* Reset rss size that was stored when reconfiguring rss for
>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>> 				 "Failed to reset tx rate for ch->seid %u\n",
>> 				 ch->seid);
>>
>> +		/* delete cloud filters associated with this channel */
>> +		hlist_for_each_entry_safe(cfilter, node,
>> +					  &pf->cloud_filter_list, cloud_node) {
>> +			if (cfilter->seid != ch->seid)
>> +				continue;
>> +
>> +			hash_del(&cfilter->cloud_node);
>> +			if (cfilter->dst_port)
>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>> +									cfilter,
>> +									false);
>> +			else
>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>> +								false);
>> +			last_aq_status = pf->hw.aq.asq_last_status;
>> +			if (ret)
>> +				dev_info(&pf->pdev->dev,
>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>> +					 i40e_stat_str(&pf->hw, ret),
>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>> +			kfree(cfilter);
>> +		}
>> +
>> 		/* delete VSI from FW */
>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>> 					     NULL);
>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>> }
>>
>> /**
>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>> + * @vsi: ptr to VSI which has PF backing
>> + * @l4type: true for TCP and false for UDP
>> + * @port_type: true if port is destination and false if port is source
>> + *
>> + * Sets up the switch mode correctly if it needs to be changed,
>> + * performing only the mode changes that are allowed.
>> + **/
>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>> +					     bool port_type)
>> +{
>> +	u8 mode;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct i40e_hw *hw = &pf->hw;
>> +	int ret;
>> +
>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>> +	if (ret)
>> +		return -EINVAL;
>> +
>> +	if (hw->dev_caps.switch_mode) {
>> +		/* if switch mode is set, support mode2 (non-tunneled for
>> +		 * cloud filter) for now
>> +		 */
>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>> +							I40E_SWITCH_MODE_MASK;
>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>> +				return 0;
>> +			dev_err(&pf->pdev->dev,
>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>> +				hw->dev_caps.switch_mode);
>> +			return -EINVAL;
>> +		}
>> +	}
>> +
>> +	/* port_type: true for destination port and false for source port
>> +	 * For now, supports only destination port type
>> +	 */
>> +	if (!port_type) {
>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Set Bit 7 to be valid */
>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>> +
>> +	/* Set L4type to both TCP and UDP support */
>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>> +
>> +	/* Set cloud filter mode */
>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>> +
>> +	/* Prep mode field for set_switch_config */
>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>> +					pf->last_sw_conf_valid_flags,
>> +					mode, NULL);
>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>> +		dev_err(&pf->pdev->dev,
>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>> +			i40e_stat_str(hw, ret),
>> +			i40e_aq_str(hw,
>> +				    hw->aq.asq_last_status));
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>>  * i40e_create_queue_channel - function to create channel
>>  * @vsi: VSI to be configured
>>  * @ch: ptr to channel (it contains channel specific params)
>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>> 	return ret;
>> }
>>
>> +/**
>> + * i40e_set_cld_element - sets cloud filter element data
>> + * @filter: cloud filter rule
>> + * @cld: ptr to cloud filter element data
>> + *
>> + * This is helper function to copy data into cloud filter element
>> + **/
>> +static inline void
>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>> +{
>> +	int i, j;
>> +	u32 ipa;
>> +
>> +	memset(cld, 0, sizeof(*cld));
>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>> +
>> +	if (filter->ip_version == IPV6_VERSION) {
>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>> +			ipa = cpu_to_le32(ipa);
>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>> +		}
>> +	} else {
>> +		ipa = be32_to_cpu(filter->dst_ip);
>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>> +	}
>> +
>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>> +
>> +	/* tenant_id is not supported by FW now, once the support is enabled
>> +	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>> +	 */
>> +	if (filter->tenant_id)
>> +		return;
>> +}
>> +
>> +/**
>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>> + * @vsi: pointer to VSI
>> + * @filter: cloud filter rule
>> + * @add: if true, add, if false, delete
>> + *
>> + * Add or delete a cloud filter for a specific flow spec.
>> + * Returns 0 if the filter was successfully added or deleted.
>> + **/
>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>> +				     struct i40e_cloud_filter *filter, bool add)
>> +{
>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int ret;
>> +	static const u16 flag_table[128] = {
>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>> +	};
>> +
>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>> +		return I40E_ERR_CONFIG;
>> +
>> +	/* copy element needed to add cloud filter from filter */
>> +	i40e_set_cld_element(filter, &cld_filter);
>> +
>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>> +
>> +	if (filter->ip_version == IPV6_VERSION)
>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>> +	else
>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>> +
>> +	if (add)
>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>> +						&cld_filter, 1);
>> +	else
>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>> +						&cld_filter, 1);
>> +	if (ret)
>> +		dev_dbg(&pf->pdev->dev,
>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>> +			add ? "add" : "delete", filter->dst_port, ret,
>> +			pf->hw.aq.asq_last_status);
>> +	else
>> +		dev_info(&pf->pdev->dev,
>> +			 "%s cloud filter for VSI: %d\n",
>> +			 add ? "Added" : "Deleted", filter->seid);
>> +	return ret;
>> +}
>> +
>> +/**
>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>> + * @vsi: pointer to VSI
>> + * @filter: cloud filter rule
>> + * @add: if true, add, if false, delete
>> + *
>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>> + * Returns 0 if the filter was successfully added or deleted.
>> + **/
>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>> +					     struct i40e_cloud_filter *filter,
>> +					     bool add)
>> +{
>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int ret;
>> +
>> +	/* Both (Outer/Inner) valid mac_addr are not supported */
>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>> +	    is_valid_ether_addr(filter->src_mac))
>> +		return -EINVAL;
>> +
>> +	/* A channel-specific cloud filter requires a non-zero L4
>> +	 * destination port, so bail out if none was specified.
>> +	 */
>> +	if (!filter->dst_port)
>> +		return -EINVAL;
>> +
>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>> +	if (filter->src_port || filter->src_ip ||
>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>> +		return -EINVAL;
>> +
>> +	/* copy element needed to add cloud filter from filter */
>> +	i40e_set_cld_element(filter, &cld_filter.element);
>> +
>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>> +	    is_valid_ether_addr(filter->src_mac) ||
>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>> +	    is_multicast_ether_addr(filter->src_mac)) {
>> +		/* MAC + IP : unsupported mode */
>> +		if (filter->dst_ip)
>> +			return -EINVAL;
>> +
>> +		/* since we validated that L4 port must be valid before
>> +		 * we get here, start with respective "flags" value
>> +		 * and update if vlan is present or not
>> +		 */
>> +		cld_filter.element.flags =
>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>> +
>> +		if (filter->vlan_id) {
>> +			cld_filter.element.flags =
>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>> +		}
>> +
>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>> +		cld_filter.element.flags =
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>> +		if (filter->ip_version == IPV6_VERSION)
>> +			cld_filter.element.flags |=
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>> +		else
>> +			cld_filter.element.flags |=
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>> +	} else {
>> +		dev_err(&pf->pdev->dev,
>> +			"either mac or ip has to be valid for cloud filter\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>> +						be16_to_cpu(filter->dst_port);
>> +
>> +	if (add) {
>> +		bool proto_type, port_type;
>> +
>> +		proto_type = (filter->ip_proto == IPPROTO_TCP);
>> +		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
>> +
>> +		/* For now, src port based cloud filter for channel is not
>> +		 * supported
>> +		 */
>> +		if (!port_type) {
>> +			dev_err(&pf->pdev->dev,
>> +				"unsupported port type (src port)\n");
>> +			return -EOPNOTSUPP;
>> +		}
>> +
>> +		/* Validate current device switch mode, change if necessary */
>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>> +							port_type);
>> +		if (ret) {
>> +			dev_err(&pf->pdev->dev,
>> +				"failed to set switch mode, ret %d\n",
>> +				ret);
>> +			return ret;
>> +		}
>> +
>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>> +						   &cld_filter, 1);
>> +	} else {
>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>> +						   &cld_filter, 1);
>> +	}
>> +
>> +	if (ret)
>> +		dev_dbg(&pf->pdev->dev,
>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>> +	else
>> +		dev_info(&pf->pdev->dev,
>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>> +			 add ? "Added" : "Deleted", filter->seid,
>> +			 ntohs(filter->dst_port));
>> +	return ret;
>> +}
>> +
>> +/**
>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>> + * @vsi: Pointer to VSI
>> + * @f: Pointer to struct tc_cls_flower_offload
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>> +				 struct tc_cls_flower_offload *f,
>> +				 struct i40e_cloud_filter *filter)
>> +{
>> +	struct i40e_pf *pf = vsi->back;
>> +	u16 addr_type = 0;
>> +	u8 field_flags = 0;
>> +
>> +	if (f->dissector->used_keys &
>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>> +			f->dissector->used_keys);
>> +		return -EOPNOTSUPP;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>> +		struct flow_dissector_key_keyid *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>> +						  f->key);
>> +
>> +		struct flow_dissector_key_keyid *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>> +						  f->mask);
>> +
>> +		if (mask->keyid != 0)
>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>> +
>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>> +		struct flow_dissector_key_basic *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_BASIC,
>> +						  f->key);
>> +
>> +		filter->ip_proto = key->ip_proto;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>> +		struct flow_dissector_key_eth_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>> +						  f->key);
>> +
>> +		struct flow_dissector_key_eth_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>> +						  f->mask);
>> +
>> +		/* use is_broadcast and is_zero to check for all 0xff or all 0 */
>> +		if (!is_zero_ether_addr(mask->dst)) {
>> +			if (is_broadcast_ether_addr(mask->dst)) {
>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>> +					mask->dst);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (!is_zero_ether_addr(mask->src)) {
>> +			if (is_broadcast_ether_addr(mask->src)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>> +					mask->src);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +		ether_addr_copy(filter->dst_mac, key->dst);
>> +		ether_addr_copy(filter->src_mac, key->src);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>> +		struct flow_dissector_key_vlan *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_VLAN,
>> +						  f->key);
>> +		struct flow_dissector_key_vlan *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_VLAN,
>> +						  f->mask);
>> +
>> +		if (mask->vlan_id) {
>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>> +
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>> +					mask->vlan_id);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>> +		struct flow_dissector_key_control *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>> +						  f->key);
>> +
>> +		addr_type = key->addr_type;
>> +	}
>> +
>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>> +		struct flow_dissector_key_ipv4_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>> +						  f->key);
>> +		struct flow_dissector_key_ipv4_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>> +						  f->mask);
>> +
>> +		if (mask->dst) {
>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>> +					be32_to_cpu(mask->dst));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (mask->src) {
>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>> +					be32_to_cpu(mask->src));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>> +			return I40E_ERR_CONFIG;
>> +		}
>> +		filter->dst_ip = key->dst;
>> +		filter->src_ip = key->src;
>> +		filter->ip_version = IPV4_VERSION;
>> +	}
>> +
>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>> +		struct flow_dissector_key_ipv6_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>> +						  f->key);
>> +		struct flow_dissector_key_ipv6_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>> +						  f->mask);
>> +
>> +		/* src and dest IPV6 address should not be LOOPBACK
>> +		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
>> +		 */
>> +		if (ipv6_addr_loopback(&key->dst) ||
>> +		    ipv6_addr_loopback(&key->src)) {
>> +			dev_err(&pf->pdev->dev,
>> +				"Bad ipv6, addr is LOOPBACK\n");
>> +			return I40E_ERR_CONFIG;
>> +		}
>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>> +
>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>> +		       sizeof(filter->src_ipv6));
>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>> +		       sizeof(filter->dst_ipv6));
>> +
>> +		/* mark it as IPv6 filter, to be used later */
>> +		filter->ip_version = IPV6_VERSION;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>> +		struct flow_dissector_key_ports *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_PORTS,
>> +						  f->key);
>> +		struct flow_dissector_key_ports *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_PORTS,
>> +						  f->mask);
>> +
>> +		if (mask->src) {
>> +			if (mask->src == cpu_to_be16(0xffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>> +					be16_to_cpu(mask->src));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (mask->dst) {
>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>> +					be16_to_cpu(mask->dst));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		filter->dst_port = key->dst;
>> +		filter->src_port = key->src;
>> +
>> +		/* For now, only the destination port is supported */
>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>> +
>> +		switch (filter->ip_proto) {
>> +		case IPPROTO_TCP:
>> +		case IPPROTO_UDP:
>> +			break;
>> +		default:
>> +			dev_err(&pf->pdev->dev,
>> +				"Only UDP and TCP transport protocols are supported\n");
>> +			return -EINVAL;
>> +		}
>> +	}
>> +	filter->flags = field_flags;
>> +	return 0;
>> +}
>> +
>> +/**
>> + * i40e_handle_redirect_action - Forward to a traffic class on the device
>> + * @vsi: Pointer to VSI
>> + * @ifindex: ifindex of the device to forward to
>> + * @tc: traffic class index on the device
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>> +				       struct i40e_cloud_filter *filter)
>> +{
>> +	struct i40e_channel *ch, *ch_tmp;
>> +
>> +	/* redirect to a traffic class on the same device */
>> +	if (vsi->netdev->ifindex == ifindex) {
>> +		if (tc == 0) {
>> +			filter->seid = vsi->seid;
>> +			return 0;
>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>> +			if (!filter->dst_port) {
>> +				dev_err(&vsi->back->pdev->dev,
>> +					"Specify a destination port to redirect to a non-default traffic class\n");
>> +				return -EINVAL;
>> +			}
>> +			if (list_empty(&vsi->ch_list))
>> +				return -EINVAL;
>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>> +						 list) {
>> +				if (ch->seid == vsi->tc_seid_map[tc])
>> +					filter->seid = ch->seid;
>> +			}
>> +			return 0;
>> +		}
>> +	}
>> +	return -EINVAL;
>> +}
>> +
>> +/**
>> + * i40e_parse_tc_actions - Parse tc actions
>> + * @vsi: Pointer to VSI
>> + * @exts: Pointer to struct tcf_exts holding the filter's actions
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>> +				 struct i40e_cloud_filter *filter)
>> +{
>> +	const struct tc_action *a;
>> +	LIST_HEAD(actions);
>> +	int err;
>> +
>> +	if (!tcf_exts_has_actions(exts))
>> +		return -EINVAL;
>> +
>> +	tcf_exts_to_list(exts, &actions);
>> +	list_for_each_entry(a, &actions, list) {
>> +		/* Drop action */
>> +		if (is_tcf_gact_shot(a)) {
>> +			dev_err(&vsi->back->pdev->dev,
>> +				"Cloud filters do not support the drop action.\n");
>> +			return -EOPNOTSUPP;
>> +		}
>> +
>> +		/* Redirect to a traffic class on the same device */
>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>> +			int ifindex = tcf_mirred_ifindex(a);
>> +			u8 tc = tcf_mirred_tc(a);
>> +
>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>> +							  filter);
>> +			if (err == 0)
>> +				return err;
>> +		}
>> +	}
>> +	return -EINVAL;
>> +}
>> +
>> +/**
>> + * i40e_configure_clsflower - Configure tc flower filters
>> + * @vsi: Pointer to VSI
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + *
>> + **/
>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>> +				    struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int err = 0;
>> +
>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>> +		return -EBUSY;
>> +
>> +	if (pf->fdir_pf_active_filters ||
>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>> +		dev_err(&vsi->back->pdev->dev,
>> +			"Flow Director Sideband filters exist; turn ntuple off to configure cloud filters\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>> +		dev_err(&vsi->back->pdev->dev,
>> +			"Disabling Flow Director Sideband to configure cloud filters via tc-flower\n");
>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +	}
>> +
>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>> +	if (!filter)
>> +		return -ENOMEM;
>> +
>> +	filter->cookie = cls_flower->cookie;
>> +
>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>> +	if (err < 0)
>> +		goto err;
>> +
>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>> +	if (err < 0)
>> +		goto err;
>> +
>> +	/* Add cloud filter */
>> +	if (filter->dst_port)
>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>> +	else
>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>> +
>> +	if (err) {
>> +		dev_err(&pf->pdev->dev,
>> +			"Failed to add cloud filter, err %s\n",
>> +			i40e_stat_str(&pf->hw, err));
>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>> +		goto err;
>> +	}
>> +
>> +	/* add filter to the ordered list */
>> +	INIT_HLIST_NODE(&filter->cloud_node);
>> +
>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>> +
>> +	pf->num_cloud_filters++;
>> +
>> +	return err;
>> +err:
>> +	kfree(filter);
>> +	return err;
>> +}
>> +
>> +/**
>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>> + * @vsi: Pointer to VSI
>> + * @cookie: filter specific cookie
>> + *
>> + **/
>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>> +							unsigned long *cookie)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct hlist_node *node2;
>> +
>> +	hlist_for_each_entry_safe(filter, node2,
>> +				  &vsi->back->cloud_filter_list, cloud_node)
>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>> +			return filter;
>> +	return NULL;
>> +}
>> +
>> +/**
>> + * i40e_delete_clsflower - Remove tc flower filters
>> + * @vsi: Pointer to VSI
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + *
>> + **/
>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>> +				 struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int err = 0;
>> +
>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>> +
>> +	if (!filter)
>> +		return -EINVAL;
>> +
>> +	hash_del(&filter->cloud_node);
>> +
>> +	if (filter->dst_port)
>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>> +	else
>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>> +	if (err) {
>> +		kfree(filter);
>> +		dev_err(&pf->pdev->dev,
>> +			"Failed to delete cloud filter, err %s\n",
>> +			i40e_stat_str(&pf->hw, err));
>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>> +	}
>> +
>> +	kfree(filter);
>> +	pf->num_cloud_filters--;
>> +
>> +	if (!pf->num_cloud_filters)
>> +		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>> +		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>> +			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +		}
>> +	return 0;
>> +}
>> +
>> +/**
>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>> + * @netdev: net device to configure
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + **/
>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>> +				    struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>> +	struct i40e_vsi *vsi = np->vsi;
>> +
>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>> +	    cls_flower->common.chain_index)
>> +		return -EOPNOTSUPP;
>> +
>> +	switch (cls_flower->command) {
>> +	case TC_CLSFLOWER_REPLACE:
>> +		return i40e_configure_clsflower(vsi, cls_flower);
>> +	case TC_CLSFLOWER_DESTROY:
>> +		return i40e_delete_clsflower(vsi, cls_flower);
>> +	case TC_CLSFLOWER_STATS:
>> +		return -EOPNOTSUPP;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +}
>> +
>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>> 			   void *type_data)
>> {
>> -	if (type != TC_SETUP_MQPRIO)
>> +	switch (type) {
>> +	case TC_SETUP_MQPRIO:
>> +		return i40e_setup_tc(netdev, type_data);
>> +	case TC_SETUP_CLSFLOWER:
>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>> +	default:
>> 		return -EOPNOTSUPP;
>> -
>> -	return i40e_setup_tc(netdev, type_data);
>> +	}
>> }
>>
>> /**
>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>> 		kfree(cfilter);
>> 	}
>> 	pf->num_cloud_filters = 0;
>> +
>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +	}
>> }
>>
>> /**
>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>  * i40e_get_capabilities - get info about the HW
>>  * @pf: the PF struct
>>  **/
>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>> +				 enum i40e_admin_queue_opc list_type)
>> {
>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>> 	u16 data_size;
>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>
>> 		/* this loads the data into the hw struct for us */
>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>> -					    &data_size,
>> -					    i40e_aqc_opc_list_func_capabilities,
>> -					    NULL);
>> +						    &data_size, list_type,
>> +						    NULL);
>> 		/* data loaded, buffer no longer needed */
>> 		kfree(cap_buf);
>>
>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>> 		}
>> 	} while (err);
>>
>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>> -		dev_info(&pf->pdev->dev,
>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>> -			 pf->hw.func_caps.num_msix_vectors,
>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>> -			 pf->hw.func_caps.fd_filters_best_effort,
>> -			 pf->hw.func_caps.num_tx_qp,
>> -			 pf->hw.func_caps.num_vsis);
>> -
>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>> +				 pf->hw.func_caps.num_msix_vectors,
>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>> +				 pf->hw.func_caps.fd_filters_best_effort,
>> +				 pf->hw.func_caps.num_tx_qp,
>> +				 pf->hw.func_caps.num_vsis);
>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>> +				 pf->hw.dev_caps.switch_mode,
>> +				 pf->hw.dev_caps.valid_functions);
>> +			dev_info(&pf->pdev->dev,
>> +				 "SR-IOV=%d, num_vfs for all functions=%u\n",
>> +				 pf->hw.dev_caps.sr_iov_1_1,
>> +				 pf->hw.dev_caps.num_vfs);
>> +			dev_info(&pf->pdev->dev,
>> +				 "num_vsis=%u, num_rx=%u, num_tx=%u\n",
>> +				 pf->hw.dev_caps.num_vsis,
>> +				 pf->hw.dev_caps.num_rx_qp,
>> +				 pf->hw.dev_caps.num_tx_qp);
>> +		}
>> +	}
>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>> 		       + pf->hw.func_caps.num_vfs)
>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>> -		dev_info(&pf->pdev->dev,
>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>> +		if (pf->hw.revision_id == 0 &&
>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>> +		}
>> 	}
>> -
>> 	return 0;
>> }
>>
>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>> 		if (!vsi) {
>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 			return;
>> 		}
>> 	}
>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>> }
>>
>> /**
>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>> + * @vsi: PF main vsi
>> + * @seid: seid of main or channel VSIs
>> + *
>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>> + * existed before reset
>> + **/
>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>> +{
>> +	struct i40e_cloud_filter *cfilter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct hlist_node *node;
>> +	i40e_status ret;
>> +
>> +	/* Add cloud filters back if they exist */
>> +	if (hlist_empty(&pf->cloud_filter_list))
>> +		return 0;
>> +
>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>> +				  cloud_node) {
>> +		if (cfilter->seid != seid)
>> +			continue;
>> +
>> +		if (cfilter->dst_port)
>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>> +								true);
>> +		else
>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>> +
>> +		if (ret) {
>> +			dev_dbg(&pf->pdev->dev,
>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>> +				i40e_stat_str(&pf->hw, ret),
>> +				i40e_aq_str(&pf->hw,
>> +					    pf->hw.aq.asq_last_status));
>> +			return ret;
>> +		}
>> +	}
>> +	return 0;
>> +}
>> +
>> +/**
>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>  * @vsi: PF main vsi
>>  *
>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>> 						I40E_BW_CREDIT_DIVISOR,
>> 				ch->seid);
>> 		}
>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>> +		if (ret) {
>> +			dev_dbg(&vsi->back->pdev->dev,
>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>> +				ch->seid);
>> +			return ret;
>> +		}
>> 	}
>> 	return 0;
>> }
>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>> 		i40e_verify_eeprom(pf);
>>
>> 	i40e_clear_pxe_mode(hw);
>> -	ret = i40e_get_capabilities(pf);
>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>> 	if (ret)
>> 		goto end_core_reset;
>>
>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>> 			goto end_unlock;
>> 	}
>>
>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>> +	if (ret)
>> +		goto end_unlock;
>> +
>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>> 	 * for this main VSI if they exist
>> 	 */
>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>> 	    (pf->num_fdsb_msix == 0)) {
>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	}
>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>> 	    (pf->num_vmdq_msix == 0)) {
>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>> 				       I40E_FLAG_FD_SB_ENABLED	|
>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>> 				       I40E_FLAG_VMDQ_ENABLED);
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>
>> 			/* rework the queue expectations without MSIX */
>> 			i40e_determine_queue_usage(pf);
>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>> 		/* Enable filters and mark for reset */
>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>> 			need_reset = true;
>> -		/* enable FD_SB only if there is MSI-X vector */
>> -		if (pf->num_fdsb_msix > 0)
>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>> +		 * filters exist
>> +		 */
>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +		}
>> 	} else {
>> 		/* turn off filters, mark for reset and clear SW filter list */
>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>> 		}
>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> +
>> 		/* reset fd counters */
>> 		pf->fd_add_err = 0;
>> 		pf->fd_atr_cnt = 0;
>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>> 	hw_features = hw_enc_features		|
>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>> +		      NETIF_F_HW_TC;
>>
>> 	netdev->hw_features |= hw_features;
>>
>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>> 	*/
>>
>> 	if ((pf->hw.pf_id == 0) &&
>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>> +		pf->last_sw_conf_flags = flags;
>> +	}
>>
>> 	if (pf->hw.pf_id == 0) {
>> 		u16 valid_flags;
>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>> 					     pf->hw.aq.asq_last_status));
>> 			/* not a fatal problem, just keep going */
>> 		}
>> +		pf->last_sw_conf_valid_flags = valid_flags;
>> 	}
>>
>> 	/* first time setup */
>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			       I40E_FLAG_DCB_ENABLED	|
>> 			       I40E_FLAG_SRIOV_ENABLED	|
>> 			       I40E_FLAG_VMDQ_ENABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>> 				  I40E_FLAG_FD_SB_ENABLED |
>> 				  I40E_FLAG_FD_ATR_ENABLED |
>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>> 			       I40E_FLAG_DCB_ENABLED	|
>> 			       I40E_FLAG_VMDQ_ENABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	} else {
>> 		/* Not enough queues for all TCs */
>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			queues_left -= 1; /* save 1 queue for FD */
>> 		} else {
>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>> 		}
>> 	}
>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>
>> 	i40e_clear_pxe_mode(hw);
>> -	err = i40e_get_capabilities(pf);
>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>> 	if (err)
>> 		goto err_adminq_setup;
>>
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> index 92869f5..3bb6659 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>> 		struct i40e_asq_cmd_details *cmd_details);
>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>> 				   struct i40e_asq_cmd_details *cmd_details);
>> +i40e_status
>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count);
>> +enum i40e_status_code
>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count);
>> +enum i40e_status_code
>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count);
>> +i40e_status
>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count);
>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>> 			       struct i40e_lldp_variables *lldp_cfg);
>> /* i40e_common */
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>> index c019f46..af38881 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>> +#define I40E_SWITCH_MODE_MASK		0xF
>>
>> 	u32  management_mode;
>> 	u32  mng_protocols_over_mctp;
>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> index b8c78bf..4fe27f0 100644
>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>> 		struct {
>> 			u8 data[16];
>> 		} v6;
>> +		struct {
>> +			__le16 data[8];
>> +		} raw_v6;
>> 	} ipaddr;
>> 	__le16	flags;
>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [Intel-wired-lan] [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
@ 2017-09-14  8:00       ` Nambiar, Amritha
  0 siblings, 0 replies; 30+ messages in thread
From: Nambiar, Amritha @ 2017-09-14  8:00 UTC (permalink / raw)
  To: intel-wired-lan

On 9/13/2017 6:26 AM, Jiri Pirko wrote:
> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar at intel.com wrote:
>> This patch enables tc-flower based hardware offloads. A tc flower
>> filter provided by the kernel is configured as a driver-specific
>> cloud filter. The patch implements the functions and admin queue
>> commands needed to support cloud filters in the driver and adds
>> cloud filters to configure these tc-flower filters.
>>
>> The only action supported is to redirect packets to a traffic class
>> on the same device.
> 
> So basically you are not doing redirect, you are just setting tclass for
> matched packets, right? Why do you use mirred for this? I think that
> you might consider extending g_act for that:
> 
> # tc filter add dev eth0 protocol ip ingress \
>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>   action tclass 0
> 
Yes, this doesn't work like a typical egress redirect; it is aimed at
forwarding the matched packets to a different queue-group/traffic class
on the same device, so it is a sort of ingress redirect in the hardware.
As you say, I possibly don't need the mirred redirect; I'll look into
the g_act way of doing this with a new gact tc action.

> 
>>
>> # tc qdisc add dev eth0 ingress
>> # ethtool -K eth0 hw-tc-offload on
>>
>> # tc filter add dev eth0 protocol ip parent ffff:\
>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 0
>>
>> # tc filter add dev eth0 protocol ip parent ffff:\
>>  prio 2 flower dst_ip 192.168.3.5/32\
>>  ip_proto udp dst_port 25 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 1
>>
>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>  prio 3 flower dst_ip fe8::200:1\
>>  ip_proto udp dst_port 66 skip_sw\
>>  action mirred ingress redirect dev eth0 tclass 1
>>
>> Delete tc flower filter:
>> Example:
>>
>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>> # tc filter del dev eth0 parent ffff:
>>
>> Flow Director Sideband is disabled while cloud filters are configured
>> via tc-flower and remains disabled as long as any cloud filter exists.
>>
>> Unsupported matches when cloud filters are added using the enhanced
>> big buffer cloud filter mode of the underlying switch include:
>> 1. Source port and source IP
>> 2. Combined MAC address and IP fields
>> 3. Not specifying an L4 port
>>
>> These filter matches can, however, be used to redirect traffic to
>> the main VSI (tc 0), which does not require the enhanced big buffer
>> cloud filter support.
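As an illustration of the fallback described above: a filter matching on source IP (rejected by the big buffer mode) can still redirect to tc 0 on the main VSI. A sketch using the same syntax as the examples in this patch; the interface name and address are illustrative:

```shell
# Match on src_ip (unsupported in big buffer mode) but redirect to
# tc 0 on the main VSI, which takes the basic cloud filter path:
tc filter add dev eth0 protocol ip parent ffff: \
  prio 4 flower src_ip 192.168.3.6/32 skip_sw \
  action mirred ingress redirect dev eth0 tclass 0
```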
>>
>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>> __be32 array instead of u8 array. Used macro for IP version. Minor
>> formatting changes.
>> v2:
>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>> 2. Moved dev_info for add/deleting cloud filters in else condition
>> 3. Fixed some format specifier in dev_err logs
>> 4. Refactored i40e_get_capabilities to take an additional
>>   list_type parameter and use it to query device and function
>>   level capabilities.
>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>   to verify if redirect to a traffic class is supported.
>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>   function definitions.
>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>   updates, separate function to process tc cls flower offloads.
>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>   exclusive.
>>
>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>> ---
>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>> index 6018fb6..b110519 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>> @@ -55,6 +55,8 @@
>> #include <linux/net_tstamp.h>
>> #include <linux/ptp_clock_kernel.h>
>> #include <net/pkt_cls.h>
>> +#include <net/tc_act/tc_gact.h>
>> +#include <net/tc_act/tc_mirred.h>
>> #include "i40e_type.h"
>> #include "i40e_prototype.h"
>> #include "i40e_client.h"
>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>> 	u32 fd_id;
>> };
>>
>> +#define IPV4_VERSION 4
>> +#define IPV6_VERSION 6
>> +
>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>> +#define I40E_CLOUD_FIELD_IIP	0x10
>> +
>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>> +						 I40E_CLOUD_FIELD_IVLAN)
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>> +						 I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>> +						  I40E_CLOUD_FIELD_IMAC | \
>> +						  I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>> +						   I40E_CLOUD_FIELD_IVLAN | \
>> +						   I40E_CLOUD_FIELD_TEN_ID)
>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>> +
>> struct i40e_cloud_filter {
>> 	struct hlist_node cloud_node;
>> 	unsigned long cookie;
>> +	/* cloud filter input set follows */
>> +	u8 dst_mac[ETH_ALEN];
>> +	u8 src_mac[ETH_ALEN];
>> +	__be16 vlan_id;
>> +	__be32 dst_ip;
>> +	__be32 src_ip;
>> +	__be32 dst_ipv6[4];
>> +	__be32 src_ipv6[4];
>> +	__be16 dst_port;
>> +	__be16 src_port;
>> +	u32 ip_version;
>> +	u8 ip_proto;	/* IPPROTO value */
>> +	/* L4 port type: src or destination port */
>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>> +	u8 port_type;
>> +	u32 tenant_id;
>> +	u8 flags;
>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>> +	u8 tunnel_type;
>> 	u16 seid;	/* filter control */
>> };
>>
>> @@ -491,6 +536,8 @@ struct i40e_pf {
>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>
>> 	struct i40e_client_instance *cinst;
>> 	bool stat_offsets_loaded;
>> @@ -573,6 +620,8 @@ struct i40e_pf {
>> 	u16 phy_led_val;
>>
>> 	u16 override_q_count;
>> +	u16 last_sw_conf_flags;
>> +	u16 last_sw_conf_valid_flags;
>> };
>>
>> /**
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> index 2e567c2..feb3d42 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>> 		struct {
>> 			u8 data[16];
>> 		} v6;
>> +		struct {
>> +			__le16 data[8];
>> +		} raw_v6;
>> 	} ipaddr;
>> 	__le16	flags;
>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>> index 9567702..d9c9665 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>
>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>> 				   track_id, &offset, &info, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_add_cloud_filters
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to add cloud filters to
>> + * @filters: Buffer which contains the filters to be added
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Set the cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>> + * of the function.
>> + *
>> + **/
>> +enum i40e_status_code
>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	enum i40e_status_code status;
>> +	u16 buff_len;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_add_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_add_cloud_filters_bb
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to add cloud filters to
>> + * @filters: Buffer which contains the filters in big buffer to be added
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>> + * function.
>> + *
>> + **/
>> +i40e_status
>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	i40e_status status;
>> +	u16 buff_len;
>> +	int i;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_add_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>> +
>> +	for (i = 0; i < filter_count; i++) {
>> +		u16 tnl_type;
>> +		u32 ti;
>> +
>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>> +
>> +		/* For Geneve, the VNI is placed at an offset shifted by one
>> +		 * byte relative to the Tenant ID offset used for the rest of
>> +		 * the tunnels.
>> +		 */
>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>> +		}
>> +	}
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_rem_cloud_filters
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to remove cloud filters from
>> + * @filters: Buffer which contains the filters to be removed
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Remove the cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>> + * of the function.
>> + *
>> + **/
>> +enum i40e_status_code
>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	enum i40e_status_code status;
>> +	u16 buff_len;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_remove_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> +	return status;
>> +}
>> +
>> +/**
>> + * i40e_aq_rem_cloud_filters_bb
>> + * @hw: pointer to the hardware structure
>> + * @seid: VSI seid to remove cloud filters from
>> + * @filters: Buffer which contains the filters in big buffer to be removed
>> + * @filter_count: number of filters contained in the buffer
>> + *
>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>> + * function.
>> + *
>> + **/
>> +i40e_status
>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count)
>> +{
>> +	struct i40e_aq_desc desc;
>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>> +	i40e_status status;
>> +	u16 buff_len;
>> +	int i;
>> +
>> +	i40e_fill_default_direct_cmd_desc(&desc,
>> +					  i40e_aqc_opc_remove_cloud_filters);
>> +
>> +	buff_len = filter_count * sizeof(*filters);
>> +	desc.datalen = cpu_to_le16(buff_len);
>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>> +	cmd->num_filters = filter_count;
>> +	cmd->seid = cpu_to_le16(seid);
>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>> +
>> +	for (i = 0; i < filter_count; i++) {
>> +		u16 tnl_type;
>> +		u32 ti;
>> +
>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>> +
>> +		/* For Geneve, the VNI is placed at an offset shifted by one
>> +		 * byte relative to the Tenant ID offset used for the rest of
>> +		 * the tunnels.
>> +		 */
>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>> +		}
>> +	}
>> +
>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>> +
>> 	return status;
>> }
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>> index afcf08a..96ee608 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>> +				     struct i40e_cloud_filter *filter,
>> +				     bool add);
>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>> +					     struct i40e_cloud_filter *filter,
>> +					     bool add);
>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>> +				 enum i40e_admin_queue_opc list_type);
>> +
>>
>> /* i40e_pci_tbl - PCI Device ID Table
>>  *
>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>  **/
>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>> {
>> +	enum i40e_admin_queue_err last_aq_status;
>> +	struct i40e_cloud_filter *cfilter;
>> 	struct i40e_channel *ch, *ch_tmp;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct hlist_node *node;
>> 	int ret, i;
>>
>> 	/* Reset rss size that was stored when reconfiguring rss for
>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>> 				 "Failed to reset tx rate for ch->seid %u\n",
>> 				 ch->seid);
>>
>> +		/* delete cloud filters associated with this channel */
>> +		hlist_for_each_entry_safe(cfilter, node,
>> +					  &pf->cloud_filter_list, cloud_node) {
>> +			if (cfilter->seid != ch->seid)
>> +				continue;
>> +
>> +			hash_del(&cfilter->cloud_node);
>> +			if (cfilter->dst_port)
>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>> +									cfilter,
>> +									false);
>> +			else
>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>> +								false);
>> +			last_aq_status = pf->hw.aq.asq_last_status;
>> +			if (ret)
>> +				dev_info(&pf->pdev->dev,
>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>> +					 i40e_stat_str(&pf->hw, ret),
>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>> +			kfree(cfilter);
>> +		}
>> +
>> 		/* delete VSI from FW */
>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>> 					     NULL);
>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>> }
>>
>> /**
>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>> + * @vsi: ptr to VSI which has PF backing
>> + * @l4type: true for TCP and false for UDP
>> + * @port_type: true if port is destination and false if port is source
>> + *
>> + * Sets up the switch mode correctly if it needs to be changed, and
>> + * validates the allowed modes.
>> + **/
>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>> +					     bool port_type)
>> +{
>> +	u8 mode;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct i40e_hw *hw = &pf->hw;
>> +	int ret;
>> +
>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>> +	if (ret)
>> +		return -EINVAL;
>> +
>> +	if (hw->dev_caps.switch_mode) {
>> +		/* if switch mode is set, support mode2 (non-tunneled for
>> +		 * cloud filter) for now
>> +		 */
>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>> +							I40E_SWITCH_MODE_MASK;
>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>> +				return 0;
>> +			dev_err(&pf->pdev->dev,
>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>> +				hw->dev_caps.switch_mode);
>> +			return -EINVAL;
>> +		}
>> +	}
>> +
>> +	/* port_type: true for destination port and false for source port
>> +	 * For now, supports only destination port type
>> +	 */
>> +	if (!port_type) {
>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Set Bit 7 to be valid */
>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>> +
>> +	/* Set L4type to both TCP and UDP support */
>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>> +
>> +	/* Set cloud filter mode */
>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>> +
>> +	/* Prep mode field for set_switch_config */
>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>> +					pf->last_sw_conf_valid_flags,
>> +					mode, NULL);
>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>> +		dev_err(&pf->pdev->dev,
>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>> +			i40e_stat_str(hw, ret),
>> +			i40e_aq_str(hw,
>> +				    hw->aq.asq_last_status));
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>>  * i40e_create_queue_channel - function to create channel
>>  * @vsi: VSI to be configured
>>  * @ch: ptr to channel (it contains channel specific params)
>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>> 	return ret;
>> }
>>
>> +/**
>> + * i40e_set_cld_element - sets cloud filter element data
>> + * @filter: cloud filter rule
>> + * @cld: ptr to cloud filter element data
>> + *
>> + * This is a helper function to copy data into the cloud filter element
>> + **/
>> +static inline void
>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>> +{
>> +	int i, j;
>> +	u32 ipa;
>> +
>> +	memset(cld, 0, sizeof(*cld));
>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>> +
>> +	if (filter->ip_version == IPV6_VERSION) {
>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>> +			ipa = cpu_to_le32(ipa);
>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>> +		}
>> +	} else {
>> +		ipa = be32_to_cpu(filter->dst_ip);
>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>> +	}
>> +
>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>> +	/* tenant_id is not supported by FW yet; once the support is enabled,
>> +	 * fill cld->tenant_id with cpu_to_le32(filter->tenant_id)
>> +	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>> +	 */
>> +	if (filter->tenant_id)
>> +		return;
>> +}
>> +
>> +/**
>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>> + * @vsi: pointer to VSI
>> + * @filter: cloud filter rule
>> + * @add: if true, add, if false, delete
>> + *
>> + * Add or delete a cloud filter for a specific flow spec.
>> + * Returns 0 if the filter was successfully added or deleted.
>> + **/
>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>> +				     struct i40e_cloud_filter *filter, bool add)
>> +{
>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int ret;
>> +	static const u16 flag_table[128] = {
>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>> +	};
>> +
>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>> +		return I40E_ERR_CONFIG;
>> +
>> +	/* copy element needed to add cloud filter from filter */
>> +	i40e_set_cld_element(filter, &cld_filter);
>> +
>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>> +
>> +	if (filter->ip_version == IPV6_VERSION)
>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>> +	else
>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>> +
>> +	if (add)
>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>> +						&cld_filter, 1);
>> +	else
>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>> +						&cld_filter, 1);
>> +	if (ret)
>> +		dev_dbg(&pf->pdev->dev,
>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>> +			add ? "add" : "delete", filter->dst_port, ret,
>> +			pf->hw.aq.asq_last_status);
>> +	else
>> +		dev_info(&pf->pdev->dev,
>> +			 "%s cloud filter for VSI: %d\n",
>> +			 add ? "Added" : "Deleted", filter->seid);
>> +	return ret;
>> +}
>> +
>> +/**
>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>> + * @vsi: pointer to VSI
>> + * @filter: cloud filter rule
>> + * @add: if true, add, if false, delete
>> + *
>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>> + * Returns 0 if the filter was successfully added or deleted.
>> + **/
>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>> +					     struct i40e_cloud_filter *filter,
>> +					     bool add)
>> +{
>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int ret;
>> +
>> +	/* Both (Outer/Inner) valid mac_addr are not supported */
>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>> +	    is_valid_ether_addr(filter->src_mac))
>> +		return -EINVAL;
>> +
>> +	/* Make sure port is specified, otherwise bail out, for channel
>> +	 * specific cloud filter needs 'L4 port' to be non-zero
>> +	 */
>> +	if (!filter->dst_port)
>> +		return -EINVAL;
>> +
>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>> +	if (filter->src_port || filter->src_ip ||
>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>> +		return -EINVAL;
>> +
>> +	/* copy element needed to add cloud filter from filter */
>> +	i40e_set_cld_element(filter, &cld_filter.element);
>> +
>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>> +	    is_valid_ether_addr(filter->src_mac) ||
>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>> +	    is_multicast_ether_addr(filter->src_mac)) {
>> +		/* MAC + IP : unsupported mode */
>> +		if (filter->dst_ip)
>> +			return -EINVAL;
>> +
>> +		/* since we validated that L4 port must be valid before
>> +		 * we get here, start with respective "flags" value
>> +		 * and update if vlan is present or not
>> +		 */
>> +		cld_filter.element.flags =
>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>> +
>> +		if (filter->vlan_id) {
>> +			cld_filter.element.flags =
>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>> +		}
>> +
>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>> +		cld_filter.element.flags =
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>> +		if (filter->ip_version == IPV6_VERSION)
>> +			cld_filter.element.flags |=
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>> +		else
>> +			cld_filter.element.flags |=
>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>> +	} else {
>> +		dev_err(&pf->pdev->dev,
>> +			"either mac or ip has to be valid for cloud filter\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>> +						be16_to_cpu(filter->dst_port);
>> +
>> +	if (add) {
>> +		bool proto_type, port_type;
>> +
>> +		proto_type = (filter->ip_proto == IPPROTO_TCP) ? true : false;
>> +		port_type = (filter->port_type & I40E_CLOUD_FILTER_PORT_DEST) ?
>> +			     true : false;
>> +
>> +		/* For now, src port based cloud filter for channel is not
>> +		 * supported
>> +		 */
>> +		if (!port_type) {
>> +			dev_err(&pf->pdev->dev,
>> +				"unsupported port type (src port)\n");
>> +			return -EOPNOTSUPP;
>> +		}
>> +
>> +		/* Validate current device switch mode, change if necessary */
>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>> +							port_type);
>> +		if (ret) {
>> +			dev_err(&pf->pdev->dev,
>> +				"failed to set switch mode, ret %d\n",
>> +				ret);
>> +			return ret;
>> +		}
>> +
>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>> +						   &cld_filter, 1);
>> +	} else {
>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>> +						   &cld_filter, 1);
>> +	}
>> +
>> +	if (ret)
>> +		dev_dbg(&pf->pdev->dev,
>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>> +	else
>> +		dev_info(&pf->pdev->dev,
>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>> +			 add ? "add" : "delete", filter->seid,
>> +			 ntohs(filter->dst_port));
>> +	return ret;
>> +}
>> +
>> +/**
>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>> + * @vsi: Pointer to VSI
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>> +				 struct tc_cls_flower_offload *f,
>> +				 struct i40e_cloud_filter *filter)
>> +{
>> +	struct i40e_pf *pf = vsi->back;
>> +	u16 addr_type = 0;
>> +	u8 field_flags = 0;
>> +
>> +	if (f->dissector->used_keys &
>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>> +			f->dissector->used_keys);
>> +		return -EOPNOTSUPP;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>> +		struct flow_dissector_key_keyid *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>> +						  f->key);
>> +
>> +		struct flow_dissector_key_keyid *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>> +						  f->mask);
>> +
>> +		if (mask->keyid != 0)
>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>> +
>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>> +		struct flow_dissector_key_basic *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_BASIC,
>> +						  f->key);
>> +
>> +		filter->ip_proto = key->ip_proto;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>> +		struct flow_dissector_key_eth_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>> +						  f->key);
>> +
>> +		struct flow_dissector_key_eth_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>> +						  f->mask);
>> +
>> +		/* use is_broadcast and is_zero to check for all 0xf or 0 */
>> +		if (!is_zero_ether_addr(mask->dst)) {
>> +			if (is_broadcast_ether_addr(mask->dst)) {
>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>> +					mask->dst);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (!is_zero_ether_addr(mask->src)) {
>> +			if (is_broadcast_ether_addr(mask->src)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>> +					mask->src);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +		ether_addr_copy(filter->dst_mac, key->dst);
>> +		ether_addr_copy(filter->src_mac, key->src);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>> +		struct flow_dissector_key_vlan *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_VLAN,
>> +						  f->key);
>> +		struct flow_dissector_key_vlan *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_VLAN,
>> +						  f->mask);
>> +
>> +		if (mask->vlan_id) {
>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>> +
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>> +					mask->vlan_id);
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>> +		struct flow_dissector_key_control *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>> +						  f->key);
>> +
>> +		addr_type = key->addr_type;
>> +	}
>> +
>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>> +		struct flow_dissector_key_ipv4_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>> +						  f->key);
>> +		struct flow_dissector_key_ipv4_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>> +						  f->mask);
>> +
>> +		if (mask->dst) {
>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>> +					be32_to_cpu(mask->dst));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (mask->src) {
>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>> +					be32_to_cpu(mask->src));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>> +			return I40E_ERR_CONFIG;
>> +		}
>> +		filter->dst_ip = key->dst;
>> +		filter->src_ip = key->src;
>> +		filter->ip_version = IPV4_VERSION;
>> +	}
>> +
>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>> +		struct flow_dissector_key_ipv6_addrs *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>> +						  f->key);
>> +		struct flow_dissector_key_ipv6_addrs *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>> +						  f->mask);
>> +
>> +		/* The src and dest IPv6 addresses must not be the
>> +		 * LOOPBACK address (0:0:0:0:0:0:0:1), i.e. ::1
>> +		 */
>> +		if (ipv6_addr_loopback(&key->dst) ||
>> +		    ipv6_addr_loopback(&key->src)) {
>> +			dev_err(&pf->pdev->dev,
>> +				"Bad ipv6, addr is LOOPBACK\n");
>> +			return I40E_ERR_CONFIG;
>> +		}
>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>> +
>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>> +		       sizeof(filter->src_ipv6));
>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>> +		       sizeof(filter->dst_ipv6));
>> +
>> +		/* mark it as IPv6 filter, to be used later */
>> +		filter->ip_version = IPV6_VERSION;
>> +	}
>> +
>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>> +		struct flow_dissector_key_ports *key =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_PORTS,
>> +						  f->key);
>> +		struct flow_dissector_key_ports *mask =
>> +			skb_flow_dissector_target(f->dissector,
>> +						  FLOW_DISSECTOR_KEY_PORTS,
>> +						  f->mask);
>> +
>> +		if (mask->src) {
>> +			if (mask->src == cpu_to_be16(0xffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>> +					be16_to_cpu(mask->src));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		if (mask->dst) {
>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>> +			} else {
>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>> +					be16_to_cpu(mask->dst));
>> +				return I40E_ERR_CONFIG;
>> +			}
>> +		}
>> +
>> +		filter->dst_port = key->dst;
>> +		filter->src_port = key->src;
>> +
>> +		/* For now, only the destination port is supported */
>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>> +
>> +		switch (filter->ip_proto) {
>> +		case IPPROTO_TCP:
>> +		case IPPROTO_UDP:
>> +			break;
>> +		default:
>> +			dev_err(&pf->pdev->dev,
>> +				"Only UDP and TCP transports are supported\n");
>> +			return -EINVAL;
>> +		}
>> +	}
>> +	filter->flags = field_flags;
>> +	return 0;
>> +}
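
A quick aside for readers outside the driver: the mask handling above enforces that each flower match field is either fully masked (exact match) or fully unmasked (ignored); partial masks are rejected because the hardware cannot express them. Below is a minimal C sketch of that rule, using local stand-ins (mac_is_zero()/mac_is_broadcast() and made-up flag values) rather than the real kernel helpers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FIELD_OMAC 0x01	/* made-up flag values for illustration */
#define FIELD_IMAC 0x02

/* Local stand-ins for the kernel's ether-address helpers */
static bool mac_is_zero(const uint8_t *a)
{
	static const uint8_t zero[6];
	return memcmp(a, zero, 6) == 0;
}

static bool mac_is_broadcast(const uint8_t *a)
{
	static const uint8_t bcast[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
	return memcmp(a, bcast, 6) == 0;
}

/*
 * Returns 0 on success (possibly OR-ing a field flag into *flags),
 * or -1 for a partial mask, which the hardware cannot express.
 */
static int check_mac_mask(const uint8_t *mask, uint8_t field_bit,
			  uint8_t *flags)
{
	if (mac_is_zero(mask))
		return 0;		/* field not part of the match */
	if (mac_is_broadcast(mask)) {
		*flags |= field_bit;	/* exact match on this field */
		return 0;
	}
	return -1;			/* partial masks unsupported */
}
```
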
>> +
>> +/**
>> + * i40e_handle_redirect_action - Forward to a traffic class on the device
>> + * @vsi: Pointer to VSI
>> + * @ifindex: ifindex of the device to forward to
>> + * @tc: traffic class index on the device
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>> +				       struct i40e_cloud_filter *filter)
>> +{
>> +	struct i40e_channel *ch, *ch_tmp;
>> +
>> +	/* redirect to a traffic class on the same device */
>> +	if (vsi->netdev->ifindex == ifindex) {
>> +		if (tc == 0) {
>> +			filter->seid = vsi->seid;
>> +			return 0;
>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>> +			if (!filter->dst_port) {
>> +				dev_err(&vsi->back->pdev->dev,
>> +					"Specify a destination port to redirect to a non-default traffic class\n");
>> +				return -EINVAL;
>> +			}
>> +			if (list_empty(&vsi->ch_list))
>> +				return -EINVAL;
>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>> +						 list) {
>> +				if (ch->seid == vsi->tc_seid_map[tc])
>> +					filter->seid = ch->seid;
>> +			}
>> +			return 0;
>> +		}
>> +	}
>> +	return -EINVAL;
>> +}
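
The redirect handling above resolves a traffic class index to a VSI seid: TC 0 targets the main VSI directly, while any other TC must be enabled and map to a channel VSI through tc_seid_map. A simplified, self-contained sketch of that lookup follows; the structure names and fields here are reduced stand-ins, not the driver's real types, and the dst_port precondition is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_TC 8

/* Reduced, hypothetical stand-ins for the driver's structures */
struct channel {
	uint16_t seid;
	struct channel *next;
};

struct vsi {
	uint16_t seid;			/* seid of the main VSI */
	uint8_t enabled_tc;		/* bitmap of enabled TCs */
	uint16_t tc_seid_map[MAX_TC];	/* TC index to channel seid */
	struct channel *ch_list;
};

/*
 * TC 0 targets the main VSI; any other TC must be enabled and
 * resolve to a channel VSI through tc_seid_map.
 */
static int resolve_tc_seid(const struct vsi *vsi, uint8_t tc, uint16_t *seid)
{
	const struct channel *ch;

	if (tc == 0) {
		*seid = vsi->seid;
		return 0;
	}
	if (tc >= MAX_TC || !(vsi->enabled_tc & (1u << tc)))
		return -1;
	for (ch = vsi->ch_list; ch; ch = ch->next) {
		if (ch->seid == vsi->tc_seid_map[tc]) {
			*seid = ch->seid;
			return 0;
		}
	}
	return -1;
}
```
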
>> +
>> +/**
>> + * i40e_parse_tc_actions - Parse tc actions
>> + * @vsi: Pointer to VSI
>> + * @exts: Pointer to the tc filter's actions (struct tcf_exts)
>> + * @filter: Pointer to cloud filter structure
>> + *
>> + **/
>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>> +				 struct i40e_cloud_filter *filter)
>> +{
>> +	const struct tc_action *a;
>> +	LIST_HEAD(actions);
>> +	int err;
>> +
>> +	if (!tcf_exts_has_actions(exts))
>> +		return -EINVAL;
>> +
>> +	tcf_exts_to_list(exts, &actions);
>> +	list_for_each_entry(a, &actions, list) {
>> +		/* Drop action */
>> +		if (is_tcf_gact_shot(a)) {
>> +			dev_err(&vsi->back->pdev->dev,
>> +				"Cloud filters do not support the drop action.\n");
>> +			return -EOPNOTSUPP;
>> +		}
>> +
>> +		/* Redirect to a traffic class on the same device */
>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>> +			int ifindex = tcf_mirred_ifindex(a);
>> +			u8 tc = tcf_mirred_tc(a);
>> +
>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>> +							  filter);
>> +			if (err == 0)
>> +				return err;
>> +		}
>> +	}
>> +	return -EINVAL;
>> +}
>> +
>> +/**
>> + * i40e_configure_clsflower - Configure tc flower filters
>> + * @vsi: Pointer to VSI
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + *
>> + **/
>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>> +				    struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int err = 0;
>> +
>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>> +		return -EBUSY;
>> +
>> +	if (pf->fdir_pf_active_filters ||
>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>> +		dev_err(&vsi->back->pdev->dev,
>> +			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>> +		dev_err(&vsi->back->pdev->dev,
>> +			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +	}
>> +
>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>> +	if (!filter)
>> +		return -ENOMEM;
>> +
>> +	filter->cookie = cls_flower->cookie;
>> +
>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>> +	if (err < 0)
>> +		goto err;
>> +
>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>> +	if (err < 0)
>> +		goto err;
>> +
>> +	/* Add cloud filter */
>> +	if (filter->dst_port)
>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>> +	else
>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>> +
>> +	if (err) {
>> +		dev_err(&pf->pdev->dev,
>> +			"Failed to add cloud filter, err %s\n",
>> +			i40e_stat_str(&pf->hw, err));
>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>> +		goto err;
>> +	}
>> +
>> +	/* add filter to the ordered list */
>> +	INIT_HLIST_NODE(&filter->cloud_node);
>> +
>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>> +
>> +	pf->num_cloud_filters++;
>> +
>> +	return err;
>> +err:
>> +	kfree(filter);
>> +	return err;
>> +}
>> +
>> +/**
>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>> + * @vsi: Pointer to VSI
>> + * @cookie: filter specific cookie
>> + *
>> + **/
>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>> +							unsigned long *cookie)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct hlist_node *node2;
>> +
>> +	hlist_for_each_entry_safe(filter, node2,
>> +				  &vsi->back->cloud_filter_list, cloud_node)
>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>> +			return filter;
>> +	return NULL;
>> +}
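
The lookup above finds the software filter by the cookie the stack handed to the driver at REPLACE time; it is a plain linear search over the driver's list. A reduced sketch with hypothetical types, not the driver's real ones:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical, reduced version of the driver's filter list: each
 * tc-flower filter is identified by the cookie supplied by the stack.
 */
struct cloud_filter {
	unsigned long cookie;
	struct cloud_filter *next;
};

/* Linear search; returns NULL when no filter carries the cookie */
static struct cloud_filter *find_by_cookie(struct cloud_filter *head,
					   unsigned long cookie)
{
	for (; head; head = head->next)
		if (head->cookie == cookie)
			return head;
	return NULL;
}
```
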
>> +
>> +/**
>> + * i40e_delete_clsflower - Remove tc flower filters
>> + * @vsi: Pointer to VSI
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + *
>> + **/
>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>> +				 struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_cloud_filter *filter = NULL;
>> +	struct i40e_pf *pf = vsi->back;
>> +	int err = 0;
>> +
>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>> +
>> +	if (!filter)
>> +		return -EINVAL;
>> +
>> +	hash_del(&filter->cloud_node);
>> +
>> +	if (filter->dst_port)
>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>> +	else
>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>> +	if (err) {
>> +		kfree(filter);
>> +		dev_err(&pf->pdev->dev,
>> +			"Failed to delete cloud filter, err %s\n",
>> +			i40e_stat_str(&pf->hw, err));
>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>> +	}
>> +
>> +	kfree(filter);
>> +	pf->num_cloud_filters--;
>> +
>> +	if (!pf->num_cloud_filters)
>> +		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>> +		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>> +			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +		}
>> +	return 0;
>> +}
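
The flag handling at the end of the delete path, which re-enables sideband Flow Director only once the last cloud filter is gone, and only if it was displaced by cloud filters rather than independently inactive, can be captured as a small pure function. The flag names below echo, but are not, the driver's I40E_FLAG_* bits:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag bits echoing the driver's PF flags */
#define FLAG_FD_SB_ENABLED         (1u << 0)
#define FLAG_FD_SB_TO_CLOUD_FILTER (1u << 1)
#define FLAG_FD_SB_INACTIVE        (1u << 2)

/*
 * When the last cloud filter is removed, sideband Flow Director is
 * re-enabled only if it was displaced by cloud filters and was not
 * independently inactive (e.g. for lack of MSI-X vectors).
 */
static uint32_t restore_fd_sb(uint32_t flags, unsigned int num_cloud_filters)
{
	if (num_cloud_filters)
		return flags;
	if ((flags & FLAG_FD_SB_TO_CLOUD_FILTER) &&
	    !(flags & FLAG_FD_SB_INACTIVE)) {
		flags |= FLAG_FD_SB_ENABLED;
		flags &= ~FLAG_FD_SB_TO_CLOUD_FILTER;
		flags &= ~FLAG_FD_SB_INACTIVE;
	}
	return flags;
}
```
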
>> +
>> +/**
>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>> + * @netdev: net device to configure
>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>> + **/
>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>> +				    struct tc_cls_flower_offload *cls_flower)
>> +{
>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>> +	struct i40e_vsi *vsi = np->vsi;
>> +
>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>> +	    cls_flower->common.chain_index)
>> +		return -EOPNOTSUPP;
>> +
>> +	switch (cls_flower->command) {
>> +	case TC_CLSFLOWER_REPLACE:
>> +		return i40e_configure_clsflower(vsi, cls_flower);
>> +	case TC_CLSFLOWER_DESTROY:
>> +		return i40e_delete_clsflower(vsi, cls_flower);
>> +	case TC_CLSFLOWER_STATS:
>> +		return -EOPNOTSUPP;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +}
>> +
>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>> 			   void *type_data)
>> {
>> -	if (type != TC_SETUP_MQPRIO)
>> +	switch (type) {
>> +	case TC_SETUP_MQPRIO:
>> +		return i40e_setup_tc(netdev, type_data);
>> +	case TC_SETUP_CLSFLOWER:
>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>> +	default:
>> 		return -EOPNOTSUPP;
>> -
>> -	return i40e_setup_tc(netdev, type_data);
>> +	}
>> }
>>
>> /**
>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>> 		kfree(cfilter);
>> 	}
>> 	pf->num_cloud_filters = 0;
>> +
>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +	}
>> }
>>
>> /**
>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>  * i40e_get_capabilities - get info about the HW
>>  * @pf: the PF struct
>>  **/
>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>> +				 enum i40e_admin_queue_opc list_type)
>> {
>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>> 	u16 data_size;
>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>
>> 		/* this loads the data into the hw struct for us */
>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>> -					    &data_size,
>> -					    i40e_aqc_opc_list_func_capabilities,
>> -					    NULL);
>> +						    &data_size, list_type,
>> +						    NULL);
>> 		/* data loaded, buffer no longer needed */
>> 		kfree(cap_buf);
>>
>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>> 		}
>> 	} while (err);
>>
>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>> -		dev_info(&pf->pdev->dev,
>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>> -			 pf->hw.func_caps.num_msix_vectors,
>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>> -			 pf->hw.func_caps.fd_filters_best_effort,
>> -			 pf->hw.func_caps.num_tx_qp,
>> -			 pf->hw.func_caps.num_vsis);
>> -
>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>> +				 pf->hw.func_caps.num_msix_vectors,
>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>> +				 pf->hw.func_caps.fd_filters_best_effort,
>> +				 pf->hw.func_caps.num_tx_qp,
>> +				 pf->hw.func_caps.num_vsis);
>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>> +				 pf->hw.dev_caps.switch_mode,
>> +				 pf->hw.dev_caps.valid_functions);
>> +			dev_info(&pf->pdev->dev,
>> +				 "SR-IOV=%d, num_vfs for all function=%u\n",
>> +				 pf->hw.dev_caps.sr_iov_1_1,
>> +				 pf->hw.dev_caps.num_vfs);
>> +			dev_info(&pf->pdev->dev,
>> +				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>> +				 pf->hw.dev_caps.num_vsis,
>> +				 pf->hw.dev_caps.num_rx_qp,
>> +				 pf->hw.dev_caps.num_tx_qp);
>> +		}
>> +	}
>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>> 		       + pf->hw.func_caps.num_vfs)
>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>> -		dev_info(&pf->pdev->dev,
>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>> +		if (pf->hw.revision_id == 0 &&
>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>> +			dev_info(&pf->pdev->dev,
>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>> +		}
>> 	}
>> -
>> 	return 0;
>> }
>>
>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>> 		if (!vsi) {
>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 			return;
>> 		}
>> 	}
>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>> }
>>
>> /**
>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>> + * @vsi: PF main vsi
>> + * @seid: seid of main or channel VSIs
>> + *
>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>> + * existed before reset
>> + **/
>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>> +{
>> +	struct i40e_cloud_filter *cfilter;
>> +	struct i40e_pf *pf = vsi->back;
>> +	struct hlist_node *node;
>> +	i40e_status ret;
>> +
>> +	/* Add cloud filters back if they exist */
>> +	if (hlist_empty(&pf->cloud_filter_list))
>> +		return 0;
>> +
>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>> +				  cloud_node) {
>> +		if (cfilter->seid != seid)
>> +			continue;
>> +
>> +		if (cfilter->dst_port)
>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>> +								true);
>> +		else
>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>> +
>> +		if (ret) {
>> +			dev_dbg(&pf->pdev->dev,
>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>> +				i40e_stat_str(&pf->hw, ret),
>> +				i40e_aq_str(&pf->hw,
>> +					    pf->hw.aq.asq_last_status));
>> +			return ret;
>> +		}
>> +	}
>> +	return 0;
>> +}
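
The rebuild-after-reset pattern above (software keeps the authoritative filter list, and after a reset every filter whose seid matches the VSI being rebuilt is re-programmed into hardware) reduces to a simple replay loop. A self-contained sketch with made-up types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, reduced filter record */
struct filter {
	uint16_t seid;		/* VSI the filter belongs to */
	int in_hw;		/* set once re-programmed */
	struct filter *next;
};

/* Stand-in for the add_del_cloud_filter(..., true) admin-queue call */
static int reprogram(struct filter *f)
{
	f->in_hw = 1;
	return 0;
}

/* Replay every filter that belongs to the VSI being rebuilt */
static int rebuild_filters(struct filter *head, uint16_t seid)
{
	for (; head; head = head->next) {
		if (head->seid != seid)
			continue;
		if (reprogram(head))
			return -1;
	}
	return 0;
}
```
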
>> +
>> +/**
>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>  * @vsi: PF main vsi
>>  *
>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>> 						I40E_BW_CREDIT_DIVISOR,
>> 				ch->seid);
>> 		}
>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>> +		if (ret) {
>> +			dev_dbg(&vsi->back->pdev->dev,
>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>> +				ch->seid);
>> +			return ret;
>> +		}
>> 	}
>> 	return 0;
>> }
>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>> 		i40e_verify_eeprom(pf);
>>
>> 	i40e_clear_pxe_mode(hw);
>> -	ret = i40e_get_capabilities(pf);
>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>> 	if (ret)
>> 		goto end_core_reset;
>>
>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>> 			goto end_unlock;
>> 	}
>>
>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>> +	if (ret)
>> +		goto end_unlock;
>> +
>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>> 	 * for this main VSI if they exist
>> 	 */
>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>> 	    (pf->num_fdsb_msix == 0)) {
>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	}
>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>> 	    (pf->num_vmdq_msix == 0)) {
>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>> 				       I40E_FLAG_FD_SB_ENABLED	|
>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>> 				       I40E_FLAG_VMDQ_ENABLED);
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>
>> 			/* rework the queue expectations without MSIX */
>> 			i40e_determine_queue_usage(pf);
>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>> 		/* Enable filters and mark for reset */
>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>> 			need_reset = true;
>> -		/* enable FD_SB only if there is MSI-X vector */
>> -		if (pf->num_fdsb_msix > 0)
>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>> +		 * filters exist
>> +		 */
>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>> +		}
>> 	} else {
>> 		/* turn off filters, mark for reset and clear SW filter list */
>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>> 		}
>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> +
>> 		/* reset fd counters */
>> 		pf->fd_add_err = 0;
>> 		pf->fd_atr_cnt = 0;
>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>> 	hw_features = hw_enc_features		|
>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>> +		      NETIF_F_HW_TC;
>>
>> 	netdev->hw_features |= hw_features;
>>
>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>> 	*/
>>
>> 	if ((pf->hw.pf_id == 0) &&
>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>> +		pf->last_sw_conf_flags = flags;
>> +	}
>>
>> 	if (pf->hw.pf_id == 0) {
>> 		u16 valid_flags;
>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>> 					     pf->hw.aq.asq_last_status));
>> 			/* not a fatal problem, just keep going */
>> 		}
>> +		pf->last_sw_conf_valid_flags = valid_flags;
>> 	}
>>
>> 	/* first time setup */
>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			       I40E_FLAG_DCB_ENABLED	|
>> 			       I40E_FLAG_SRIOV_ENABLED	|
>> 			       I40E_FLAG_VMDQ_ENABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>> 				  I40E_FLAG_FD_SB_ENABLED |
>> 				  I40E_FLAG_FD_ATR_ENABLED |
>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>> 			       I40E_FLAG_DCB_ENABLED	|
>> 			       I40E_FLAG_VMDQ_ENABLED);
>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 	} else {
>> 		/* Not enough queues for all TCs */
>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>> 			queues_left -= 1; /* save 1 queue for FD */
>> 		} else {
>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>> 		}
>> 	}
>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>
>> 	i40e_clear_pxe_mode(hw);
>> -	err = i40e_get_capabilities(pf);
>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>> 	if (err)
>> 		goto err_adminq_setup;
>>
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> index 92869f5..3bb6659 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>> 		struct i40e_asq_cmd_details *cmd_details);
>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>> 				   struct i40e_asq_cmd_details *cmd_details);
>> +i40e_status
>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count);
>> +enum i40e_status_code
>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count);
>> +enum i40e_status_code
>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>> +			  u8 filter_count);
>> +i40e_status
>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>> +			     u8 filter_count);
>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>> 			       struct i40e_lldp_variables *lldp_cfg);
>> /* i40e_common */
>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>> index c019f46..af38881 100644
>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>> +#define I40E_SWITCH_MODE_MASK		0xF
>>
>> 	u32  management_mode;
>> 	u32  mng_protocols_over_mctp;
>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> index b8c78bf..4fe27f0 100644
>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>> 		struct {
>> 			u8 data[16];
>> 		} v6;
>> +		struct {
>> +			__le16 data[8];
>> +		} raw_v6;
>> 	} ipaddr;
>> 	__le16	flags;
>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>


* Re: [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
  2017-09-14  8:00       ` [Intel-wired-lan] " Nambiar, Amritha
@ 2017-09-28 19:22         ` Nambiar, Amritha
  -1 siblings, 0 replies; 30+ messages in thread
From: Nambiar, Amritha @ 2017-09-28 19:22 UTC (permalink / raw)
  To: Jiri Pirko
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev,
	mlxsw, alexander.duyck, Jamal Hadi Salim, Cong Wang

On 9/14/2017 1:00 AM, Nambiar, Amritha wrote:
> On 9/13/2017 6:26 AM, Jiri Pirko wrote:
>> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar@intel.com wrote:
>>> This patch enables tc-flower based hardware offloads. tc flower
>>> filter provided by the kernel is configured as driver specific
>>> cloud filter. The patch implements functions and admin queue
>>> commands needed to support cloud filters in the driver and
>>> adds cloud filters to configure these tc-flower filters.
>>>
>>> The only action supported is to redirect packets to a traffic class
>>> on the same device.
>>
>> So basically you are not doing redirect, you are just setting tclass for
>> matched packets, right? Why do you use mirred for this? I think that
>> you might consider extending g_act for that:
>>
>> # tc filter add dev eth0 protocol ip ingress \
>>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>>   action tclass 0
>>
> Yes, this doesn't work like a typical egress redirect, but is aimed at
> forwarding the matched packets to a different queue-group/traffic class
> on the same device, so some sort-of ingress redirect in the hardware. I
> possibly may not need the mirred-redirect as you say, I'll look into the
> g_act way of doing this with a new gact tc action.
> 

I was looking at introducing a new gact tclass action to TC. In the HW
offload path, this sets a traffic class value for certain matched
packets so they will be processed in a queue belonging to the traffic class.

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw\
  action tclass 2

But, I'm having trouble defining what this action means in the kernel
datapath. For ingress, this action could just take the default path and
do nothing and only have meaning in the HW offloaded path. For egress,
certain qdiscs like 'multiq' and 'prio' could use this 'tclass' value
for band selection, while the 'mqprio' qdisc selects the traffic class
based on the skb priority in netdev_pick_tx(), so what would this action
mean for the 'mqprio' qdisc?
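
(For readers following along: the mqprio selection in question boils down to a priority-to-TC table indexed by the masked skb priority, analogous to the kernel's netdev_get_prio_tc_map(). The sketch below is illustrative only; the structure and table values are made up:)

```c
#include <assert.h>
#include <stdint.h>

#define TC_BITMASK 15

/*
 * Illustrative priority-to-traffic-class map, in the spirit of the
 * kernel's netdev_get_prio_tc_map(): the skb priority is masked and
 * used to index a small table of traffic class indices.
 */
struct prio_tc_map {
	uint8_t num_tc;
	uint8_t map[TC_BITMASK + 1];
};

static uint8_t prio_to_tc(const struct prio_tc_map *m, uint32_t skb_priority)
{
	uint8_t tc = m->map[skb_priority & TC_BITMASK];

	/* out-of-range entries fall back to the default TC 0 */
	return tc < m->num_tc ? tc : 0;
}
```
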

It looks like the 'prio' qdisc uses band selection based on the
'classid', so I was thinking of using the 'classid' through the cls
flower filter and offload it to HW for the traffic class index, this way
we would have the same behavior in HW offload and SW fallback and there
would be no need for a separate tc action.

In HW:
# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw classid 1:2

filter pref 2 flower chain 0
filter pref 2 flower chain 0 handle 0x1 classid 1:2
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.3.5
  dst_port 25
  skip_sw
  in_hw

This will be used to route packets to traffic class 2.

In SW:
# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_hw classid 1:2

filter pref 2 flower chain 0
filter pref 2 flower chain 0 handle 0x1 classid 1:2
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.3.5
  dst_port 25
  skip_hw
  not_in_hw

>>
>>>
>>> # tc qdisc add dev eth0 ingress
>>> # ethtool -K eth0 hw-tc-offload on
>>>
>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 0
>>>
>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>  prio 2 flower dst_ip 192.168.3.5/32\
>>>  ip_proto udp dst_port 25 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 1
>>>
>>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>>  prio 3 flower dst_ip fe8::200:1\
>>>  ip_proto udp dst_port 66 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 1
>>>
>>> Delete tc flower filter:
>>> Example:
>>>
>>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>>> # tc filter del dev eth0 parent ffff:
>>>
>>> Flow Director Sideband is disabled while configuring cloud filters
>>> via tc-flower and for as long as any cloud filter exists.
>>>
>>> Unsupported matches when cloud filters are added using enhanced
>>> big buffer cloud filter mode of underlying switch include:
>>> 1. source port and source IP
>>> 2. Combined MAC address and IP fields.
>>> 3. Not specifying L4 port
>>>
>>> These filter matches can however be used to redirect traffic to
>>> the main VSI (tc 0) which does not require the enhanced big buffer
>>> cloud filter support.
>>>
>>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>>> __be32 array instead of u8 array. Used macro for IP version. Minor
>>> formatting changes.
>>> v2:
>>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>>> 2. Moved dev_info for add/deleting cloud filters in else condition
>>> 3. Fixed some format specifier in dev_err logs
>>> 4. Refactored i40e_get_capabilities to take an additional
>>>   list_type parameter and use it to query device and function
>>>   level capabilities.
>>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>>   to verify if redirect to a traffic class is supported.
>>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>>   function definitions.
>>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>>   updates, separate function to process tc cls flower offloads.
>>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>>   exclusive.
>>>
>>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>> ---
>>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>>> index 6018fb6..b110519 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>>> @@ -55,6 +55,8 @@
>>> #include <linux/net_tstamp.h>
>>> #include <linux/ptp_clock_kernel.h>
>>> #include <net/pkt_cls.h>
>>> +#include <net/tc_act/tc_gact.h>
>>> +#include <net/tc_act/tc_mirred.h>
>>> #include "i40e_type.h"
>>> #include "i40e_prototype.h"
>>> #include "i40e_client.h"
>>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>>> 	u32 fd_id;
>>> };
>>>
>>> +#define IPV4_VERSION 4
>>> +#define IPV6_VERSION 6
>>> +
>>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>>> +#define I40E_CLOUD_FIELD_IIP	0x10
>>> +
>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>>> +						 I40E_CLOUD_FIELD_IVLAN)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>>> +						 I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>>> +						  I40E_CLOUD_FIELD_IMAC | \
>>> +						  I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>>> +						   I40E_CLOUD_FIELD_IVLAN | \
>>> +						   I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>>> +
>>> struct i40e_cloud_filter {
>>> 	struct hlist_node cloud_node;
>>> 	unsigned long cookie;
>>> +	/* cloud filter input set follows */
>>> +	u8 dst_mac[ETH_ALEN];
>>> +	u8 src_mac[ETH_ALEN];
>>> +	__be16 vlan_id;
>>> +	__be32 dst_ip;
>>> +	__be32 src_ip;
>>> +	__be32 dst_ipv6[4];
>>> +	__be32 src_ipv6[4];
>>> +	__be16 dst_port;
>>> +	__be16 src_port;
>>> +	u32 ip_version;
>>> +	u8 ip_proto;	/* IPPROTO value */
>>> +	/* L4 port type: src or destination port */
>>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>>> +	u8 port_type;
>>> +	u32 tenant_id;
>>> +	u8 flags;
>>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>>> +	u8 tunnel_type;
>>> 	u16 seid;	/* filter control */
>>> };
>>>
>>> @@ -491,6 +536,8 @@ struct i40e_pf {
>>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>>
>>> 	struct i40e_client_instance *cinst;
>>> 	bool stat_offsets_loaded;
>>> @@ -573,6 +620,8 @@ struct i40e_pf {
>>> 	u16 phy_led_val;
>>>
>>> 	u16 override_q_count;
>>> +	u16 last_sw_conf_flags;
>>> +	u16 last_sw_conf_valid_flags;
>>> };
>>>
>>> /**
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> index 2e567c2..feb3d42 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>> 		struct {
>>> 			u8 data[16];
>>> 		} v6;
>>> +		struct {
>>> +			__le16 data[8];
>>> +		} raw_v6;
>>> 	} ipaddr;
>>> 	__le16	flags;
>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> index 9567702..d9c9665 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>>
>>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>>> 				   track_id, &offset, &info, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_add_cloud_filters
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to add cloud filters to
>>> + * @filters: Buffer which contains the filters to be added
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Set the cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>> + * of the function.
>>> + *
>>> + **/
>>> +enum i40e_status_code
>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	enum i40e_status_code status;
>>> +	u16 buff_len;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_add_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_add_cloud_filters_bb
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to add cloud filters to
>>> + * @filters: Buffer which contains the filters in big buffer to be added
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>> + * function.
>>> + *
>>> + **/
>>> +i40e_status
>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	i40e_status status;
>>> +	u16 buff_len;
>>> +	int i;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_add_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>> +
>>> +	for (i = 0; i < filter_count; i++) {
>>> +		u16 tnl_type;
>>> +		u32 ti;
>>> +
>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>> +
>>> +		/* For Geneve, the VNI is placed at an offset shifted by
>>> +		 * one byte relative to the Tenant ID offset used for the
>>> +		 * other tunnel types.
>>> +		 */
>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>> +		}
>>> +	}
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_rem_cloud_filters
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to remove cloud filters from
>>> + * @filters: Buffer which contains the filters to be removed
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Remove the cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>> + * of the function.
>>> + *
>>> + **/
>>> +enum i40e_status_code
>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	enum i40e_status_code status;
>>> +	u16 buff_len;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_rem_cloud_filters_bb
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to remove cloud filters from
>>> + * @filters: Buffer which contains the filters in big buffer to be removed
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>> + * function.
>>> + *
>>> + **/
>>> +i40e_status
>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	i40e_status status;
>>> +	u16 buff_len;
>>> +	int i;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>> +
>>> +	for (i = 0; i < filter_count; i++) {
>>> +		u16 tnl_type;
>>> +		u32 ti;
>>> +
>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>> +
>>> +		/* For Geneve, the VNI is placed at an offset shifted by
>>> +		 * one byte relative to the Tenant ID offset used for the
>>> +		 * other tunnel types.
>>> +		 */
>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>> +		}
>>> +	}
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> 	return status;
>>> }
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> index afcf08a..96ee608 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>> +				     struct i40e_cloud_filter *filter,
>>> +				     bool add);
>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>> +					     struct i40e_cloud_filter *filter,
>>> +					     bool add);
>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>> +				 enum i40e_admin_queue_opc list_type);
>>> +
>>>
>>> /* i40e_pci_tbl - PCI Device ID Table
>>>  *
>>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>>  **/
>>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>> {
>>> +	enum i40e_admin_queue_err last_aq_status;
>>> +	struct i40e_cloud_filter *cfilter;
>>> 	struct i40e_channel *ch, *ch_tmp;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct hlist_node *node;
>>> 	int ret, i;
>>>
>>> 	/* Reset rss size that was stored when reconfiguring rss for
>>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>> 				 "Failed to reset tx rate for ch->seid %u\n",
>>> 				 ch->seid);
>>>
>>> +		/* delete cloud filters associated with this channel */
>>> +		hlist_for_each_entry_safe(cfilter, node,
>>> +					  &pf->cloud_filter_list, cloud_node) {
>>> +			if (cfilter->seid != ch->seid)
>>> +				continue;
>>> +
>>> +			hash_del(&cfilter->cloud_node);
>>> +			if (cfilter->dst_port)
>>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>>> +									cfilter,
>>> +									false);
>>> +			else
>>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>>> +								false);
>>> +			last_aq_status = pf->hw.aq.asq_last_status;
>>> +			if (ret)
>>> +				dev_info(&pf->pdev->dev,
>>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>>> +					 i40e_stat_str(&pf->hw, ret),
>>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>>> +			kfree(cfilter);
>>> +		}
>>> +
>>> 		/* delete VSI from FW */
>>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>>> 					     NULL);
>>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>>> }
>>>
>>> /**
>>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>>> + * @vsi: ptr to VSI which has PF backing
>>> + * @l4type: true for TCP and false for UDP
>>> + * @port_type: true if port is destination and false if port is source
>>> + *
>>> + * Sets up the switch mode if it needs to be changed, validating that
>>> + * only the allowed modes are requested.
>>> + **/
>>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>>> +					     bool port_type)
>>> +{
>>> +	u8 mode;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct i40e_hw *hw = &pf->hw;
>>> +	int ret;
>>> +
>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>>> +	if (ret)
>>> +		return -EINVAL;
>>> +
>>> +	if (hw->dev_caps.switch_mode) {
>>> +		/* if switch mode is set, support mode2 (non-tunneled for
>>> +		 * cloud filter) for now
>>> +		 */
>>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>>> +							I40E_SWITCH_MODE_MASK;
>>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>>> +				return 0;
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>>> +				hw->dev_caps.switch_mode);
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +
>>> +	/* port_type: true for destination port and false for source port
>>> +	 * For now, supports only destination port type
>>> +	 */
>>> +	if (!port_type) {
>>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* Set Bit 7 to be valid */
>>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>>> +
>>> +	/* Set L4type to both TCP and UDP support */
>>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>>> +
>>> +	/* Set cloud filter mode */
>>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>>> +
>>> +	/* Prep mode field for set_switch_config */
>>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>>> +					pf->last_sw_conf_valid_flags,
>>> +					mode, NULL);
>>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>>> +		dev_err(&pf->pdev->dev,
>>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>>> +			i40e_stat_str(hw, ret),
>>> +			i40e_aq_str(hw,
>>> +				    hw->aq.asq_last_status));
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>>  * i40e_create_queue_channel - function to create channel
>>>  * @vsi: VSI to be configured
>>>  * @ch: ptr to channel (it contains channel specific params)
>>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>>> 	return ret;
>>> }
>>>
>>> +/**
>>> + * i40e_set_cld_element - sets cloud filter element data
>>> + * @filter: cloud filter rule
>>> + * @cld: ptr to cloud filter element data
>>> + *
>>> + * This is a helper function to copy data into the cloud filter element
>>> + **/
>>> +static inline void
>>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>>> +{
>>> +	int i, j;
>>> +	u32 ipa;
>>> +
>>> +	memset(cld, 0, sizeof(*cld));
>>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>>> +
>>> +	if (filter->ip_version == IPV6_VERSION) {
>>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>>> +			ipa = cpu_to_le32(ipa);
>>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>>> +		}
>>> +	} else {
>>> +		ipa = be32_to_cpu(filter->dst_ip);
>>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>>> +	}
>>> +
>>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>>> +
>>> +	/* tenant_id is not supported by FW now, once the support is enabled
>>> +	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>>> +	 */
>>> +	if (filter->tenant_id)
>>> +		return;
>>> +}
>>> +
>>> +/**
>>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>>> + * @vsi: pointer to VSI
>>> + * @filter: cloud filter rule
>>> + * @add: if true, add, if false, delete
>>> + *
>>> + * Add or delete a cloud filter for a specific flow spec.
>>> + * Returns 0 if the filter was successfully added or deleted.
>>> + **/
>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>> +				     struct i40e_cloud_filter *filter, bool add)
>>> +{
>>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int ret;
>>> +	static const u16 flag_table[128] = {
>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>>> +	};
>>> +
>>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>>> +		return I40E_ERR_CONFIG;
>>> +
>>> +	/* copy element needed to add cloud filter from filter */
>>> +	i40e_set_cld_element(filter, &cld_filter);
>>> +
>>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>>> +
>>> +	if (filter->ip_version == IPV6_VERSION)
>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>> +	else
>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>> +
>>> +	if (add)
>>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>>> +						&cld_filter, 1);
>>> +	else
>>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>>> +						&cld_filter, 1);
>>> +	if (ret)
>>> +		dev_dbg(&pf->pdev->dev,
>>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>>> +			add ? "add" : "delete", ntohs(filter->dst_port), ret,
>>> +			pf->hw.aq.asq_last_status);
>>> +	else
>>> +		dev_info(&pf->pdev->dev,
>>> +			 "%s cloud filter for VSI: %d\n",
>>> +			 add ? "Added" : "Deleted", filter->seid);
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>>> + * @vsi: pointer to VSI
>>> + * @filter: cloud filter rule
>>> + * @add: if true, add, if false, delete
>>> + *
>>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>>> + * Returns 0 if the filter was successfully added or deleted.
>>> + **/
>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>> +					     struct i40e_cloud_filter *filter,
>>> +					     bool add)
>>> +{
>>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int ret;
>>> +
>>> +	/* Both (Outer/Inner) valid mac_addr are not supported */
>>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>>> +	    is_valid_ether_addr(filter->src_mac))
>>> +		return -EINVAL;
>>> +
>>> +	/* Make sure port is specified, otherwise bail out, for channel
>>> +	 * specific cloud filter needs 'L4 port' to be non-zero
>>> +	 */
>>> +	if (!filter->dst_port)
>>> +		return -EINVAL;
>>> +
>>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>>> +	if (filter->src_port || filter->src_ip ||
>>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>>> +		return -EINVAL;
>>> +
>>> +	/* copy element needed to add cloud filter from filter */
>>> +	i40e_set_cld_element(filter, &cld_filter.element);
>>> +
>>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>>> +	    is_valid_ether_addr(filter->src_mac) ||
>>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>>> +	    is_multicast_ether_addr(filter->src_mac)) {
>>> +		/* MAC + IP : unsupported mode */
>>> +		if (filter->dst_ip)
>>> +			return -EINVAL;
>>> +
>>> +		/* since we validated that L4 port must be valid before
>>> +		 * we get here, start with respective "flags" value
>>> +		 * and update if vlan is present or not
>>> +		 */
>>> +		cld_filter.element.flags =
>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>>> +
>>> +		if (filter->vlan_id) {
>>> +			cld_filter.element.flags =
>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>>> +		}
>>> +
>>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>>> +		cld_filter.element.flags =
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>>> +		if (filter->ip_version == IPV6_VERSION)
>>> +			cld_filter.element.flags |=
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>> +		else
>>> +			cld_filter.element.flags |=
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>> +	} else {
>>> +		dev_err(&pf->pdev->dev,
>>> +			"either mac or ip has to be valid for cloud filter\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>>> +						be16_to_cpu(filter->dst_port);
>>> +
>>> +	if (add) {
>>> +		bool proto_type, port_type;
>>> +
>>> +		proto_type = (filter->ip_proto == IPPROTO_TCP) ? true : false;
>>> +		port_type = (filter->port_type & I40E_CLOUD_FILTER_PORT_DEST) ?
>>> +			     true : false;
>>> +
>>> +		/* For now, src port based cloud filter for channel is not
>>> +		 * supported
>>> +		 */
>>> +		if (!port_type) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"unsupported port type (src port)\n");
>>> +			return -EOPNOTSUPP;
>>> +		}
>>> +
>>> +		/* Validate current device switch mode, change if necessary */
>>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>>> +							port_type);
>>> +		if (ret) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"failed to set switch mode, ret %d\n",
>>> +				ret);
>>> +			return ret;
>>> +		}
>>> +
>>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>>> +						   &cld_filter, 1);
>>> +	} else {
>>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>>> +						   &cld_filter, 1);
>>> +	}
>>> +
>>> +	if (ret)
>>> +		dev_dbg(&pf->pdev->dev,
>>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>>> +	else
>>> +		dev_info(&pf->pdev->dev,
>>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>>> +			 add ? "Added" : "Deleted", filter->seid,
>>> +			 ntohs(filter->dst_port));
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>>> + * @vsi: Pointer to VSI
>>> + * @f: Pointer to struct tc_cls_flower_offload
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>>> +				 struct tc_cls_flower_offload *f,
>>> +				 struct i40e_cloud_filter *filter)
>>> +{
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	u16 addr_type = 0;
>>> +	u8 field_flags = 0;
>>> +
>>> +	if (f->dissector->used_keys &
>>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>>> +			f->dissector->used_keys);
>>> +		return -EOPNOTSUPP;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>>> +		struct flow_dissector_key_keyid *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>> +						  f->key);
>>> +
>>> +		struct flow_dissector_key_keyid *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>> +						  f->mask);
>>> +
>>> +		if (mask->keyid != 0)
>>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>>> +
>>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>>> +		struct flow_dissector_key_basic *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_BASIC,
>>> +						  f->key);
>>> +
>>> +		filter->ip_proto = key->ip_proto;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>>> +		struct flow_dissector_key_eth_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>> +						  f->key);
>>> +
>>> +		struct flow_dissector_key_eth_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		/* use is_broadcast and is_zero to check for all 0xf or 0 */
>>> +		if (!is_zero_ether_addr(mask->dst)) {
>>> +			if (is_broadcast_ether_addr(mask->dst)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>>> +					mask->dst);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (!is_zero_ether_addr(mask->src)) {
>>> +			if (is_broadcast_ether_addr(mask->src)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>>> +					mask->src);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +		ether_addr_copy(filter->dst_mac, key->dst);
>>> +		ether_addr_copy(filter->src_mac, key->src);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>>> +		struct flow_dissector_key_vlan *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>> +						  f->key);
>>> +		struct flow_dissector_key_vlan *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>> +						  f->mask);
>>> +
>>> +		if (mask->vlan_id) {
>>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>>> +
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>>> +					mask->vlan_id);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>>> +		struct flow_dissector_key_control *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>>> +						  f->key);
>>> +
>>> +		addr_type = key->addr_type;
>>> +	}
>>> +
>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>>> +		struct flow_dissector_key_ipv4_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ipv4_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		if (mask->dst) {
>>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>>> +					be32_to_cpu(mask->dst));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (mask->src) {
>>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>>> +					be32_to_cpu(mask->src));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>>> +			return I40E_ERR_CONFIG;
>>> +		}
>>> +		filter->dst_ip = key->dst;
>>> +		filter->src_ip = key->src;
>>> +		filter->ip_version = IPV4_VERSION;
>>> +	}
>>> +
>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>>> +		struct flow_dissector_key_ipv6_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ipv6_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		/* src and dest IPV6 address should not be LOOPBACK
>>> +		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
>>> +		 */
>>> +		if (ipv6_addr_loopback(&key->dst) ||
>>> +		    ipv6_addr_loopback(&key->src)) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Bad ipv6, addr is LOOPBACK\n");
>>> +			return I40E_ERR_CONFIG;
>>> +		}
>>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +
>>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>>> +		       sizeof(filter->src_ipv6));
>>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>>> +		       sizeof(filter->dst_ipv6));
>>> +
>>> +		/* mark it as IPv6 filter, to be used later */
>>> +		filter->ip_version = IPV6_VERSION;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>>> +		struct flow_dissector_key_ports *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ports *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>> +						  f->mask);
>>> +
>>> +		if (mask->src) {
>>> +			if (mask->src == cpu_to_be16(0xffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>>> +					be16_to_cpu(mask->src));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (mask->dst) {
>>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>>> +					be16_to_cpu(mask->dst));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		filter->dst_port = key->dst;
>>> +		filter->src_port = key->src;
>>> +
>>> +		/* For now, only supports destination port */
>>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>>> +
>>> +		switch (filter->ip_proto) {
>>> +		case IPPROTO_TCP:
>>> +		case IPPROTO_UDP:
>>> +			break;
>>> +		default:
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Only UDP and TCP transport are supported\n");
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +	filter->flags = field_flags;
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * i40e_handle_redirect_action - Forward to a traffic class on the device
>>> + * @vsi: Pointer to VSI
>>> + * @ifindex: ifindex of the device to forward to
>>> + * @tc: traffic class index on the device
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>>> +				       struct i40e_cloud_filter *filter)
>>> +{
>>> +	struct i40e_channel *ch, *ch_tmp;
>>> +
>>> +	/* redirect to a traffic class on the same device */
>>> +	if (vsi->netdev->ifindex == ifindex) {
>>> +		if (tc == 0) {
>>> +			filter->seid = vsi->seid;
>>> +			return 0;
>>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>>> +			if (!filter->dst_port) {
>>> +				dev_err(&vsi->back->pdev->dev,
>>> +					"Specify destination port to redirect to traffic class that is not default\n");
>>> +				return -EINVAL;
>>> +			}
>>> +			if (list_empty(&vsi->ch_list))
>>> +				return -EINVAL;
>>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>>> +						 list) {
>>> +				if (ch->seid == vsi->tc_seid_map[tc])
>>> +					filter->seid = ch->seid;
>>> +			}
>>> +			return 0;
>>> +		}
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_parse_tc_actions - Parse tc actions
>>> + * @vsi: Pointer to VSI
>>> + * @exts: Pointer to struct tcf_exts holding the tc actions
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>>> +				 struct i40e_cloud_filter *filter)
>>> +{
>>> +	const struct tc_action *a;
>>> +	LIST_HEAD(actions);
>>> +	int err;
>>> +
>>> +	if (!tcf_exts_has_actions(exts))
>>> +		return -EINVAL;
>>> +
>>> +	tcf_exts_to_list(exts, &actions);
>>> +	list_for_each_entry(a, &actions, list) {
>>> +		/* Drop action */
>>> +		if (is_tcf_gact_shot(a)) {
>>> +			dev_err(&vsi->back->pdev->dev,
>>> +				"Cloud filters do not support the drop action.\n");
>>> +			return -EOPNOTSUPP;
>>> +		}
>>> +
>>> +		/* Redirect to a traffic class on the same device */
>>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>>> +			int ifindex = tcf_mirred_ifindex(a);
>>> +			u8 tc = tcf_mirred_tc(a);
>>> +
>>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>>> +							  filter);
>>> +			if (err == 0)
>>> +				return err;
>>> +		}
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_configure_clsflower - Configure tc flower filters
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + *
>>> + **/
>>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>>> +				    struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int err = 0;
>>> +
>>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>>> +		return -EBUSY;
>>> +
>>> +	if (pf->fdir_pf_active_filters ||
>>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>>> +		dev_err(&vsi->back->pdev->dev,
>>> +			"Flow Director Sideband filters exist; turn ntuple off to configure cloud filters\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>>> +		dev_err(&vsi->back->pdev->dev,
>>> +			"Disabling Flow Director Sideband; configuring cloud filters via tc-flower\n");
>>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +	}
>>> +
>>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>>> +	if (!filter)
>>> +		return -ENOMEM;
>>> +
>>> +	filter->cookie = cls_flower->cookie;
>>> +
>>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>>> +	if (err < 0)
>>> +		goto err;
>>> +
>>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>>> +	if (err < 0)
>>> +		goto err;
>>> +
>>> +	/* Add cloud filter */
>>> +	if (filter->dst_port)
>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>>> +	else
>>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>>> +
>>> +	if (err) {
>>> +		dev_err(&pf->pdev->dev,
>>> +			"Failed to add cloud filter, err %s\n",
>>> +			i40e_stat_str(&pf->hw, err));
>>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>> +		goto err;
>>> +	}
>>> +
>>> +	/* add filter to the ordered list */
>>> +	INIT_HLIST_NODE(&filter->cloud_node);
>>> +
>>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>>> +
>>> +	pf->num_cloud_filters++;
>>> +
>>> +	return err;
>>> +err:
>>> +	kfree(filter);
>>> +	return err;
>>> +}
>>> +
>>> +/**
>>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>>> + * @vsi: Pointer to VSI
>>> + * @cookie: filter specific cookie
>>> + *
>>> + **/
>>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>>> +							unsigned long *cookie)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct hlist_node *node2;
>>> +
>>> +	hlist_for_each_entry_safe(filter, node2,
>>> +				  &vsi->back->cloud_filter_list, cloud_node)
>>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>>> +			return filter;
>>> +	return NULL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_delete_clsflower - Remove tc flower filters
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + *
>>> + **/
>>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>>> +				 struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int err = 0;
>>> +
>>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>>> +
>>> +	if (!filter)
>>> +		return -EINVAL;
>>> +
>>> +	hash_del(&filter->cloud_node);
>>> +
>>> +	if (filter->dst_port)
>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>>> +	else
>>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>>> +	if (err) {
>>> +		kfree(filter);
>>> +		dev_err(&pf->pdev->dev,
>>> +			"Failed to delete cloud filter, err %s\n",
>>> +			i40e_stat_str(&pf->hw, err));
>>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>> +	}
>>> +
>>> +	kfree(filter);
>>> +	pf->num_cloud_filters--;
>>> +
>>> +	if (!pf->num_cloud_filters &&
>>> +	    (pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>>> + * @netdev: net device to configure
>>> + * @type_data: offload data
>>> + **/
>>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>>> +				    struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>>> +	struct i40e_vsi *vsi = np->vsi;
>>> +
>>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>>> +	    cls_flower->common.chain_index)
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	switch (cls_flower->command) {
>>> +	case TC_CLSFLOWER_REPLACE:
>>> +		return i40e_configure_clsflower(vsi, cls_flower);
>>> +	case TC_CLSFLOWER_DESTROY:
>>> +		return i40e_delete_clsflower(vsi, cls_flower);
>>> +	case TC_CLSFLOWER_STATS:
>>> +		return -EOPNOTSUPP;
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +}
>>> +
>>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>>> 			   void *type_data)
>>> {
>>> -	if (type != TC_SETUP_MQPRIO)
>>> +	switch (type) {
>>> +	case TC_SETUP_MQPRIO:
>>> +		return i40e_setup_tc(netdev, type_data);
>>> +	case TC_SETUP_CLSFLOWER:
>>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>>> +	default:
>>> 		return -EOPNOTSUPP;
>>> -
>>> -	return i40e_setup_tc(netdev, type_data);
>>> +	}
>>> }
>>>
>>> /**
>>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>>> 		kfree(cfilter);
>>> 	}
>>> 	pf->num_cloud_filters = 0;
>>> +
>>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +	}
>>> }
>>>
>>> /**
>>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>>  * i40e_get_capabilities - get info about the HW
>>>  * @pf: the PF struct
>>>  **/
>>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>> +				 enum i40e_admin_queue_opc list_type)
>>> {
>>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>>> 	u16 data_size;
>>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>
>>> 		/* this loads the data into the hw struct for us */
>>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>>> -					    &data_size,
>>> -					    i40e_aqc_opc_list_func_capabilities,
>>> -					    NULL);
>>> +						    &data_size, list_type,
>>> +						    NULL);
>>> 		/* data loaded, buffer no longer needed */
>>> 		kfree(cap_buf);
>>>
>>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>> 		}
>>> 	} while (err);
>>>
>>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>>> -		dev_info(&pf->pdev->dev,
>>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>> -			 pf->hw.func_caps.num_msix_vectors,
>>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>>> -			 pf->hw.func_caps.fd_filters_best_effort,
>>> -			 pf->hw.func_caps.num_tx_qp,
>>> -			 pf->hw.func_caps.num_vsis);
>>> -
>>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>> +				 pf->hw.func_caps.num_msix_vectors,
>>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>>> +				 pf->hw.func_caps.fd_filters_best_effort,
>>> +				 pf->hw.func_caps.num_tx_qp,
>>> +				 pf->hw.func_caps.num_vsis);
>>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>>> +				 pf->hw.dev_caps.switch_mode,
>>> +				 pf->hw.dev_caps.valid_functions);
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "SR-IOV=%d, num_vfs for all function=%u\n",
>>> +				 pf->hw.dev_caps.sr_iov_1_1,
>>> +				 pf->hw.dev_caps.num_vfs);
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "num_vsis=%u, num_rx=%u, num_tx=%u\n",
>>> +				 pf->hw.dev_caps.num_vsis,
>>> +				 pf->hw.dev_caps.num_rx_qp,
>>> +				 pf->hw.dev_caps.num_tx_qp);
>>> +		}
>>> +	}
>>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>>> 		       + pf->hw.func_caps.num_vfs)
>>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>>> -		dev_info(&pf->pdev->dev,
>>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>> +		if (pf->hw.revision_id == 0 &&
>>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>> +		}
>>> 	}
>>> -
>>> 	return 0;
>>> }
>>>
>>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>>> 		if (!vsi) {
>>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 			return;
>>> 		}
>>> 	}
>>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>>> }
>>>
>>> /**
>>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>>> + * @vsi: PF main vsi
>>> + * @seid: seid of main or channel VSIs
>>> + *
>>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>>> + * existed before reset
>>> + **/
>>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>>> +{
>>> +	struct i40e_cloud_filter *cfilter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct hlist_node *node;
>>> +	i40e_status ret;
>>> +
>>> +	/* Add cloud filters back if they exist */
>>> +	if (hlist_empty(&pf->cloud_filter_list))
>>> +		return 0;
>>> +
>>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>>> +				  cloud_node) {
>>> +		if (cfilter->seid != seid)
>>> +			continue;
>>> +
>>> +		if (cfilter->dst_port)
>>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>>> +								true);
>>> +		else
>>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>>> +
>>> +		if (ret) {
>>> +			dev_dbg(&pf->pdev->dev,
>>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>>> +				i40e_stat_str(&pf->hw, ret),
>>> +				i40e_aq_str(&pf->hw,
>>> +					    pf->hw.aq.asq_last_status));
>>> +			return ret;
>>> +		}
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>>  * @vsi: PF main vsi
>>>  *
>>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>>> 						I40E_BW_CREDIT_DIVISOR,
>>> 				ch->seid);
>>> 		}
>>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>>> +		if (ret) {
>>> +			dev_dbg(&vsi->back->pdev->dev,
>>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>>> +				ch->seid);
>>> +			return ret;
>>> +		}
>>> 	}
>>> 	return 0;
>>> }
>>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>> 		i40e_verify_eeprom(pf);
>>>
>>> 	i40e_clear_pxe_mode(hw);
>>> -	ret = i40e_get_capabilities(pf);
>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>> 	if (ret)
>>> 		goto end_core_reset;
>>>
>>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>> 			goto end_unlock;
>>> 	}
>>>
>>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>>> +	if (ret)
>>> +		goto end_unlock;
>>> +
>>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>>> 	 * for this main VSI if they exist
>>> 	 */
>>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>>> 	    (pf->num_fdsb_msix == 0)) {
>>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	}
>>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>>> 	    (pf->num_vmdq_msix == 0)) {
>>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>>> 				       I40E_FLAG_FD_SB_ENABLED	|
>>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>>> 				       I40E_FLAG_VMDQ_ENABLED);
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>
>>> 			/* rework the queue expectations without MSIX */
>>> 			i40e_determine_queue_usage(pf);
>>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>> 		/* Enable filters and mark for reset */
>>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>>> 			need_reset = true;
>>> -		/* enable FD_SB only if there is MSI-X vector */
>>> -		if (pf->num_fdsb_msix > 0)
>>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>>> +		 * filters exist
>>> +		 */
>>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +		}
>>> 	} else {
>>> 		/* turn off filters, mark for reset and clear SW filter list */
>>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>> 		}
>>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> +
>>> 		/* reset fd counters */
>>> 		pf->fd_add_err = 0;
>>> 		pf->fd_atr_cnt = 0;
>>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>>> 	hw_features = hw_enc_features		|
>>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>>> +		      NETIF_F_HW_TC;
>>>
>>> 	netdev->hw_features |= hw_features;
>>>
>>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>> 	*/
>>>
>>> 	if ((pf->hw.pf_id == 0) &&
>>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>>> +		pf->last_sw_conf_flags = flags;
>>> +	}
>>>
>>> 	if (pf->hw.pf_id == 0) {
>>> 		u16 valid_flags;
>>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>> 					     pf->hw.aq.asq_last_status));
>>> 			/* not a fatal problem, just keep going */
>>> 		}
>>> +		pf->last_sw_conf_valid_flags = valid_flags;
>>> 	}
>>>
>>> 	/* first time setup */
>>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			       I40E_FLAG_DCB_ENABLED	|
>>> 			       I40E_FLAG_SRIOV_ENABLED	|
>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>>> 				  I40E_FLAG_FD_SB_ENABLED |
>>> 				  I40E_FLAG_FD_ATR_ENABLED |
>>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>>> 			       I40E_FLAG_DCB_ENABLED	|
>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	} else {
>>> 		/* Not enough queues for all TCs */
>>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			queues_left -= 1; /* save 1 queue for FD */
>>> 		} else {
>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>>> 		}
>>> 	}
>>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>>
>>> 	i40e_clear_pxe_mode(hw);
>>> -	err = i40e_get_capabilities(pf);
>>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>> 	if (err)
>>> 		goto err_adminq_setup;
>>>
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> index 92869f5..3bb6659 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>>> 		struct i40e_asq_cmd_details *cmd_details);
>>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>>> 				   struct i40e_asq_cmd_details *cmd_details);
>>> +i40e_status
>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count);
>>> +enum i40e_status_code
>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count);
>>> +enum i40e_status_code
>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count);
>>> +i40e_status
>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count);
>>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>>> 			       struct i40e_lldp_variables *lldp_cfg);
>>> /* i40e_common */
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> index c019f46..af38881 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>>> +#define I40E_SWITCH_MODE_MASK		0xF
>>>
>>> 	u32  management_mode;
>>> 	u32  mng_protocols_over_mctp;
>>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> index b8c78bf..4fe27f0 100644
>>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>> 		struct {
>>> 			u8 data[16];
>>> 		} v6;
>>> +		struct {
>>> +			__le16 data[8];
>>> +		} raw_v6;
>>> 	} ipaddr;
>>> 	__le16	flags;
>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [Intel-wired-lan] [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
@ 2017-09-28 19:22         ` Nambiar, Amritha
  0 siblings, 0 replies; 30+ messages in thread
From: Nambiar, Amritha @ 2017-09-28 19:22 UTC (permalink / raw)
  To: intel-wired-lan

On 9/14/2017 1:00 AM, Nambiar, Amritha wrote:
> On 9/13/2017 6:26 AM, Jiri Pirko wrote:
>> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar at intel.com wrote:
>>> This patch enables tc-flower based hardware offloads. tc flower
>>> filter provided by the kernel is configured as driver specific
>>> cloud filter. The patch implements functions and admin queue
>>> commands needed to support cloud filters in the driver and
>>> adds cloud filters to configure these tc-flower filters.
>>>
>>> The only action supported is to redirect packets to a traffic class
>>> on the same device.
>>
>> So basically you are not doing redirect, you are just setting tclass for
>> matched packets, right? Why you use mirred for this? I think that
>> you might consider extending g_act for that:
>>
>> # tc filter add dev eth0 protocol ip ingress \
>>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>>   action tclass 0
>>
> Yes, this doesn't work like a typical egress redirect; it is aimed at
> forwarding the matched packets to a different queue-group/traffic class
> on the same device, so it is a sort of ingress redirect in the hardware.
> As you say, I possibly may not need the mirred redirect; I'll look into
> the g_act way of doing this with a new gact tc action.
> 

I was looking at introducing a new gact tclass action to TC. In the HW
offload path, this sets a traffic class value for certain matched
packets so they will be processed in a queue belonging to the traffic class.

# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw\
  action tclass 2

But I'm having trouble defining what this action means in the kernel
datapath. For ingress, the action could simply take the default path and
do nothing, having meaning only in the HW offload path. For egress,
certain qdiscs like 'multiq' and 'prio' could use this 'tclass' value
for band selection, but the 'mqprio' qdisc selects the traffic class
based on the skb priority in netdev_pick_tx(), so what would this action
mean for the 'mqprio' qdisc?
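
For reference, the mqprio selection mentioned above reduces to a
priority-to-TC map plus a per-TC queue range. A minimal sketch of that
lookup, with illustrative names rather than the kernel's actual
structures:

```python
# Hedged sketch of mqprio-style tx queue selection: skb->priority is
# mapped to a traffic class via a prio->TC table, then a queue is
# chosen from that TC's [offset, offset + count) range.  Names are
# illustrative; these are not the kernel's data structures.
def pick_tx_queue(prio_tc_map, tc_to_txq, skb_priority, flow_hash):
    tc = prio_tc_map[skb_priority & 0x0f]  # TC_BITMASK is 15
    offset, count = tc_to_txq[tc]
    return offset + (flow_hash % count)

# Two TCs: priorities 0-3 -> TC0 (queues 0-3), 4-15 -> TC1 (queues 4-7)
prio_tc_map = [0] * 4 + [1] * 12
tc_to_txq = [(0, 4), (4, 4)]
print(pick_tx_queue(prio_tc_map, tc_to_txq, 5, 10))  # -> 6
```

A 'tclass' action on egress would have to override this priority-based
choice, which is exactly the ambiguity raised above.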

It looks like the 'prio' qdisc selects its band based on the 'classid',
so I was thinking of passing the 'classid' through the cls flower filter
and offloading it to HW as the traffic class index. That way the HW
offload and SW fallback paths behave the same, and there would be no
need for a separate tc action.

In HW:
# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_sw classid 1:2

filter pref 2 flower chain 0
filter pref 2 flower chain 0 handle 0x1 classid 1:2
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.3.5
  dst_port 25
  skip_sw
  in_hw

This will be used to route packets to traffic class 2.

In SW:
# tc filter add dev eth0 protocol ip parent ffff:\
  prio 2 flower dst_ip 192.168.3.5/32\
  ip_proto udp dst_port 25 skip_hw classid 1:2

filter pref 2 flower chain 0
filter pref 2 flower chain 0 handle 0x1 classid 1:2
  eth_type ipv4
  ip_proto udp
  dst_ip 192.168.3.5
  dst_port 25
  skip_hw
  not_in_hw

>>
>>>
>>> # tc qdisc add dev eth0 ingress
>>> # ethtool -K eth0 hw-tc-offload on
>>>
>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 0
>>>
>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>  prio 2 flower dst_ip 192.168.3.5/32\
>>>  ip_proto udp dst_port 25 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 1
>>>
>>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>>  prio 3 flower dst_ip fe8::200:1\
>>>  ip_proto udp dst_port 66 skip_sw\
>>>  action mirred ingress redirect dev eth0 tclass 1
>>>
>>> Delete tc flower filter:
>>> Example:
>>>
>>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>>> # tc filter del dev eth0 parent ffff:
>>>
>>> Flow Director Sideband is disabled when cloud filters are configured
>>> via tc-flower and stays disabled as long as any cloud filter exists.
>>>
>>> Matches that are unsupported when cloud filters are added using the
>>> enhanced big buffer cloud filter mode of the underlying switch:
>>> 1. Source port and source IP
>>> 2. Combined MAC address and IP fields
>>> 3. Not specifying an L4 port
>>>
>>> These filter matches can however be used to redirect traffic to
>>> the main VSI (tc 0) which does not require the enhanced big buffer
>>> cloud filter support.
>>>
>>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>>> __be32 array instead of u8 array. Used macro for IP version. Minor
>>> formatting changes.
>>> v2:
>>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>>> 2. Moved dev_info for add/deleting cloud filters in else condition
>>> 3. Fixed some format specifier in dev_err logs
>>> 4. Refactored i40e_get_capabilities to take an additional
>>>   list_type parameter and use it to query device and function
>>>   level capabilities.
>>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>>   to verify if redirect to a traffic class is supported.
>>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>>   function definitions.
>>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>>   updates, separate function to process tc cls flower offloads.
>>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>>   exclusive.
>>>
>>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>> ---
>>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>>> index 6018fb6..b110519 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>>> @@ -55,6 +55,8 @@
>>> #include <linux/net_tstamp.h>
>>> #include <linux/ptp_clock_kernel.h>
>>> #include <net/pkt_cls.h>
>>> +#include <net/tc_act/tc_gact.h>
>>> +#include <net/tc_act/tc_mirred.h>
>>> #include "i40e_type.h"
>>> #include "i40e_prototype.h"
>>> #include "i40e_client.h"
>>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>>> 	u32 fd_id;
>>> };
>>>
>>> +#define IPV4_VERSION 4
>>> +#define IPV6_VERSION 6
>>> +
>>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>>> +#define I40E_CLOUD_FIELD_IIP	0x10
>>> +
>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>>> +						 I40E_CLOUD_FIELD_IVLAN)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>>> +						 I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>>> +						  I40E_CLOUD_FIELD_IMAC | \
>>> +						  I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>>> +						   I40E_CLOUD_FIELD_IVLAN | \
>>> +						   I40E_CLOUD_FIELD_TEN_ID)
>>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>>> +
>>> struct i40e_cloud_filter {
>>> 	struct hlist_node cloud_node;
>>> 	unsigned long cookie;
>>> +	/* cloud filter input set follows */
>>> +	u8 dst_mac[ETH_ALEN];
>>> +	u8 src_mac[ETH_ALEN];
>>> +	__be16 vlan_id;
>>> +	__be32 dst_ip;
>>> +	__be32 src_ip;
>>> +	__be32 dst_ipv6[4];
>>> +	__be32 src_ipv6[4];
>>> +	__be16 dst_port;
>>> +	__be16 src_port;
>>> +	u32 ip_version;
>>> +	u8 ip_proto;	/* IPPROTO value */
>>> +	/* L4 port type: src or destination port */
>>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>>> +	u8 port_type;
>>> +	u32 tenant_id;
>>> +	u8 flags;
>>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>>> +	u8 tunnel_type;
>>> 	u16 seid;	/* filter control */
>>> };
>>>
>>> @@ -491,6 +536,8 @@ struct i40e_pf {
>>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>>
>>> 	struct i40e_client_instance *cinst;
>>> 	bool stat_offsets_loaded;
>>> @@ -573,6 +620,8 @@ struct i40e_pf {
>>> 	u16 phy_led_val;
>>>
>>> 	u16 override_q_count;
>>> +	u16 last_sw_conf_flags;
>>> +	u16 last_sw_conf_valid_flags;
>>> };
>>>
>>> /**
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> index 2e567c2..feb3d42 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>> 		struct {
>>> 			u8 data[16];
>>> 		} v6;
>>> +		struct {
>>> +			__le16 data[8];
>>> +		} raw_v6;
>>> 	} ipaddr;
>>> 	__le16	flags;
>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> index 9567702..d9c9665 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>>
>>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>>> 				   track_id, &offset, &info, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_add_cloud_filters
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to add cloud filters for
>>> + * @filters: Buffer which contains the filters to be added
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Set the cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>> + * of the function.
>>> + *
>>> + **/
>>> +enum i40e_status_code
>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	enum i40e_status_code status;
>>> +	u16 buff_len;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_add_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_add_cloud_filters_bb
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to add cloud filters from
>>> + * @filters: Buffer which contains the filters in big buffer to be added
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>> + * function.
>>> + *
>>> + **/
>>> +i40e_status
>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	i40e_status status;
>>> +	u16 buff_len;
>>> +	int i;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_add_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>> +
>>> +	for (i = 0; i < filter_count; i++) {
>>> +		u16 tnl_type;
>>> +		u32 ti;
>>> +
>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>> +
>>> +		/* For Geneve, the VNI is placed at an offset one byte
>>> +		 * higher than the Tenant ID offset used for the rest of
>>> +		 * the tunnel types.
>>> +		 */
>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>> +		}
>>> +	}
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_rem_cloud_filters
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to remove cloud filters from
>>> + * @filters: Buffer which contains the filters to be removed
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Remove the cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>> + * of the function.
>>> + *
>>> + **/
>>> +enum i40e_status_code
>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	enum i40e_status_code status;
>>> +	u16 buff_len;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> +	return status;
>>> +}
>>> +
>>> +/**
>>> + * i40e_aq_rem_cloud_filters_bb
>>> + * @hw: pointer to the hardware structure
>>> + * @seid: VSI seid to remove cloud filters from
>>> + * @filters: Buffer which contains the filters in big buffer to be removed
>>> + * @filter_count: number of filters contained in the buffer
>>> + *
>>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>> + * function.
>>> + *
>>> + **/
>>> +i40e_status
>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count)
>>> +{
>>> +	struct i40e_aq_desc desc;
>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>> +	i40e_status status;
>>> +	u16 buff_len;
>>> +	int i;
>>> +
>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>> +
>>> +	buff_len = filter_count * sizeof(*filters);
>>> +	desc.datalen = cpu_to_le16(buff_len);
>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>> +	cmd->num_filters = filter_count;
>>> +	cmd->seid = cpu_to_le16(seid);
>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>> +
>>> +	for (i = 0; i < filter_count; i++) {
>>> +		u16 tnl_type;
>>> +		u32 ti;
>>> +
>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>> +
>>> +		/* For Geneve, the VNI is placed at an offset one byte
>>> +		 * higher than the Tenant ID offset used for the rest of
>>> +		 * the tunnel types.
>>> +		 */
>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>> +		}
>>> +	}
>>> +
>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>> +
>>> 	return status;
>>> }
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> index afcf08a..96ee608 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>> +				     struct i40e_cloud_filter *filter,
>>> +				     bool add);
>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>> +					     struct i40e_cloud_filter *filter,
>>> +					     bool add);
>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>> +				 enum i40e_admin_queue_opc list_type);
>>> +
>>>
>>> /* i40e_pci_tbl - PCI Device ID Table
>>>  *
>>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>>  **/
>>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>> {
>>> +	enum i40e_admin_queue_err last_aq_status;
>>> +	struct i40e_cloud_filter *cfilter;
>>> 	struct i40e_channel *ch, *ch_tmp;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct hlist_node *node;
>>> 	int ret, i;
>>>
>>> 	/* Reset rss size that was stored when reconfiguring rss for
>>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>> 				 "Failed to reset tx rate for ch->seid %u\n",
>>> 				 ch->seid);
>>>
>>> +		/* delete cloud filters associated with this channel */
>>> +		hlist_for_each_entry_safe(cfilter, node,
>>> +					  &pf->cloud_filter_list, cloud_node) {
>>> +			if (cfilter->seid != ch->seid)
>>> +				continue;
>>> +
>>> +			hash_del(&cfilter->cloud_node);
>>> +			if (cfilter->dst_port)
>>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>>> +									cfilter,
>>> +									false);
>>> +			else
>>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>>> +								false);
>>> +			last_aq_status = pf->hw.aq.asq_last_status;
>>> +			if (ret)
>>> +				dev_info(&pf->pdev->dev,
>>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>>> +					 i40e_stat_str(&pf->hw, ret),
>>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>>> +			kfree(cfilter);
>>> +		}
>>> +
>>> 		/* delete VSI from FW */
>>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>>> 					     NULL);
>>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>>> }
>>>
>>> /**
>>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>>> + * @vsi: ptr to VSI which has PF backing
>>> + * @l4type: true for TCP and false for UDP
>>> + * @port_type: true if port is destination and false if port is source
>>> + *
>>> + * Sets up the switch mode if it needs to be changed, and validates
>>> + * that the requested mode is allowed.
>>> + **/
>>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>>> +					     bool port_type)
>>> +{
>>> +	u8 mode;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct i40e_hw *hw = &pf->hw;
>>> +	int ret;
>>> +
>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>>> +	if (ret)
>>> +		return -EINVAL;
>>> +
>>> +	if (hw->dev_caps.switch_mode) {
>>> +		/* if switch mode is set, support mode2 (non-tunneled for
>>> +		 * cloud filter) for now
>>> +		 */
>>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>>> +							I40E_SWITCH_MODE_MASK;
>>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>>> +				return 0;
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>>> +				hw->dev_caps.switch_mode);
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +
>>> +	/* port_type: true for destination port and false for source port.
>>> +	 * For now, only the destination port type is supported.
>>> +	 */
>>> +	if (!port_type) {
>>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* Set Bit 7 to be valid */
>>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>>> +
>>> +	/* Set L4type to both TCP and UDP support */
>>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>>> +
>>> +	/* Set cloud filter mode */
>>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>>> +
>>> +	/* Prep mode field for set_switch_config */
>>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>>> +					pf->last_sw_conf_valid_flags,
>>> +					mode, NULL);
>>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>>> +		dev_err(&pf->pdev->dev,
>>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>>> +			i40e_stat_str(hw, ret),
>>> +			i40e_aq_str(hw,
>>> +				    hw->aq.asq_last_status));
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>>  * i40e_create_queue_channel - function to create channel
>>>  * @vsi: VSI to be configured
>>>  * @ch: ptr to channel (it contains channel specific params)
>>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>>> 	return ret;
>>> }
>>>
>>> +/**
>>> + * i40e_set_cld_element - sets cloud filter element data
>>> + * @filter: cloud filter rule
>>> + * @cld: ptr to cloud filter element data
>>> + *
>>> + * Helper function to copy the filter data into a cloud filter element.
>>> + **/
>>> +static inline void
>>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>>> +{
>>> +	int i, j;
>>> +	u32 ipa;
>>> +
>>> +	memset(cld, 0, sizeof(*cld));
>>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>>> +
>>> +	if (filter->ip_version == IPV6_VERSION) {
>>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>>> +			ipa = cpu_to_le32(ipa);
>>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>>> +		}
>>> +	} else {
>>> +		ipa = be32_to_cpu(filter->dst_ip);
>>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>>> +	}
>>> +
>>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>>> +
>>> +	/* tenant_id is not supported by FW yet; once support is enabled,
>>> +	 * fill cld->tenant_id with cpu_to_le32(filter->tenant_id)
>>> +	 */
>>> +	if (filter->tenant_id)
>>> +		return;
>>> +}
>>> +
>>> +/**
>>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>>> + * @vsi: pointer to VSI
>>> + * @filter: cloud filter rule
>>> + * @add: if true, add, if false, delete
>>> + *
>>> + * Add or delete a cloud filter for a specific flow spec.
>>> + * Returns 0 if the filter was successfully added.
>>> + **/
>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>> +				     struct i40e_cloud_filter *filter, bool add)
>>> +{
>>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int ret;
>>> +	static const u16 flag_table[128] = {
>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>>> +	};
>>> +
>>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>>> +		return I40E_ERR_CONFIG;
>>> +
>>> +	/* copy element needed to add cloud filter from filter */
>>> +	i40e_set_cld_element(filter, &cld_filter);
>>> +
>>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>>> +
>>> +	if (filter->ip_version == IPV6_VERSION)
>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>> +	else
>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>> +
>>> +	if (add)
>>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>>> +						&cld_filter, 1);
>>> +	else
>>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>>> +						&cld_filter, 1);
>>> +	if (ret)
>>> +		dev_dbg(&pf->pdev->dev,
>>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>>> +			add ? "add" : "delete", filter->dst_port, ret,
>>> +			pf->hw.aq.asq_last_status);
>>> +	else
>>> +		dev_info(&pf->pdev->dev,
>>> +			 "%s cloud filter for VSI: %d\n",
>>> +			 add ? "Added" : "Deleted", filter->seid);
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>>> + * @vsi: pointer to VSI
>>> + * @filter: cloud filter rule
>>> + * @add: if true, add, if false, delete
>>> + *
>>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>>> + * Returns 0 if the filter was successfully added.
>>> + **/
>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>> +					     struct i40e_cloud_filter *filter,
>>> +					     bool add)
>>> +{
>>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int ret;
>>> +
>>> +	/* Both (Outer/Inner) valid mac_addr are not supported */
>>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>>> +	    is_valid_ether_addr(filter->src_mac))
>>> +		return -EINVAL;
>>> +
>>> +	/* Make sure a port is specified, otherwise bail out; a channel
>>> +	 * specific cloud filter needs the 'L4 port' to be non-zero
>>> +	 */
>>> +	if (!filter->dst_port)
>>> +		return -EINVAL;
>>> +
>>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>>> +	if (filter->src_port || filter->src_ip ||
>>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>>> +		return -EINVAL;
>>> +
>>> +	/* copy element needed to add cloud filter from filter */
>>> +	i40e_set_cld_element(filter, &cld_filter.element);
>>> +
>>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>>> +	    is_valid_ether_addr(filter->src_mac) ||
>>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>>> +	    is_multicast_ether_addr(filter->src_mac)) {
>>> +		/* MAC + IP : unsupported mode */
>>> +		if (filter->dst_ip)
>>> +			return -EINVAL;
>>> +
>>> +		/* since we validated that the L4 port is valid before
>>> +		 * getting here, start with the corresponding "flags" value
>>> +		 * and update it if a VLAN is present
>>> +		 */
>>> +		cld_filter.element.flags =
>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>>> +
>>> +		if (filter->vlan_id) {
>>> +			cld_filter.element.flags =
>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>>> +		}
>>> +
>>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>>> +		cld_filter.element.flags =
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>>> +		if (filter->ip_version == IPV6_VERSION)
>>> +			cld_filter.element.flags |=
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>> +		else
>>> +			cld_filter.element.flags |=
>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>> +	} else {
>>> +		dev_err(&pf->pdev->dev,
>>> +			"either mac or ip has to be valid for cloud filter\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>>> +						be16_to_cpu(filter->dst_port);
>>> +
>>> +	if (add) {
>>> +		bool proto_type, port_type;
>>> +
>>> +		proto_type = filter->ip_proto == IPPROTO_TCP;
>>> +		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
>>> +
>>> +		/* For now, src port based cloud filter for channel is not
>>> +		 * supported
>>> +		 */
>>> +		if (!port_type) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"unsupported port type (src port)\n");
>>> +			return -EOPNOTSUPP;
>>> +		}
>>> +
>>> +		/* Validate current device switch mode, change if necessary */
>>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>>> +							port_type);
>>> +		if (ret) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"failed to set switch mode, ret %d\n",
>>> +				ret);
>>> +			return ret;
>>> +		}
>>> +
>>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>>> +						   &cld_filter, 1);
>>> +	} else {
>>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>>> +						   &cld_filter, 1);
>>> +	}
>>> +
>>> +	if (ret)
>>> +		dev_dbg(&pf->pdev->dev,
>>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>>> +	else
>>> +		dev_info(&pf->pdev->dev,
>>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>>> +			 add ? "Added" : "Deleted", filter->seid,
>>> +			 ntohs(filter->dst_port));
>>> +	return ret;
>>> +}
>>> +
>>> +/**
>>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>>> +				 struct tc_cls_flower_offload *f,
>>> +				 struct i40e_cloud_filter *filter)
>>> +{
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	u16 addr_type = 0;
>>> +	u8 field_flags = 0;
>>> +
>>> +	if (f->dissector->used_keys &
>>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>>> +			f->dissector->used_keys);
>>> +		return -EOPNOTSUPP;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>>> +		struct flow_dissector_key_keyid *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>> +						  f->key);
>>> +
>>> +		struct flow_dissector_key_keyid *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>> +						  f->mask);
>>> +
>>> +		if (mask->keyid != 0)
>>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>>> +
>>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>>> +		struct flow_dissector_key_basic *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_BASIC,
>>> +						  f->key);
>>> +
>>> +		filter->ip_proto = key->ip_proto;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>>> +		struct flow_dissector_key_eth_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>> +						  f->key);
>>> +
>>> +		struct flow_dissector_key_eth_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		/* use is_broadcast and is_zero to check for all 0xf or 0 */
>>> +		if (!is_zero_ether_addr(mask->dst)) {
>>> +			if (is_broadcast_ether_addr(mask->dst)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>>> +					mask->dst);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (!is_zero_ether_addr(mask->src)) {
>>> +			if (is_broadcast_ether_addr(mask->src)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>>> +					mask->src);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +		ether_addr_copy(filter->dst_mac, key->dst);
>>> +		ether_addr_copy(filter->src_mac, key->src);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>>> +		struct flow_dissector_key_vlan *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>> +						  f->key);
>>> +		struct flow_dissector_key_vlan *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>> +						  f->mask);
>>> +
>>> +		if (mask->vlan_id) {
>>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>>> +
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>>> +					mask->vlan_id);
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>>> +		struct flow_dissector_key_control *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>>> +						  f->key);
>>> +
>>> +		addr_type = key->addr_type;
>>> +	}
>>> +
>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>>> +		struct flow_dissector_key_ipv4_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ipv4_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		if (mask->dst) {
>>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>>> +					be32_to_cpu(mask->dst));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (mask->src) {
>>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>>> +					be32_to_cpu(mask->src));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>>> +			return I40E_ERR_CONFIG;
>>> +		}
>>> +		filter->dst_ip = key->dst;
>>> +		filter->src_ip = key->src;
>>> +		filter->ip_version = IPV4_VERSION;
>>> +	}
>>> +
>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>>> +		struct flow_dissector_key_ipv6_addrs *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ipv6_addrs *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>> +						  f->mask);
>>> +
>>> +		/* src and dest IPv6 addresses should not be the LOOPBACK
>>> +		 * address (0:0:0:0:0:0:0:1), which can be written as ::1
>>> +		 */
>>> +		if (ipv6_addr_loopback(&key->dst) ||
>>> +		    ipv6_addr_loopback(&key->src)) {
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Bad ipv6, addr is LOOPBACK\n");
>>> +			return I40E_ERR_CONFIG;
>>> +		}
>>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +
>>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>>> +		       sizeof(filter->src_ipv6));
>>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>>> +		       sizeof(filter->dst_ipv6));
>>> +
>>> +		/* mark it as IPv6 filter, to be used later */
>>> +		filter->ip_version = IPV6_VERSION;
>>> +	}
>>> +
>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>>> +		struct flow_dissector_key_ports *key =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>> +						  f->key);
>>> +		struct flow_dissector_key_ports *mask =
>>> +			skb_flow_dissector_target(f->dissector,
>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>> +						  f->mask);
>>> +
>>> +		if (mask->src) {
>>> +			if (mask->src == cpu_to_be16(0xffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>>> +					be16_to_cpu(mask->src));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		if (mask->dst) {
>>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>> +			} else {
>>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>>> +					be16_to_cpu(mask->dst));
>>> +				return I40E_ERR_CONFIG;
>>> +			}
>>> +		}
>>> +
>>> +		filter->dst_port = key->dst;
>>> +		filter->src_port = key->src;
>>> +
>>> +		/* For now, only the destination port is supported */
>>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>>> +
>>> +		switch (filter->ip_proto) {
>>> +		case IPPROTO_TCP:
>>> +		case IPPROTO_UDP:
>>> +			break;
>>> +		default:
>>> +			dev_err(&pf->pdev->dev,
>>> +				"Only UDP and TCP transport are supported\n");
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +	filter->flags = field_flags;
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * i40e_handle_redirect_action - Forward to a traffic class on the device
>>> + * @vsi: Pointer to VSI
>>> + * @ifindex: ifindex of the device to forward to
>>> + * @tc: traffic class index on the device
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>>> +				       struct i40e_cloud_filter *filter)
>>> +{
>>> +	struct i40e_channel *ch, *ch_tmp;
>>> +
>>> +	/* redirect to a traffic class on the same device */
>>> +	if (vsi->netdev->ifindex == ifindex) {
>>> +		if (tc == 0) {
>>> +			filter->seid = vsi->seid;
>>> +			return 0;
>>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>>> +			if (!filter->dst_port) {
>>> +				dev_err(&vsi->back->pdev->dev,
>>> +					"Specify destination port to redirect to traffic class that is not default\n");
>>> +				return -EINVAL;
>>> +			}
>>> +			if (list_empty(&vsi->ch_list))
>>> +				return -EINVAL;
>>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>>> +						 list) {
>>> +				if (ch->seid == vsi->tc_seid_map[tc])
>>> +					filter->seid = ch->seid;
>>> +			}
>>> +			return 0;
>>> +		}
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_parse_tc_actions - Parse tc actions
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + * @filter: Pointer to cloud filter structure
>>> + *
>>> + **/
>>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>>> +				 struct i40e_cloud_filter *filter)
>>> +{
>>> +	const struct tc_action *a;
>>> +	LIST_HEAD(actions);
>>> +	int err;
>>> +
>>> +	if (!tcf_exts_has_actions(exts))
>>> +		return -EINVAL;
>>> +
>>> +	tcf_exts_to_list(exts, &actions);
>>> +	list_for_each_entry(a, &actions, list) {
>>> +		/* Drop action */
>>> +		if (is_tcf_gact_shot(a)) {
>>> +			dev_err(&vsi->back->pdev->dev,
>>> +				"Cloud filters do not support the drop action.\n");
>>> +			return -EOPNOTSUPP;
>>> +		}
>>> +
>>> +		/* Redirect to a traffic class on the same device */
>>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>>> +			int ifindex = tcf_mirred_ifindex(a);
>>> +			u8 tc = tcf_mirred_tc(a);
>>> +
>>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>>> +							  filter);
>>> +			if (err == 0)
>>> +				return err;
>>> +		}
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_configure_clsflower - Configure tc flower filters
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + *
>>> + **/
>>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>>> +				    struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int err = 0;
>>> +
>>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>>> +		return -EBUSY;
>>> +
>>> +	if (pf->fdir_pf_active_filters ||
>>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>>> +		dev_err(&vsi->back->pdev->dev,
>>> +			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>>> +		dev_err(&vsi->back->pdev->dev,
>>> +			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +	}
>>> +
>>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>>> +	if (!filter)
>>> +		return -ENOMEM;
>>> +
>>> +	filter->cookie = cls_flower->cookie;
>>> +
>>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>>> +	if (err < 0)
>>> +		goto err;
>>> +
>>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>>> +	if (err < 0)
>>> +		goto err;
>>> +
>>> +	/* Add cloud filter */
>>> +	if (filter->dst_port)
>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>>> +	else
>>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>>> +
>>> +	if (err) {
>>> +		dev_err(&pf->pdev->dev,
>>> +			"Failed to add cloud filter, err %s\n",
>>> +			i40e_stat_str(&pf->hw, err));
>>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>> +		goto err;
>>> +	}
>>> +
>>> +	/* add filter to the ordered list */
>>> +	INIT_HLIST_NODE(&filter->cloud_node);
>>> +
>>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>>> +
>>> +	pf->num_cloud_filters++;
>>> +
>>> +	return err;
>>> +err:
>>> +	kfree(filter);
>>> +	return err;
>>> +}
>>> +
>>> +/**
>>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>>> + * @vsi: Pointer to VSI
>>> + * @cookie: filter specific cookie
>>> + *
>>> + **/
>>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>>> +							unsigned long *cookie)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct hlist_node *node2;
>>> +
>>> +	hlist_for_each_entry_safe(filter, node2,
>>> +				  &vsi->back->cloud_filter_list, cloud_node)
>>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>>> +			return filter;
>>> +	return NULL;
>>> +}
>>> +
>>> +/**
>>> + * i40e_delete_clsflower - Remove tc flower filters
>>> + * @vsi: Pointer to VSI
>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>> + *
>>> + **/
>>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>>> +				 struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_cloud_filter *filter = NULL;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	int err = 0;
>>> +
>>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>>> +
>>> +	if (!filter)
>>> +		return -EINVAL;
>>> +
>>> +	hash_del(&filter->cloud_node);
>>> +
>>> +	if (filter->dst_port)
>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>>> +	else
>>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>>> +	if (err) {
>>> +		kfree(filter);
>>> +		dev_err(&pf->pdev->dev,
>>> +			"Failed to delete cloud filter, err %s\n",
>>> +			i40e_stat_str(&pf->hw, err));
>>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>> +	}
>>> +
>>> +	kfree(filter);
>>> +	pf->num_cloud_filters--;
>>> +
>>> +	if (!pf->num_cloud_filters)
>>> +		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>> +		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>> +			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +		}
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>>> + * @netdev: net device to configure
>>> + * @cls_flower: pointer to struct tc_cls_flower_offload
>>> + **/
>>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>>> +				    struct tc_cls_flower_offload *cls_flower)
>>> +{
>>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>>> +	struct i40e_vsi *vsi = np->vsi;
>>> +
>>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>>> +	    cls_flower->common.chain_index)
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	switch (cls_flower->command) {
>>> +	case TC_CLSFLOWER_REPLACE:
>>> +		return i40e_configure_clsflower(vsi, cls_flower);
>>> +	case TC_CLSFLOWER_DESTROY:
>>> +		return i40e_delete_clsflower(vsi, cls_flower);
>>> +	case TC_CLSFLOWER_STATS:
>>> +		return -EOPNOTSUPP;
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +}
>>> +
>>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>>> 			   void *type_data)
>>> {
>>> -	if (type != TC_SETUP_MQPRIO)
>>> +	switch (type) {
>>> +	case TC_SETUP_MQPRIO:
>>> +		return i40e_setup_tc(netdev, type_data);
>>> +	case TC_SETUP_CLSFLOWER:
>>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>>> +	default:
>>> 		return -EOPNOTSUPP;
>>> -
>>> -	return i40e_setup_tc(netdev, type_data);
>>> +	}
>>> }
>>>
>>> /**
>>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>>> 		kfree(cfilter);
>>> 	}
>>> 	pf->num_cloud_filters = 0;
>>> +
>>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +	}
>>> }
>>>
>>> /**
>>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>>  * i40e_get_capabilities - get info about the HW
>>>  * @pf: the PF struct
>>>  **/
>>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>> +				 enum i40e_admin_queue_opc list_type)
>>> {
>>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>>> 	u16 data_size;
>>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>
>>> 		/* this loads the data into the hw struct for us */
>>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>>> -					    &data_size,
>>> -					    i40e_aqc_opc_list_func_capabilities,
>>> -					    NULL);
>>> +						    &data_size, list_type,
>>> +						    NULL);
>>> 		/* data loaded, buffer no longer needed */
>>> 		kfree(cap_buf);
>>>
>>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>> 		}
>>> 	} while (err);
>>>
>>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>>> -		dev_info(&pf->pdev->dev,
>>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>> -			 pf->hw.func_caps.num_msix_vectors,
>>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>>> -			 pf->hw.func_caps.fd_filters_best_effort,
>>> -			 pf->hw.func_caps.num_tx_qp,
>>> -			 pf->hw.func_caps.num_vsis);
>>> -
>>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>> +				 pf->hw.func_caps.num_msix_vectors,
>>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>>> +				 pf->hw.func_caps.fd_filters_best_effort,
>>> +				 pf->hw.func_caps.num_tx_qp,
>>> +				 pf->hw.func_caps.num_vsis);
>>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>>> +				 pf->hw.dev_caps.switch_mode,
>>> +				 pf->hw.dev_caps.valid_functions);
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "SR-IOV=%d, num_vfs for all function=%u\n",
>>> +				 pf->hw.dev_caps.sr_iov_1_1,
>>> +				 pf->hw.dev_caps.num_vfs);
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>>> +				 pf->hw.dev_caps.num_vsis,
>>> +				 pf->hw.dev_caps.num_rx_qp,
>>> +				 pf->hw.dev_caps.num_tx_qp);
>>> +		}
>>> +	}
>>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>>> 		       + pf->hw.func_caps.num_vfs)
>>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>>> -		dev_info(&pf->pdev->dev,
>>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>> +		if (pf->hw.revision_id == 0 &&
>>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>>> +			dev_info(&pf->pdev->dev,
>>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>> +		}
>>> 	}
>>> -
>>> 	return 0;
>>> }
>>>
>>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>>> 		if (!vsi) {
>>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 			return;
>>> 		}
>>> 	}
>>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>>> }
>>>
>>> /**
>>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>>> + * @vsi: PF main vsi
>>> + * @seid: seid of main or channel VSIs
>>> + *
>>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>>> + * existed before reset
>>> + **/
>>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>>> +{
>>> +	struct i40e_cloud_filter *cfilter;
>>> +	struct i40e_pf *pf = vsi->back;
>>> +	struct hlist_node *node;
>>> +	i40e_status ret;
>>> +
>>> +	/* Add cloud filters back if they exist */
>>> +	if (hlist_empty(&pf->cloud_filter_list))
>>> +		return 0;
>>> +
>>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>>> +				  cloud_node) {
>>> +		if (cfilter->seid != seid)
>>> +			continue;
>>> +
>>> +		if (cfilter->dst_port)
>>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>>> +								true);
>>> +		else
>>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>>> +
>>> +		if (ret) {
>>> +			dev_dbg(&pf->pdev->dev,
>>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>>> +				i40e_stat_str(&pf->hw, ret),
>>> +				i40e_aq_str(&pf->hw,
>>> +					    pf->hw.aq.asq_last_status));
>>> +			return ret;
>>> +		}
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>>  * @vsi: PF main vsi
>>>  *
>>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>>> 						I40E_BW_CREDIT_DIVISOR,
>>> 				ch->seid);
>>> 		}
>>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>>> +		if (ret) {
>>> +			dev_dbg(&vsi->back->pdev->dev,
>>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>>> +				ch->seid);
>>> +			return ret;
>>> +		}
>>> 	}
>>> 	return 0;
>>> }
>>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>> 		i40e_verify_eeprom(pf);
>>>
>>> 	i40e_clear_pxe_mode(hw);
>>> -	ret = i40e_get_capabilities(pf);
>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>> 	if (ret)
>>> 		goto end_core_reset;
>>>
>>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>> 			goto end_unlock;
>>> 	}
>>>
>>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>>> +	if (ret)
>>> +		goto end_unlock;
>>> +
>>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>>> 	 * for this main VSI if they exist
>>> 	 */
>>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>>> 	    (pf->num_fdsb_msix == 0)) {
>>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	}
>>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>>> 	    (pf->num_vmdq_msix == 0)) {
>>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>>> 				       I40E_FLAG_FD_SB_ENABLED	|
>>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>>> 				       I40E_FLAG_VMDQ_ENABLED);
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>
>>> 			/* rework the queue expectations without MSIX */
>>> 			i40e_determine_queue_usage(pf);
>>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>> 		/* Enable filters and mark for reset */
>>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>>> 			need_reset = true;
>>> -		/* enable FD_SB only if there is MSI-X vector */
>>> -		if (pf->num_fdsb_msix > 0)
>>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>>> +		 * filters exist
>>> +		 */
>>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>> +		}
>>> 	} else {
>>> 		/* turn off filters, mark for reset and clear SW filter list */
>>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>> 		}
>>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> +
>>> 		/* reset fd counters */
>>> 		pf->fd_add_err = 0;
>>> 		pf->fd_atr_cnt = 0;
>>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>>> 	hw_features = hw_enc_features		|
>>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>>> +		      NETIF_F_HW_TC;
>>>
>>> 	netdev->hw_features |= hw_features;
>>>
>>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>> 	*/
>>>
>>> 	if ((pf->hw.pf_id == 0) &&
>>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>>> +		pf->last_sw_conf_flags = flags;
>>> +	}
>>>
>>> 	if (pf->hw.pf_id == 0) {
>>> 		u16 valid_flags;
>>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>> 					     pf->hw.aq.asq_last_status));
>>> 			/* not a fatal problem, just keep going */
>>> 		}
>>> +		pf->last_sw_conf_valid_flags = valid_flags;
>>> 	}
>>>
>>> 	/* first time setup */
>>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			       I40E_FLAG_DCB_ENABLED	|
>>> 			       I40E_FLAG_SRIOV_ENABLED	|
>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>>> 				  I40E_FLAG_FD_SB_ENABLED |
>>> 				  I40E_FLAG_FD_ATR_ENABLED |
>>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>>> 			       I40E_FLAG_DCB_ENABLED	|
>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 	} else {
>>> 		/* Not enough queues for all TCs */
>>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>> 			queues_left -= 1; /* save 1 queue for FD */
>>> 		} else {
>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>>> 		}
>>> 	}
>>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>>
>>> 	i40e_clear_pxe_mode(hw);
>>> -	err = i40e_get_capabilities(pf);
>>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>> 	if (err)
>>> 		goto err_adminq_setup;
>>>
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> index 92869f5..3bb6659 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>>> 		struct i40e_asq_cmd_details *cmd_details);
>>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>>> 				   struct i40e_asq_cmd_details *cmd_details);
>>> +i40e_status
>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count);
>>> +enum i40e_status_code
>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count);
>>> +enum i40e_status_code
>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>> +			  u8 filter_count);
>>> +i40e_status
>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>> +			     u8 filter_count);
>>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>>> 			       struct i40e_lldp_variables *lldp_cfg);
>>> /* i40e_common */
>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> index c019f46..af38881 100644
>>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>>> +#define I40E_SWITCH_MODE_MASK		0xF
>>>
>>> 	u32  management_mode;
>>> 	u32  mng_protocols_over_mctp;
>>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> index b8c78bf..4fe27f0 100644
>>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>> 		struct {
>>> 			u8 data[16];
>>> 		} v6;
>>> +		struct {
>>> +			__le16 data[8];
>>> +		} raw_v6;
>>> 	} ipaddr;
>>> 	__le16	flags;
>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>


* Re: [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
  2017-09-28 19:22         ` [Intel-wired-lan] " Nambiar, Amritha
@ 2017-09-29  6:20           ` Jiri Pirko
  -1 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-29  6:20 UTC (permalink / raw)
  To: Nambiar, Amritha
  Cc: intel-wired-lan, jeffrey.t.kirsher, alexander.h.duyck, netdev,
	mlxsw, alexander.duyck, Jamal Hadi Salim, Cong Wang

Thu, Sep 28, 2017 at 09:22:15PM CEST, amritha.nambiar@intel.com wrote:
>On 9/14/2017 1:00 AM, Nambiar, Amritha wrote:
>> On 9/13/2017 6:26 AM, Jiri Pirko wrote:
>>> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar@intel.com wrote:
>>>> This patch enables tc-flower based hardware offloads. tc flower
>>>> filter provided by the kernel is configured as driver specific
>>>> cloud filter. The patch implements functions and admin queue
>>>> commands needed to support cloud filters in the driver and
>>>> adds cloud filters to configure these tc-flower filters.
>>>>
>>>> The only action supported is to redirect packets to a traffic class
>>>> on the same device.
>>>
>>> So basically you are not doing redirect, you are just setting tclass for
>>> matched packets, right? Why you use mirred for this? I think that
>>> you might consider extending g_act for that:
>>>
>>> # tc filter add dev eth0 protocol ip ingress \
>>>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>>>   action tclass 0
>>>
>> Yes, this doesn't work like a typical egress redirect, but is aimed at
>> forwarding the matched packets to a different queue-group/traffic class
>> on the same device, so some sort-of ingress redirect in the hardware. I
>> possibly may not need the mirred-redirect as you say, I'll look into the
>> g_act way of doing this with a new gact tc action.
>> 
>
>I was looking at introducing a new gact tclass action to TC. In the HW
>offload path, this sets a traffic class value for certain matched
>packets so they will be processed in a queue belonging to the traffic class.
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw\
>  action tclass 2
>
>But, I'm having trouble defining what this action means in the kernel
>datapath. For ingress, this action could just take the default path and
>do nothing and only have meaning in the HW offloaded path. For egress,

Sounds ok.


>certain qdiscs like 'multiq' and 'prio' could use this 'tclass' value
>for band selection, while the 'mqprio' qdisc selects the traffic class
>based on the skb priority in netdev_pick_tx(), so what would this action
>mean for the 'mqprio' qdisc?

I don't see why this action would have any special meaning for specific
qdiscs. The qdiscs have already mechanisms for band mapping. I don't see
why to mix it up with tclass action.

Also, you can use tclass action on qdisc clsact egress to do band
mapping. That would be symmetrical with ingress.


>
>It looks like the 'prio' qdisc uses band selection based on the
>'classid', so I was thinking of using the 'classid' through the cls
>flower filter and offload it to HW for the traffic class index, this way
>we would have the same behavior in HW offload and SW fallback and there
>would be no need for a separate tc action.
>
>In HW:
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw classid 1:2\
>
>filter pref 2 flower chain 0
>filter pref 2 flower chain 0 handle 0x1 classid 1:2
>  eth_type ipv4
>  ip_proto udp
>  dst_ip 192.168.3.5
>  dst_port 25
>  skip_sw
>  in_hw
>
>This will be used to route packets to traffic class 2.
>
>In SW:
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_hw classid 1:2
>
>filter pref 2 flower chain 0
>filter pref 2 flower chain 0 handle 0x1 classid 1:2
>  eth_type ipv4
>  ip_proto udp
>  dst_ip 192.168.3.5
>  dst_port 25
>  skip_hw
>  not_in_hw
>
>>>
>>>>
>>>> # tc qdisc add dev eth0 ingress
>>>> # ethtool -K eth0 hw-tc-offload on
>>>>
>>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 0
>>>>
>>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>>  prio 2 flower dst_ip 192.168.3.5/32\
>>>>  ip_proto udp dst_port 25 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 1
>>>>
>>>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>>>  prio 3 flower dst_ip fe8::200:1\
>>>>  ip_proto udp dst_port 66 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 1
>>>>
>>>> Delete tc flower filter:
>>>> Example:
>>>>
>>>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>>>> # tc filter del dev eth0 parent ffff:
>>>>
>>>> Flow Director Sideband is disabled while configuring cloud filters
>>>> via tc-flower and until any cloud filter exists.
>>>>
>>>> Unsupported matches when cloud filters are added using enhanced
>>>> big buffer cloud filter mode of underlying switch include:
>>>> 1. source port and source IP
>>>> 2. Combined MAC address and IP fields.
>>>> 3. Not specifying L4 port
>>>>
>>>> These filter matches can however be used to redirect traffic to
>>>> the main VSI (tc 0) which does not require the enhanced big buffer
>>>> cloud filter support.
>>>>
>>>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>>>> __be32 array instead of u8 array. Used macro for IP version. Minor
>>>> formatting changes.
>>>> v2:
>>>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>>>> 2. Moved dev_info for add/deleting cloud filters in else condition
>>>> 3. Fixed some format specifier in dev_err logs
>>>> 4. Refactored i40e_get_capabilities to take an additional
>>>>   list_type parameter and use it to query device and function
>>>>   level capabilities.
>>>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>>>   to verify if redirect to a traffic class is supported.
>>>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>>>   function definitions.
>>>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>>>   updates, separate function to process tc cls flower offloads.
>>>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>>>   exclusive.
>>>>
>>>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>>>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>>>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>>> ---
>>>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>>>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>>>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>>>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>>>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>>>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>>>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>>>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>>>
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>>>> index 6018fb6..b110519 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>>>> @@ -55,6 +55,8 @@
>>>> #include <linux/net_tstamp.h>
>>>> #include <linux/ptp_clock_kernel.h>
>>>> #include <net/pkt_cls.h>
>>>> +#include <net/tc_act/tc_gact.h>
>>>> +#include <net/tc_act/tc_mirred.h>
>>>> #include "i40e_type.h"
>>>> #include "i40e_prototype.h"
>>>> #include "i40e_client.h"
>>>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>>>> 	u32 fd_id;
>>>> };
>>>>
>>>> +#define IPV4_VERSION 4
>>>> +#define IPV6_VERSION 6
>>>> +
>>>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>>>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>>>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>>>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>>>> +#define I40E_CLOUD_FIELD_IIP	0x10
>>>> +
>>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>>>> +						 I40E_CLOUD_FIELD_IVLAN)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>>>> +						 I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>>>> +						  I40E_CLOUD_FIELD_IMAC | \
>>>> +						  I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>>>> +						   I40E_CLOUD_FIELD_IVLAN | \
>>>> +						   I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>>>> +
>>>> struct i40e_cloud_filter {
>>>> 	struct hlist_node cloud_node;
>>>> 	unsigned long cookie;
>>>> +	/* cloud filter input set follows */
>>>> +	u8 dst_mac[ETH_ALEN];
>>>> +	u8 src_mac[ETH_ALEN];
>>>> +	__be16 vlan_id;
>>>> +	__be32 dst_ip;
>>>> +	__be32 src_ip;
>>>> +	__be32 dst_ipv6[4];
>>>> +	__be32 src_ipv6[4];
>>>> +	__be16 dst_port;
>>>> +	__be16 src_port;
>>>> +	u32 ip_version;
>>>> +	u8 ip_proto;	/* IPPROTO value */
>>>> +	/* L4 port type: src or destination port */
>>>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>>>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>>>> +	u8 port_type;
>>>> +	u32 tenant_id;
>>>> +	u8 flags;
>>>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>>>> +	u8 tunnel_type;
>>>> 	u16 seid;	/* filter control */
>>>> };
>>>>
>>>> @@ -491,6 +536,8 @@ struct i40e_pf {
>>>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>>>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>>>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>>>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>>>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>>>
>>>> 	struct i40e_client_instance *cinst;
>>>> 	bool stat_offsets_loaded;
>>>> @@ -573,6 +620,8 @@ struct i40e_pf {
>>>> 	u16 phy_led_val;
>>>>
>>>> 	u16 override_q_count;
>>>> +	u16 last_sw_conf_flags;
>>>> +	u16 last_sw_conf_valid_flags;
>>>> };
>>>>
>>>> /**
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> index 2e567c2..feb3d42 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>>> 		struct {
>>>> 			u8 data[16];
>>>> 		} v6;
>>>> +		struct {
>>>> +			__le16 data[8];
>>>> +		} raw_v6;
>>>> 	} ipaddr;
>>>> 	__le16	flags;
>>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> index 9567702..d9c9665 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>>>
>>>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>>>> 				   track_id, &offset, &info, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_add_cloud_filters
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to add cloud filters from
>>>> + * @filters: Buffer which contains the filters to be added
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Set the cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>>> + * of the function.
>>>> + *
>>>> + **/
>>>> +enum i40e_status_code
>>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	enum i40e_status_code status;
>>>> +	u16 buff_len;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_add_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_add_cloud_filters_bb
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to add cloud filters from
>>>> + * @filters: Buffer which contains the filters in big buffer to be added
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>>> + * function.
>>>> + *
>>>> + **/
>>>> +i40e_status
>>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	i40e_status status;
>>>> +	u16 buff_len;
>>>> +	int i;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_add_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>>> +
>>>> +	for (i = 0; i < filter_count; i++) {
>>>> +		u16 tnl_type;
>>>> +		u32 ti;
>>>> +
>>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>>> +
>>>> +		/* For Geneve, the VNI must be placed at an offset shifted by
>>>> +		 * one byte from the Tenant ID offset used for the rest of the
>>>> +		 * tunnels.
>>>> +		 */
>>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>>> +		}
>>>> +	}
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_rem_cloud_filters
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to remove cloud filters from
>>>> + * @filters: Buffer which contains the filters to be removed
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Remove the cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>>> + * of the function.
>>>> + *
>>>> + **/
>>>> +enum i40e_status_code
>>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	enum i40e_status_code status;
>>>> +	u16 buff_len;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_rem_cloud_filters_bb
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to remove cloud filters from
>>>> + * @filters: Buffer which contains the filters in big buffer to be removed
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>>> + * function.
>>>> + *
>>>> + **/
>>>> +i40e_status
>>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	i40e_status status;
>>>> +	u16 buff_len;
>>>> +	int i;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>>> +
>>>> +	for (i = 0; i < filter_count; i++) {
>>>> +		u16 tnl_type;
>>>> +		u32 ti;
>>>> +
>>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>>> +
>>>> +		/* For Geneve, the VNI is placed at an offset one byte
>>>> +		 * higher than the Tenant ID offset used by the rest of
>>>> +		 * the tunnel types.
>>>> +		 */
>>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>>> +		}
>>>> +	}
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> 	return status;
>>>> }
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> index afcf08a..96ee608 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>>>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>>>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>>>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>>> +				     struct i40e_cloud_filter *filter,
>>>> +				     bool add);
>>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>>> +					     struct i40e_cloud_filter *filter,
>>>> +					     bool add);
>>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>>> +				 enum i40e_admin_queue_opc list_type);
>>>> +
>>>>
>>>> /* i40e_pci_tbl - PCI Device ID Table
>>>>  *
>>>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>>>  **/
>>>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>>> {
>>>> +	enum i40e_admin_queue_err last_aq_status;
>>>> +	struct i40e_cloud_filter *cfilter;
>>>> 	struct i40e_channel *ch, *ch_tmp;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct hlist_node *node;
>>>> 	int ret, i;
>>>>
>>>> 	/* Reset rss size that was stored when reconfiguring rss for
>>>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>>> 				 "Failed to reset tx rate for ch->seid %u\n",
>>>> 				 ch->seid);
>>>>
>>>> +		/* delete cloud filters associated with this channel */
>>>> +		hlist_for_each_entry_safe(cfilter, node,
>>>> +					  &pf->cloud_filter_list, cloud_node) {
>>>> +			if (cfilter->seid != ch->seid)
>>>> +				continue;
>>>> +
>>>> +			hash_del(&cfilter->cloud_node);
>>>> +			if (cfilter->dst_port)
>>>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>>>> +									cfilter,
>>>> +									false);
>>>> +			else
>>>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>>>> +								false);
>>>> +			last_aq_status = pf->hw.aq.asq_last_status;
>>>> +			if (ret)
>>>> +				dev_info(&pf->pdev->dev,
>>>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>>>> +					 i40e_stat_str(&pf->hw, ret),
>>>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>>>> +			kfree(cfilter);
>>>> +		}
>>>> +
>>>> 		/* delete VSI from FW */
>>>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>>>> 					     NULL);
>>>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>>>> }
>>>>
>>>> /**
>>>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>>>> + * @vsi: ptr to VSI which has PF backing
>>>> + * @l4type: true for TCP and false for UDP
>>>> + * @port_type: true if port is destination and false if port is source
>>>> + *
>>>> + * Sets the switch mode if it needs to be changed, validating it
>>>> + * against the allowed modes.
>>>> + **/
>>>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>>>> +					     bool port_type)
>>>> +{
>>>> +	u8 mode;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct i40e_hw *hw = &pf->hw;
>>>> +	int ret;
>>>> +
>>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>>>> +	if (ret)
>>>> +		return -EINVAL;
>>>> +
>>>> +	if (hw->dev_caps.switch_mode) {
>>>> +		/* if switch mode is set, support mode2 (non-tunneled for
>>>> +		 * cloud filter) for now
>>>> +		 */
>>>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>>>> +							I40E_SWITCH_MODE_MASK;
>>>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>>>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>>>> +				return 0;
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>>>> +				hw->dev_caps.switch_mode);
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	/* port_type: true for destination port and false for source port
>>>> +	 * For now, supports only destination port type
>>>> +	 */
>>>> +	if (!port_type) {
>>>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	/* Set Bit 7 to be valid */
>>>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>>>> +
>>>> +	/* Set L4type to both TCP and UDP support */
>>>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>>>> +
>>>> +	/* Set cloud filter mode */
>>>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>>>> +
>>>> +	/* Prep mode field for set_switch_config */
>>>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>>>> +					pf->last_sw_conf_valid_flags,
>>>> +					mode, NULL);
>>>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>>>> +			i40e_stat_str(hw, ret),
>>>> +			i40e_aq_str(hw,
>>>> +				    hw->aq.asq_last_status));
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>>  * i40e_create_queue_channel - function to create channel
>>>>  * @vsi: VSI to be configured
>>>>  * @ch: ptr to channel (it contains channel specific params)
>>>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>>>> 	return ret;
>>>> }
>>>>
>>>> +/**
>>>> + * i40e_set_cld_element - sets cloud filter element data
>>>> + * @filter: cloud filter rule
>>>> + * @cld: ptr to cloud filter element data
>>>> + *
>>>> + * Helper function to copy filter data into a cloud filter element
>>>> + **/
>>>> +static inline void
>>>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>>>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>>>> +{
>>>> +	int i, j;
>>>> +	u32 ipa;
>>>> +
>>>> +	memset(cld, 0, sizeof(*cld));
>>>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>>>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>>>> +
>>>> +	if (filter->ip_version == IPV6_VERSION) {
>>>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>>>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>>>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>>>> +			ipa = cpu_to_le32(ipa);
>>>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>>>> +		}
>>>> +	} else {
>>>> +		ipa = be32_to_cpu(filter->dst_ip);
>>>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>>>> +	}
>>>> +
>>>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>>>> +
>>>> +	/* tenant_id is not supported by FW now, once the support is enabled
>>>> +	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>>>> +	 */
>>>> +	if (filter->tenant_id)
>>>> +		return;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>>>> + * @vsi: pointer to VSI
>>>> + * @filter: cloud filter rule
>>>> + * @add: if true, add, if false, delete
>>>> + *
>>>> + * Add or delete a cloud filter for a specific flow spec.
>>>> + * Returns 0 if the filter was successfully added or deleted.
>>>> + **/
>>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>>> +				     struct i40e_cloud_filter *filter, bool add)
>>>> +{
>>>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int ret;
>>>> +	static const u16 flag_table[128] = {
>>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>>>> +	};
>>>> +
>>>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>>>> +		return I40E_ERR_CONFIG;
>>>> +
>>>> +	/* copy element needed to add cloud filter from filter */
>>>> +	i40e_set_cld_element(filter, &cld_filter);
>>>> +
>>>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>>>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>>>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>>>> +
>>>> +	if (filter->ip_version == IPV6_VERSION)
>>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>>> +	else
>>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>>> +
>>>> +	if (add)
>>>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>>>> +						&cld_filter, 1);
>>>> +	else
>>>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>>>> +						&cld_filter, 1);
>>>> +	if (ret)
>>>> +		dev_dbg(&pf->pdev->dev,
>>>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>>>> +			add ? "add" : "delete", filter->dst_port, ret,
>>>> +			pf->hw.aq.asq_last_status);
>>>> +	else
>>>> +		dev_info(&pf->pdev->dev,
>>>> +			 "%s cloud filter for VSI: %d\n",
>>>> +			 add ? "Added" : "Deleted", filter->seid);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>>>> + * @vsi: pointer to VSI
>>>> + * @filter: cloud filter rule
>>>> + * @add: if true, add, if false, delete
>>>> + *
>>>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>>>> + * Returns 0 if the filter was successfully added or deleted.
>>>> + **/
>>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>>> +					     struct i40e_cloud_filter *filter,
>>>> +					     bool add)
>>>> +{
>>>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int ret;
>>>> +
>>>> +	/* Both (Outer/Inner) valid mac_addr are not supported */
>>>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>>>> +	    is_valid_ether_addr(filter->src_mac))
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* Make sure a port is specified, otherwise bail out; channel
>>>> +	 * specific cloud filters need a non-zero L4 port
>>>> +	 */
>>>> +	if (!filter->dst_port)
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>>>> +	if (filter->src_port || filter->src_ip ||
>>>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* copy element needed to add cloud filter from filter */
>>>> +	i40e_set_cld_element(filter, &cld_filter.element);
>>>> +
>>>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>>>> +	    is_valid_ether_addr(filter->src_mac) ||
>>>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>>>> +	    is_multicast_ether_addr(filter->src_mac)) {
>>>> +		/* MAC + IP : unsupported mode */
>>>> +		if (filter->dst_ip)
>>>> +			return -EINVAL;
>>>> +
>>>> +		/* since we validated that L4 port must be valid before
>>>> +		 * we get here, start with respective "flags" value
>>>> +		 * and update if vlan is present or not
>>>> +		 */
>>>> +		cld_filter.element.flags =
>>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>>>> +
>>>> +		if (filter->vlan_id) {
>>>> +			cld_filter.element.flags =
>>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>>>> +		}
>>>> +
>>>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>>>> +		cld_filter.element.flags =
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>>>> +		if (filter->ip_version == IPV6_VERSION)
>>>> +			cld_filter.element.flags |=
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>>> +		else
>>>> +			cld_filter.element.flags |=
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>>> +	} else {
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"either mac or ip has to be valid for cloud filter\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>>>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>>>> +						be16_to_cpu(filter->dst_port);
>>>> +
>>>> +	if (add) {
>>>> +		bool proto_type, port_type;
>>>> +
>>>> +		proto_type = (filter->ip_proto == IPPROTO_TCP);
>>>> +		port_type = !!(filter->port_type & I40E_CLOUD_FILTER_PORT_DEST);
>>>> +
>>>> +		/* For now, src port based cloud filter for channel is not
>>>> +		 * supported
>>>> +		 */
>>>> +		if (!port_type) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"unsupported port type (src port)\n");
>>>> +			return -EOPNOTSUPP;
>>>> +		}
>>>> +
>>>> +		/* Validate current device switch mode, change if necessary */
>>>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>>>> +							port_type);
>>>> +		if (ret) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"failed to set switch mode, ret %d\n",
>>>> +				ret);
>>>> +			return ret;
>>>> +		}
>>>> +
>>>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>>>> +						   &cld_filter, 1);
>>>> +	} else {
>>>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>>>> +						   &cld_filter, 1);
>>>> +	}
>>>> +
>>>> +	if (ret)
>>>> +		dev_dbg(&pf->pdev->dev,
>>>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>>>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>>>> +	else
>>>> +		dev_info(&pf->pdev->dev,
>>>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>>>> +			 add ? "add" : "delete", filter->seid,
>>>> +			 ntohs(filter->dst_port));
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>>>> + * @vsi: Pointer to VSI
>>>> + * @f: Pointer to struct tc_cls_flower_offload
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>>>> +				 struct tc_cls_flower_offload *f,
>>>> +				 struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	u16 addr_type = 0;
>>>> +	u8 field_flags = 0;
>>>> +
>>>> +	if (f->dissector->used_keys &
>>>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>>>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>>>> +			f->dissector->used_keys);
>>>> +		return -EOPNOTSUPP;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>>>> +		struct flow_dissector_key_keyid *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>>> +						  f->key);
>>>> +
>>>> +		struct flow_dissector_key_keyid *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->keyid != 0)
>>>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>>>> +
>>>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>>>> +		struct flow_dissector_key_basic *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_BASIC,
>>>> +						  f->key);
>>>> +
>>>> +		filter->ip_proto = key->ip_proto;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>>>> +		struct flow_dissector_key_eth_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>>> +						  f->key);
>>>> +
>>>> +		struct flow_dissector_key_eth_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		/* use is_broadcast and is_zero to check for all-ones or all-zero masks */
>>>> +		if (!is_zero_ether_addr(mask->dst)) {
>>>> +			if (is_broadcast_ether_addr(mask->dst)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>>>> +					mask->dst);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (!is_zero_ether_addr(mask->src)) {
>>>> +			if (is_broadcast_ether_addr(mask->src)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>>>> +					mask->src);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +		ether_addr_copy(filter->dst_mac, key->dst);
>>>> +		ether_addr_copy(filter->src_mac, key->src);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>>>> +		struct flow_dissector_key_vlan *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_vlan *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->vlan_id) {
>>>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>>>> +
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>>>> +					mask->vlan_id);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>>>> +		struct flow_dissector_key_control *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>>>> +						  f->key);
>>>> +
>>>> +		addr_type = key->addr_type;
>>>> +	}
>>>> +
>>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>>>> +		struct flow_dissector_key_ipv4_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ipv4_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->dst) {
>>>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>>>> +					be32_to_cpu(mask->dst));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (mask->src) {
>>>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>>>> +					be32_to_cpu(mask->src));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>>>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>>>> +			return I40E_ERR_CONFIG;
>>>> +		}
>>>> +		filter->dst_ip = key->dst;
>>>> +		filter->src_ip = key->src;
>>>> +		filter->ip_version = IPV4_VERSION;
>>>> +	}
>>>> +
>>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>>>> +		struct flow_dissector_key_ipv6_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ipv6_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		/* src and dest IPV6 address should not be LOOPBACK
>>>> +		 * (0:0:0:0:0:0:0:1), which can be represented as ::1
>>>> +		 */
>>>> +		if (ipv6_addr_loopback(&key->dst) ||
>>>> +		    ipv6_addr_loopback(&key->src)) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Bad ipv6, addr is LOOPBACK\n");
>>>> +			return I40E_ERR_CONFIG;
>>>> +		}
>>>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>>>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +
>>>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>>>> +		       sizeof(filter->src_ipv6));
>>>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>>>> +		       sizeof(filter->dst_ipv6));
>>>> +
>>>> +		/* mark it as IPv6 filter, to be used later */
>>>> +		filter->ip_version = IPV6_VERSION;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>>>> +		struct flow_dissector_key_ports *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ports *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->src) {
>>>> +			if (mask->src == cpu_to_be16(0xffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>>>> +					be16_to_cpu(mask->src));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (mask->dst) {
>>>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>>>> +					be16_to_cpu(mask->dst));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		filter->dst_port = key->dst;
>>>> +		filter->src_port = key->src;
>>>> +
>>>> +		/* For now, only the destination port is supported */
>>>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>>>> +
>>>> +		switch (filter->ip_proto) {
>>>> +		case IPPROTO_TCP:
>>>> +		case IPPROTO_UDP:
>>>> +			break;
>>>> +		default:
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Only UDP and TCP transport are supported\n");
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	}
>>>> +	filter->flags = field_flags;
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_handle_redirect_action: Forward to a traffic class on the device
>>>> + * @vsi: Pointer to VSI
>>>> + * @ifindex: ifindex of the device to forward to
>>>> + * @tc: traffic class index on the device
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>>>> +				       struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	struct i40e_channel *ch, *ch_tmp;
>>>> +
>>>> +	/* redirect to a traffic class on the same device */
>>>> +	if (vsi->netdev->ifindex == ifindex) {
>>>> +		if (tc == 0) {
>>>> +			filter->seid = vsi->seid;
>>>> +			return 0;
>>>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>>>> +			if (!filter->dst_port) {
>>>> +				dev_err(&vsi->back->pdev->dev,
>>>> +					"Specify a destination port to redirect to a non-default traffic class\n");
>>>> +				return -EINVAL;
>>>> +			}
>>>> +			if (list_empty(&vsi->ch_list))
>>>> +				return -EINVAL;
>>>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>>>> +						 list) {
>>>> +				if (ch->seid == vsi->tc_seid_map[tc])
>>>> +					filter->seid = ch->seid;
>>>> +			}
>>>> +			return 0;
>>>> +		}
>>>> +	}
>>>> +	return -EINVAL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_parse_tc_actions - Parse tc actions
>>>> + * @vsi: Pointer to VSI
>>>> + * @exts: Pointer to struct tcf_exts holding the tc actions
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>>>> +				 struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	const struct tc_action *a;
>>>> +	LIST_HEAD(actions);
>>>> +	int err;
>>>> +
>>>> +	if (!tcf_exts_has_actions(exts))
>>>> +		return -EINVAL;
>>>> +
>>>> +	tcf_exts_to_list(exts, &actions);
>>>> +	list_for_each_entry(a, &actions, list) {
>>>> +		/* Drop action */
>>>> +		if (is_tcf_gact_shot(a)) {
>>>> +			dev_err(&vsi->back->pdev->dev,
>>>> +				"Cloud filters do not support the drop action.\n");
>>>> +			return -EOPNOTSUPP;
>>>> +		}
>>>> +
>>>> +		/* Redirect to a traffic class on the same device */
>>>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>>>> +			int ifindex = tcf_mirred_ifindex(a);
>>>> +			u8 tc = tcf_mirred_tc(a);
>>>> +
>>>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>>>> +							  filter);
>>>> +			if (err == 0)
>>>> +				return err;
>>>> +		}
>>>> +	}
>>>> +	return -EINVAL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_configure_clsflower - Configure tc flower filters
>>>> + * @vsi: Pointer to VSI
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + *
>>>> + **/
>>>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>>>> +				    struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int err = 0;
>>>> +
>>>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>>>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>>>> +		return -EBUSY;
>>>> +
>>>> +	if (pf->fdir_pf_active_filters ||
>>>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>>>> +		dev_err(&vsi->back->pdev->dev,
>>>> +			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>>>> +		dev_err(&vsi->back->pdev->dev,
>>>> +			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>>>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +	}
>>>> +
>>>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>>>> +	if (!filter)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	filter->cookie = cls_flower->cookie;
>>>> +
>>>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>>>> +	if (err < 0)
>>>> +		goto err;
>>>> +
>>>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>>>> +	if (err < 0)
>>>> +		goto err;
>>>> +
>>>> +	/* Add cloud filter */
>>>> +	if (filter->dst_port)
>>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>>>> +	else
>>>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>>>> +
>>>> +	if (err) {
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"Failed to add cloud filter, err %s\n",
>>>> +			i40e_stat_str(&pf->hw, err));
>>>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>>> +		goto err;
>>>> +	}
>>>> +
>>>> +	/* add filter to the ordered list */
>>>> +	INIT_HLIST_NODE(&filter->cloud_node);
>>>> +
>>>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>>>> +
>>>> +	pf->num_cloud_filters++;
>>>> +
>>>> +	return err;
>>>> +err:
>>>> +	kfree(filter);
>>>> +	return err;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>>>> + * @vsi: Pointer to VSI
>>>> + * @cookie: filter specific cookie
>>>> + *
>>>> + **/
>>>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>>>> +							unsigned long *cookie)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct hlist_node *node2;
>>>> +
>>>> +	hlist_for_each_entry_safe(filter, node2,
>>>> +				  &vsi->back->cloud_filter_list, cloud_node)
>>>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>>>> +			return filter;
>>>> +	return NULL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_delete_clsflower - Remove tc flower filters
>>>> + * @vsi: Pointer to VSI
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + *
>>>> + **/
>>>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>>>> +				 struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int err = 0;
>>>> +
>>>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>>>> +
>>>> +	if (!filter)
>>>> +		return -EINVAL;
>>>> +
>>>> +	hash_del(&filter->cloud_node);
>>>> +
>>>> +	if (filter->dst_port)
>>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>>>> +	else
>>>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>>>> +	if (err) {
>>>> +		kfree(filter);
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"Failed to delete cloud filter, err %s\n",
>>>> +			i40e_stat_str(&pf->hw, err));
>>>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>>> +	}
>>>> +
>>>> +	kfree(filter);
>>>> +	pf->num_cloud_filters--;
>>>> +
>>>> +	if (!pf->num_cloud_filters)
>>>> +		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>>> +		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>>> +			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +		}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>>>> + * @netdev: net device to configure
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + **/
>>>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>>>> +				    struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>>>> +	struct i40e_vsi *vsi = np->vsi;
>>>> +
>>>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>>>> +	    cls_flower->common.chain_index)
>>>> +		return -EOPNOTSUPP;
>>>> +
>>>> +	switch (cls_flower->command) {
>>>> +	case TC_CLSFLOWER_REPLACE:
>>>> +		return i40e_configure_clsflower(vsi, cls_flower);
>>>> +	case TC_CLSFLOWER_DESTROY:
>>>> +		return i40e_delete_clsflower(vsi, cls_flower);
>>>> +	case TC_CLSFLOWER_STATS:
>>>> +		return -EOPNOTSUPP;
>>>> +	default:
>>>> +		return -EINVAL;
>>>> +	}
>>>> +}
>>>> +
>>>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>>>> 			   void *type_data)
>>>> {
>>>> -	if (type != TC_SETUP_MQPRIO)
>>>> +	switch (type) {
>>>> +	case TC_SETUP_MQPRIO:
>>>> +		return i40e_setup_tc(netdev, type_data);
>>>> +	case TC_SETUP_CLSFLOWER:
>>>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>>>> +	default:
>>>> 		return -EOPNOTSUPP;
>>>> -
>>>> -	return i40e_setup_tc(netdev, type_data);
>>>> +	}
>>>> }
>>>>
>>>> /**
>>>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>>>> 		kfree(cfilter);
>>>> 	}
>>>> 	pf->num_cloud_filters = 0;
>>>> +
>>>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +	}
>>>> }
>>>>
>>>> /**
>>>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>>>  * i40e_get_capabilities - get info about the HW
>>>>  * @pf: the PF struct
>>>>  **/
>>>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>>> +				 enum i40e_admin_queue_opc list_type)
>>>> {
>>>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>>>> 	u16 data_size;
>>>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>>
>>>> 		/* this loads the data into the hw struct for us */
>>>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>>>> -					    &data_size,
>>>> -					    i40e_aqc_opc_list_func_capabilities,
>>>> -					    NULL);
>>>> +						    &data_size, list_type,
>>>> +						    NULL);
>>>> 		/* data loaded, buffer no longer needed */
>>>> 		kfree(cap_buf);
>>>>
>>>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>> 		}
>>>> 	} while (err);
>>>>
>>>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>>>> -		dev_info(&pf->pdev->dev,
>>>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>>> -			 pf->hw.func_caps.num_msix_vectors,
>>>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>>>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>>>> -			 pf->hw.func_caps.fd_filters_best_effort,
>>>> -			 pf->hw.func_caps.num_tx_qp,
>>>> -			 pf->hw.func_caps.num_vsis);
>>>> -
>>>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>>>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>>> +				 pf->hw.func_caps.num_msix_vectors,
>>>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>>>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>>>> +				 pf->hw.func_caps.fd_filters_best_effort,
>>>> +				 pf->hw.func_caps.num_tx_qp,
>>>> +				 pf->hw.func_caps.num_vsis);
>>>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>>>> +				 pf->hw.dev_caps.switch_mode,
>>>> +				 pf->hw.dev_caps.valid_functions);
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "SR-IOV=%d, num_vfs for all function=%u\n",
>>>> +				 pf->hw.dev_caps.sr_iov_1_1,
>>>> +				 pf->hw.dev_caps.num_vfs);
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>>>> +				 pf->hw.dev_caps.num_vsis,
>>>> +				 pf->hw.dev_caps.num_rx_qp,
>>>> +				 pf->hw.dev_caps.num_tx_qp);
>>>> +		}
>>>> +	}
>>>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>>>> 		       + pf->hw.func_caps.num_vfs)
>>>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>>>> -		dev_info(&pf->pdev->dev,
>>>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>>>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>>> +		if (pf->hw.revision_id == 0 &&
>>>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>>>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>>> +		}
>>>> 	}
>>>> -
>>>> 	return 0;
>>>> }
>>>>
>>>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>>>> 		if (!vsi) {
>>>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 			return;
>>>> 		}
>>>> 	}
>>>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>>>> }
>>>>
>>>> /**
>>>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>>>> + * @vsi: PF main vsi
>>>> + * @seid: seid of main or channel VSIs
>>>> + *
>>>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>>>> + * existed before reset
>>>> + **/
>>>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>>>> +{
>>>> +	struct i40e_cloud_filter *cfilter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct hlist_node *node;
>>>> +	i40e_status ret;
>>>> +
>>>> +	/* Add cloud filters back if they exist */
>>>> +	if (hlist_empty(&pf->cloud_filter_list))
>>>> +		return 0;
>>>> +
>>>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>>>> +				  cloud_node) {
>>>> +		if (cfilter->seid != seid)
>>>> +			continue;
>>>> +
>>>> +		if (cfilter->dst_port)
>>>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>>>> +								true);
>>>> +		else
>>>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>>>> +
>>>> +		if (ret) {
>>>> +			dev_dbg(&pf->pdev->dev,
>>>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>>>> +				i40e_stat_str(&pf->hw, ret),
>>>> +				i40e_aq_str(&pf->hw,
>>>> +					    pf->hw.aq.asq_last_status));
>>>> +			return ret;
>>>> +		}
>>>> +	}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>>>  * @vsi: PF main vsi
>>>>  *
>>>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>>>> 						I40E_BW_CREDIT_DIVISOR,
>>>> 				ch->seid);
>>>> 		}
>>>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>>>> +		if (ret) {
>>>> +			dev_dbg(&vsi->back->pdev->dev,
>>>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>>>> +				ch->seid);
>>>> +			return ret;
>>>> +		}
>>>> 	}
>>>> 	return 0;
>>>> }
>>>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>>> 		i40e_verify_eeprom(pf);
>>>>
>>>> 	i40e_clear_pxe_mode(hw);
>>>> -	ret = i40e_get_capabilities(pf);
>>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>>> 	if (ret)
>>>> 		goto end_core_reset;
>>>>
>>>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>>> 			goto end_unlock;
>>>> 	}
>>>>
>>>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>>>> +	if (ret)
>>>> +		goto end_unlock;
>>>> +
>>>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>>>> 	 * for this main VSI if they exist
>>>> 	 */
>>>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>>>> 	    (pf->num_fdsb_msix == 0)) {
>>>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>>>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	}
>>>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>>>> 	    (pf->num_vmdq_msix == 0)) {
>>>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>>>> 				       I40E_FLAG_FD_SB_ENABLED	|
>>>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>>>> 				       I40E_FLAG_VMDQ_ENABLED);
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>>
>>>> 			/* rework the queue expectations without MSIX */
>>>> 			i40e_determine_queue_usage(pf);
>>>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>>> 		/* Enable filters and mark for reset */
>>>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>>>> 			need_reset = true;
>>>> -		/* enable FD_SB only if there is MSI-X vector */
>>>> -		if (pf->num_fdsb_msix > 0)
>>>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>>>> +		 * filters exist
>>>> +		 */
>>>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>>>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +		}
>>>> 	} else {
>>>> 		/* turn off filters, mark for reset and clear SW filter list */
>>>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>>>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>>> 		}
>>>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>>>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> +
>>>> 		/* reset fd counters */
>>>> 		pf->fd_add_err = 0;
>>>> 		pf->fd_atr_cnt = 0;
>>>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>>>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>>>> 	hw_features = hw_enc_features		|
>>>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>>>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>>>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>>>> +		      NETIF_F_HW_TC;
>>>>
>>>> 	netdev->hw_features |= hw_features;
>>>>
>>>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>>> 	*/
>>>>
>>>> 	if ((pf->hw.pf_id == 0) &&
>>>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>>>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>>>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>>>> +		pf->last_sw_conf_flags = flags;
>>>> +	}
>>>>
>>>> 	if (pf->hw.pf_id == 0) {
>>>> 		u16 valid_flags;
>>>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>>> 					     pf->hw.aq.asq_last_status));
>>>> 			/* not a fatal problem, just keep going */
>>>> 		}
>>>> +		pf->last_sw_conf_valid_flags = valid_flags;
>>>> 	}
>>>>
>>>> 	/* first time setup */
>>>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			       I40E_FLAG_DCB_ENABLED	|
>>>> 			       I40E_FLAG_SRIOV_ENABLED	|
>>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>>>> 				  I40E_FLAG_FD_SB_ENABLED |
>>>> 				  I40E_FLAG_FD_ATR_ENABLED |
>>>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>>>> 			       I40E_FLAG_DCB_ENABLED	|
>>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	} else {
>>>> 		/* Not enough queues for all TCs */
>>>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>>>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			queues_left -= 1; /* save 1 queue for FD */
>>>> 		} else {
>>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>>>> 		}
>>>> 	}
>>>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>>>
>>>> 	i40e_clear_pxe_mode(hw);
>>>> -	err = i40e_get_capabilities(pf);
>>>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>>> 	if (err)
>>>> 		goto err_adminq_setup;
>>>>
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> index 92869f5..3bb6659 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>>>> 		struct i40e_asq_cmd_details *cmd_details);
>>>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>>>> 				   struct i40e_asq_cmd_details *cmd_details);
>>>> +i40e_status
>>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count);
>>>> +enum i40e_status_code
>>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count);
>>>> +enum i40e_status_code
>>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count);
>>>> +i40e_status
>>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count);
>>>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>>>> 			       struct i40e_lldp_variables *lldp_cfg);
>>>> /* i40e_common */
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> index c019f46..af38881 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>>>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>>>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>>>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>>>> +#define I40E_SWITCH_MODE_MASK		0xF
>>>>
>>>> 	u32  management_mode;
>>>> 	u32  mng_protocols_over_mctp;
>>>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> index b8c78bf..4fe27f0 100644
>>>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>>> 		struct {
>>>> 			u8 data[16];
>>>> 		} v6;
>>>> +		struct {
>>>> +			__le16 data[8];
>>>> +		} raw_v6;
>>>> 	} ipaddr;
>>>> 	__le16	flags;
>>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>>


* [Intel-wired-lan] [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower
@ 2017-09-29  6:20           ` Jiri Pirko
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Pirko @ 2017-09-29  6:20 UTC (permalink / raw)
  To: intel-wired-lan

Thu, Sep 28, 2017 at 09:22:15PM CEST, amritha.nambiar at intel.com wrote:
>On 9/14/2017 1:00 AM, Nambiar, Amritha wrote:
>> On 9/13/2017 6:26 AM, Jiri Pirko wrote:
>>> Wed, Sep 13, 2017 at 11:59:50AM CEST, amritha.nambiar at intel.com wrote:
>>>> This patch enables tc-flower based hardware offloads. The tc flower
>>>> filter provided by the kernel is configured as a driver-specific
>>>> cloud filter. The patch implements the functions and admin queue
>>>> commands needed to support cloud filters in the driver and adds
>>>> cloud filters to configure these tc-flower filters.
>>>>
>>>> The only action supported is to redirect packets to a traffic class
>>>> on the same device.
>>>
>>> So basically you are not doing redirect, you are just setting tclass for
>>> matched packets, right? Why you use mirred for this? I think that
>>> you might consider extending g_act for that:
>>>
>>> # tc filter add dev eth0 protocol ip ingress \
>>>   prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw \
>>>   action tclass 0
>>>
>> Yes, this doesn't work like a typical egress redirect, but is aimed at
>> forwarding the matched packets to a different queue-group/traffic class
>> on the same device, so it is a sort of ingress redirect in the hardware.
>> As you say, I possibly may not need the mirred redirect; I'll look into
>> the g_act way of doing this with a new gact tc action.
>> 
>
>I was looking at introducing a new gact tclass action to TC. In the HW
>offload path, this sets a traffic class value for certain matched
>packets so they will be processed in a queue belonging to the traffic class.
>
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw\
>  action tclass 2
>
>But I'm having trouble defining what this action means in the kernel
>datapath. For ingress, this action could just take the default path and
>do nothing and only have meaning in the HW offloaded path. For egress,

Sounds ok.


>certain qdiscs like 'multiq' and 'prio' could use this 'tclass' value
>for band selection, while the 'mqprio' qdisc selects the traffic class
>based on the skb priority in netdev_pick_tx(), so what would this action
>mean for the 'mqprio' qdisc?

I don't see why this action would have any special meaning for specific
qdiscs. The qdiscs already have mechanisms for band mapping. I don't see
why to mix that up with the tclass action.

Also, you can use tclass action on qdisc clsact egress to do band
mapping. That would be symmetrical with ingress.


>
>It looks like the 'prio' qdisc selects bands based on the 'classid',
>so I was thinking of passing the 'classid' through the cls flower
>filter and offloading it to HW as the traffic class index. This way we
>would have the same behavior in HW offload and SW fallback, and there
>would be no need for a separate tc action.
>
>In HW:
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_sw classid 1:2
>
>filter pref 2 flower chain 0
>filter pref 2 flower chain 0 handle 0x1 classid 1:2
>  eth_type ipv4
>  ip_proto udp
>  dst_ip 192.168.3.5
>  dst_port 25
>  skip_sw
>  in_hw
>
>This will be used to route packets to traffic class 2.
>
>In SW:
># tc filter add dev eth0 protocol ip parent ffff:\
>  prio 2 flower dst_ip 192.168.3.5/32\
>  ip_proto udp dst_port 25 skip_hw classid 1:2
>
>filter pref 2 flower chain 0
>filter pref 2 flower chain 0 handle 0x1 classid 1:2
>  eth_type ipv4
>  ip_proto udp
>  dst_ip 192.168.3.5
>  dst_port 25
>  skip_hw
>  not_in_hw
>
>>>
>>>>
>>>> # tc qdisc add dev eth0 ingress
>>>> # ethtool -K eth0 hw-tc-offload on
>>>>
>>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>>  prio 1 flower dst_mac 3c:fd:fe:a0:d6:70 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 0
>>>>
>>>> # tc filter add dev eth0 protocol ip parent ffff:\
>>>>  prio 2 flower dst_ip 192.168.3.5/32\
>>>>  ip_proto udp dst_port 25 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 1
>>>>
>>>> # tc filter add dev eth0 protocol ipv6 parent ffff:\
>>>>  prio 3 flower dst_ip fe8::200:1\
>>>>  ip_proto udp dst_port 66 skip_sw\
>>>>  action mirred ingress redirect dev eth0 tclass 1
>>>>
>>>> Delete tc flower filter:
>>>> Example:
>>>>
>>>> # tc filter del dev eth0 parent ffff: prio 3 handle 0x1 flower
>>>> # tc filter del dev eth0 parent ffff:
>>>>
>>>> Flow Director Sideband is disabled while cloud filters are being
>>>> configured via tc-flower and remains disabled as long as any cloud
>>>> filter exists.
>>>>
>>>> Matches that are unsupported when cloud filters are added using the
>>>> enhanced big buffer cloud filter mode of the underlying switch:
>>>> 1. Source port and source IP
>>>> 2. Combined MAC address and IP fields
>>>> 3. Not specifying an L4 port
>>>>
>>>> These filter matches can, however, be used to redirect traffic to
>>>> the main VSI (tc 0), which does not require the enhanced big buffer
>>>> cloud filter support.
>>>>
>>>> v3: Cleaned up some lengthy function names. Changed ipv6 address to
>>>> __be32 array instead of u8 array. Used macro for IP version. Minor
>>>> formatting changes.
>>>> v2:
>>>> 1. Moved I40E_SWITCH_MODE_MASK definition to i40e_type.h
>>>> 2. Moved dev_info for add/deleting cloud filters in else condition
>>>> 3. Fixed some format specifier in dev_err logs
>>>> 4. Refactored i40e_get_capabilities to take an additional
>>>>   list_type parameter and use it to query device and function
>>>>   level capabilities.
>>>> 5. Fixed parsing tc redirect action to check for the is_tcf_mirred_tc()
>>>>   to verify if redirect to a traffic class is supported.
>>>> 6. Added comments for Geneve fix in cloud filter big buffer AQ
>>>>   function definitions.
>>>> 7. Cleaned up setup_tc interface to rebase and work with Jiri's
>>>>   updates, separate function to process tc cls flower offloads.
>>>> 8. Changes to make Flow Director Sideband and Cloud filters mutually
>>>>   exclusive.
>>>>
>>>> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
>>>> Signed-off-by: Kiran Patil <kiran.patil@intel.com>
>>>> Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
>>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>>> ---
>>>> drivers/net/ethernet/intel/i40e/i40e.h             |   49 +
>>>> drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h  |    3 
>>>> drivers/net/ethernet/intel/i40e/i40e_common.c      |  189 ++++
>>>> drivers/net/ethernet/intel/i40e/i40e_main.c        |  971 +++++++++++++++++++-
>>>> drivers/net/ethernet/intel/i40e/i40e_prototype.h   |   16 
>>>> drivers/net/ethernet/intel/i40e/i40e_type.h        |    1 
>>>> .../net/ethernet/intel/i40evf/i40e_adminq_cmd.h    |    3 
>>>> 7 files changed, 1202 insertions(+), 30 deletions(-)
>>>>
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
>>>> index 6018fb6..b110519 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e.h
>>>> @@ -55,6 +55,8 @@
>>>> #include <linux/net_tstamp.h>
>>>> #include <linux/ptp_clock_kernel.h>
>>>> #include <net/pkt_cls.h>
>>>> +#include <net/tc_act/tc_gact.h>
>>>> +#include <net/tc_act/tc_mirred.h>
>>>> #include "i40e_type.h"
>>>> #include "i40e_prototype.h"
>>>> #include "i40e_client.h"
>>>> @@ -252,9 +254,52 @@ struct i40e_fdir_filter {
>>>> 	u32 fd_id;
>>>> };
>>>>
>>>> +#define IPV4_VERSION 4
>>>> +#define IPV6_VERSION 6
>>>> +
>>>> +#define I40E_CLOUD_FIELD_OMAC	0x01
>>>> +#define I40E_CLOUD_FIELD_IMAC	0x02
>>>> +#define I40E_CLOUD_FIELD_IVLAN	0x04
>>>> +#define I40E_CLOUD_FIELD_TEN_ID	0x08
>>>> +#define I40E_CLOUD_FIELD_IIP	0x10
>>>> +
>>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC	I40E_CLOUD_FIELD_OMAC
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC	I40E_CLOUD_FIELD_IMAC
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN	(I40E_CLOUD_FIELD_IMAC | \
>>>> +						 I40E_CLOUD_FIELD_IVLAN)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID	(I40E_CLOUD_FIELD_IMAC | \
>>>> +						 I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC (I40E_CLOUD_FIELD_OMAC | \
>>>> +						  I40E_CLOUD_FIELD_IMAC | \
>>>> +						  I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID (I40E_CLOUD_FIELD_IMAC | \
>>>> +						   I40E_CLOUD_FIELD_IVLAN | \
>>>> +						   I40E_CLOUD_FIELD_TEN_ID)
>>>> +#define I40E_CLOUD_FILTER_FLAGS_IIP	I40E_CLOUD_FIELD_IIP
>>>> +
>>>> struct i40e_cloud_filter {
>>>> 	struct hlist_node cloud_node;
>>>> 	unsigned long cookie;
>>>> +	/* cloud filter input set follows */
>>>> +	u8 dst_mac[ETH_ALEN];
>>>> +	u8 src_mac[ETH_ALEN];
>>>> +	__be16 vlan_id;
>>>> +	__be32 dst_ip;
>>>> +	__be32 src_ip;
>>>> +	__be32 dst_ipv6[4];
>>>> +	__be32 src_ipv6[4];
>>>> +	__be16 dst_port;
>>>> +	__be16 src_port;
>>>> +	u32 ip_version;
>>>> +	u8 ip_proto;	/* IPPROTO value */
>>>> +	/* L4 port type: src or destination port */
>>>> +#define I40E_CLOUD_FILTER_PORT_SRC	0x01
>>>> +#define I40E_CLOUD_FILTER_PORT_DEST	0x02
>>>> +	u8 port_type;
>>>> +	u32 tenant_id;
>>>> +	u8 flags;
>>>> +#define I40E_CLOUD_TNL_TYPE_NONE	0xff
>>>> +	u8 tunnel_type;
>>>> 	u16 seid;	/* filter control */
>>>> };
>>>>
>>>> @@ -491,6 +536,8 @@ struct i40e_pf {
>>>> #define I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED	BIT(27)
>>>> #define I40E_FLAG_SOURCE_PRUNING_DISABLED	BIT(28)
>>>> #define I40E_FLAG_TC_MQPRIO			BIT(29)
>>>> +#define I40E_FLAG_FD_SB_INACTIVE		BIT(30)
>>>> +#define I40E_FLAG_FD_SB_TO_CLOUD_FILTER		BIT(31)
>>>>
>>>> 	struct i40e_client_instance *cinst;
>>>> 	bool stat_offsets_loaded;
>>>> @@ -573,6 +620,8 @@ struct i40e_pf {
>>>> 	u16 phy_led_val;
>>>>
>>>> 	u16 override_q_count;
>>>> +	u16 last_sw_conf_flags;
>>>> +	u16 last_sw_conf_valid_flags;
>>>> };
>>>>
>>>> /**
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> index 2e567c2..feb3d42 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
>>>> @@ -1392,6 +1392,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>>> 		struct {
>>>> 			u8 data[16];
>>>> 		} v6;
>>>> +		struct {
>>>> +			__le16 data[8];
>>>> +		} raw_v6;
>>>> 	} ipaddr;
>>>> 	__le16	flags;
>>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> index 9567702..d9c9665 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
>>>> @@ -5434,5 +5434,194 @@ i40e_add_pinfo_to_list(struct i40e_hw *hw,
>>>>
>>>> 	status = i40e_aq_write_ppp(hw, (void *)sec, sec->data_end,
>>>> 				   track_id, &offset, &info, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_add_cloud_filters
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to add cloud filters to
>>>> + * @filters: Buffer which contains the filters to be added
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Set the cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>>> + * of the function.
>>>> + *
>>>> + **/
>>>> +enum i40e_status_code
>>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 seid,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	enum i40e_status_code status;
>>>> +	u16 buff_len;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_add_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_add_cloud_filters_bb
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to add cloud filters to
>>>> + * @filters: Buffer which contains the filters in big buffer to be added
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Set the big buffer cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>>> + * function.
>>>> + *
>>>> + **/
>>>> +i40e_status
>>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	i40e_status status;
>>>> +	u16 buff_len;
>>>> +	int i;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_add_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>>> +
>>>> +	for (i = 0; i < filter_count; i++) {
>>>> +		u16 tnl_type;
>>>> +		u32 ti;
>>>> +
>>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>>> +
>>>> +		/* For Geneve, the VNI must be placed at an offset shifted
>>>> +		 * by one byte relative to the Tenant ID offset used for
>>>> +		 * the rest of the tunnels.
>>>> +		 */
>>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>>> +		}
>>>> +	}
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_rem_cloud_filters
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to remove cloud filters from
>>>> + * @filters: Buffer which contains the filters to be removed
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Remove the cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_data are filled in by the caller
>>>> + * of the function.
>>>> + *
>>>> + **/
>>>> +enum i40e_status_code
>>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 seid,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	enum i40e_status_code status;
>>>> +	u16 buff_len;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> +	return status;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_aq_rem_cloud_filters_bb
>>>> + * @hw: pointer to the hardware structure
>>>> + * @seid: VSI seid to remove cloud filters from
>>>> + * @filters: Buffer which contains the filters in big buffer to be removed
>>>> + * @filter_count: number of filters contained in the buffer
>>>> + *
>>>> + * Remove the big buffer cloud filters for a given VSI.  The contents of the
>>>> + * i40e_aqc_cloud_filters_element_bb are filled in by the caller of the
>>>> + * function.
>>>> + *
>>>> + **/
>>>> +i40e_status
>>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count)
>>>> +{
>>>> +	struct i40e_aq_desc desc;
>>>> +	struct i40e_aqc_add_remove_cloud_filters *cmd =
>>>> +	(struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw;
>>>> +	i40e_status status;
>>>> +	u16 buff_len;
>>>> +	int i;
>>>> +
>>>> +	i40e_fill_default_direct_cmd_desc(&desc,
>>>> +					  i40e_aqc_opc_remove_cloud_filters);
>>>> +
>>>> +	buff_len = filter_count * sizeof(*filters);
>>>> +	desc.datalen = cpu_to_le16(buff_len);
>>>> +	desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
>>>> +	cmd->num_filters = filter_count;
>>>> +	cmd->seid = cpu_to_le16(seid);
>>>> +	cmd->big_buffer_flag = I40E_AQC_ADD_CLOUD_CMD_BB;
>>>> +
>>>> +	for (i = 0; i < filter_count; i++) {
>>>> +		u16 tnl_type;
>>>> +		u32 ti;
>>>> +
>>>> +		tnl_type = (le16_to_cpu(filters[i].element.flags) &
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK) >>
>>>> +			   I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT;
>>>> +
>>>> +		/* For Geneve, the VNI must be placed at an offset shifted
>>>> +		 * by one byte relative to the Tenant ID offset used for
>>>> +		 * the rest of the tunnels.
>>>> +		 */
>>>> +		if (tnl_type == I40E_AQC_ADD_CLOUD_TNL_TYPE_GENEVE) {
>>>> +			ti = le32_to_cpu(filters[i].element.tenant_id);
>>>> +			filters[i].element.tenant_id = cpu_to_le32(ti << 8);
>>>> +		}
>>>> +	}
>>>> +
>>>> +	status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL);
>>>> +
>>>> 	return status;
>>>> }
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> index afcf08a..96ee608 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
>>>> @@ -69,6 +69,15 @@ static int i40e_reset(struct i40e_pf *pf);
>>>> static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired);
>>>> static void i40e_fdir_sb_setup(struct i40e_pf *pf);
>>>> static int i40e_veb_get_bw_info(struct i40e_veb *veb);
>>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>>> +				     struct i40e_cloud_filter *filter,
>>>> +				     bool add);
>>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>>> +					     struct i40e_cloud_filter *filter,
>>>> +					     bool add);
>>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>>> +				 enum i40e_admin_queue_opc list_type);
>>>> +
>>>>
>>>> /* i40e_pci_tbl - PCI Device ID Table
>>>>  *
>>>> @@ -5478,7 +5487,11 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
>>>>  **/
>>>> static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>>> {
>>>> +	enum i40e_admin_queue_err last_aq_status;
>>>> +	struct i40e_cloud_filter *cfilter;
>>>> 	struct i40e_channel *ch, *ch_tmp;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct hlist_node *node;
>>>> 	int ret, i;
>>>>
>>>> 	/* Reset rss size that was stored when reconfiguring rss for
>>>> @@ -5519,6 +5532,29 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
>>>> 				 "Failed to reset tx rate for ch->seid %u\n",
>>>> 				 ch->seid);
>>>>
>>>> +		/* delete cloud filters associated with this channel */
>>>> +		hlist_for_each_entry_safe(cfilter, node,
>>>> +					  &pf->cloud_filter_list, cloud_node) {
>>>> +			if (cfilter->seid != ch->seid)
>>>> +				continue;
>>>> +
>>>> +			hash_del(&cfilter->cloud_node);
>>>> +			if (cfilter->dst_port)
>>>> +				ret = i40e_add_del_cloud_filter_big_buf(vsi,
>>>> +									cfilter,
>>>> +									false);
>>>> +			else
>>>> +				ret = i40e_add_del_cloud_filter(vsi, cfilter,
>>>> +								false);
>>>> +			last_aq_status = pf->hw.aq.asq_last_status;
>>>> +			if (ret)
>>>> +				dev_info(&pf->pdev->dev,
>>>> +					 "Failed to delete cloud filter, err %s aq_err %s\n",
>>>> +					 i40e_stat_str(&pf->hw, ret),
>>>> +					 i40e_aq_str(&pf->hw, last_aq_status));
>>>> +			kfree(cfilter);
>>>> +		}
>>>> +
>>>> 		/* delete VSI from FW */
>>>> 		ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
>>>> 					     NULL);
>>>> @@ -5970,6 +6006,74 @@ static bool i40e_setup_channel(struct i40e_pf *pf, struct i40e_vsi *vsi,
>>>> }
>>>>
>>>> /**
>>>> + * i40e_validate_and_set_switch_mode - sets up switch mode correctly
>>>> + * @vsi: ptr to VSI which has PF backing
>>>> + * @l4type: true for TCP and false for UDP
>>>> + * @port_type: true if port is destination and false if port is source
>>>> + *
>>>> + * Sets up the switch mode if it needs to be changed, restricting it to
>>>> + * the allowed modes.
>>>> + **/
>>>> +static int i40e_validate_and_set_switch_mode(struct i40e_vsi *vsi, bool l4type,
>>>> +					     bool port_type)
>>>> +{
>>>> +	u8 mode;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct i40e_hw *hw = &pf->hw;
>>>> +	int ret;
>>>> +
>>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_dev_capabilities);
>>>> +	if (ret)
>>>> +		return -EINVAL;
>>>> +
>>>> +	if (hw->dev_caps.switch_mode) {
>>>> +		/* if switch mode is set, support mode2 (non-tunneled for
>>>> +		 * cloud filter) for now
>>>> +		 */
>>>> +		u32 switch_mode = hw->dev_caps.switch_mode &
>>>> +							I40E_SWITCH_MODE_MASK;
>>>> +		if (switch_mode >= I40E_NVM_IMAGE_TYPE_MODE1) {
>>>> +			if (switch_mode == I40E_NVM_IMAGE_TYPE_MODE2)
>>>> +				return 0;
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Invalid switch_mode (%d), only non-tunneled mode for cloud filter is supported\n",
>>>> +				hw->dev_caps.switch_mode);
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	/* port_type: true for destination port and false for source port
>>>> +	 * For now, supports only destination port type
>>>> +	 */
>>>> +	if (!port_type) {
>>>> +		dev_err(&pf->pdev->dev, "src port type not supported\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	/* Set Bit 7 to be valid */
>>>> +	mode = I40E_AQ_SET_SWITCH_BIT7_VALID;
>>>> +
>>>> +	/* Set L4type to both TCP and UDP support */
>>>> +	mode |= I40E_AQ_SET_SWITCH_L4_TYPE_BOTH;
>>>> +
>>>> +	/* Set cloud filter mode */
>>>> +	mode |= I40E_AQ_SET_SWITCH_MODE_NON_TUNNEL;
>>>> +
>>>> +	/* Prep mode field for set_switch_config */
>>>> +	ret = i40e_aq_set_switch_config(hw, pf->last_sw_conf_flags,
>>>> +					pf->last_sw_conf_valid_flags,
>>>> +					mode, NULL);
>>>> +	if (ret && hw->aq.asq_last_status != I40E_AQ_RC_ESRCH)
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"couldn't set switch config bits, err %s aq_err %s\n",
>>>> +			i40e_stat_str(hw, ret),
>>>> +			i40e_aq_str(hw,
>>>> +				    hw->aq.asq_last_status));
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>>  * i40e_create_queue_channel - function to create channel
>>>>  * @vsi: VSI to be configured
>>>>  * @ch: ptr to channel (it contains channel specific params)
>>>> @@ -6735,13 +6839,726 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
>>>> 	return ret;
>>>> }
>>>>
>>>> +/**
>>>> + * i40e_set_cld_element - sets cloud filter element data
>>>> + * @filter: cloud filter rule
>>>> + * @cld: ptr to cloud filter element data
>>>> + *
>>>> + * This is a helper function to copy data into the cloud filter element
>>>> + **/
>>>> +static inline void
>>>> +i40e_set_cld_element(struct i40e_cloud_filter *filter,
>>>> +		     struct i40e_aqc_cloud_filters_element_data *cld)
>>>> +{
>>>> +	int i, j;
>>>> +	u32 ipa;
>>>> +
>>>> +	memset(cld, 0, sizeof(*cld));
>>>> +	ether_addr_copy(cld->outer_mac, filter->dst_mac);
>>>> +	ether_addr_copy(cld->inner_mac, filter->src_mac);
>>>> +
>>>> +	if (filter->ip_version == IPV6_VERSION) {
>>>> +#define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
>>>> +		for (i = 0, j = 0; i < 4; i++, j += 2) {
>>>> +			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
>>>> +			ipa = cpu_to_le32(ipa);
>>>> +			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, 4);
>>>> +		}
>>>> +	} else {
>>>> +		ipa = be32_to_cpu(filter->dst_ip);
>>>> +		memcpy(&cld->ipaddr.v4.data, &ipa, 4);
>>>> +	}
>>>> +
>>>> +	cld->inner_vlan = cpu_to_le16(ntohs(filter->vlan_id));
>>>> +
>>>> +	/* tenant_id is not supported by FW now, once the support is enabled
>>>> +	 * fill the cld->tenant_id with cpu_to_le32(filter->tenant_id)
>>>> +	 */
>>>> +	if (filter->tenant_id)
>>>> +		return;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_add_del_cloud_filter - Add/del cloud filter
>>>> + * @vsi: pointer to VSI
>>>> + * @filter: cloud filter rule
>>>> + * @add: if true, add, if false, delete
>>>> + *
>>>> + * Add or delete a cloud filter for a specific flow spec.
>>>> + * Returns 0 if the filter was successfully added or deleted.
>>>> + **/
>>>> +static int i40e_add_del_cloud_filter(struct i40e_vsi *vsi,
>>>> +				     struct i40e_cloud_filter *filter, bool add)
>>>> +{
>>>> +	struct i40e_aqc_cloud_filters_element_data cld_filter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int ret;
>>>> +	static const u16 flag_table[128] = {
>>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN]  =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_TEN_ID] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_OMAC_TEN_ID_IMAC] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IMAC_IVLAN_TEN_ID] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID,
>>>> +		[I40E_CLOUD_FILTER_FLAGS_IIP] =
>>>> +			I40E_AQC_ADD_CLOUD_FILTER_IIP,
>>>> +	};
>>>> +
>>>> +	if (filter->flags >= ARRAY_SIZE(flag_table))
>>>> +		return I40E_ERR_CONFIG;
>>>> +
>>>> +	/* copy element needed to add cloud filter from filter */
>>>> +	i40e_set_cld_element(filter, &cld_filter);
>>>> +
>>>> +	if (filter->tunnel_type != I40E_CLOUD_TNL_TYPE_NONE)
>>>> +		cld_filter.flags = cpu_to_le16(filter->tunnel_type <<
>>>> +					     I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT);
>>>> +
>>>> +	if (filter->ip_version == IPV6_VERSION)
>>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>>> +	else
>>>> +		cld_filter.flags |= cpu_to_le16(flag_table[filter->flags] |
>>>> +						I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>>> +
>>>> +	if (add)
>>>> +		ret = i40e_aq_add_cloud_filters(&pf->hw, filter->seid,
>>>> +						&cld_filter, 1);
>>>> +	else
>>>> +		ret = i40e_aq_rem_cloud_filters(&pf->hw, filter->seid,
>>>> +						&cld_filter, 1);
>>>> +	if (ret)
>>>> +		dev_dbg(&pf->pdev->dev,
>>>> +			"Failed to %s cloud filter using l4 port %u, err %d aq_err %d\n",
>>>> +			add ? "add" : "delete", filter->dst_port, ret,
>>>> +			pf->hw.aq.asq_last_status);
>>>> +	else
>>>> +		dev_info(&pf->pdev->dev,
>>>> +			 "%s cloud filter for VSI: %d\n",
>>>> +			 add ? "Added" : "Deleted", filter->seid);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_add_del_cloud_filter_big_buf - Add/del cloud filter using big_buf
>>>> + * @vsi: pointer to VSI
>>>> + * @filter: cloud filter rule
>>>> + * @add: if true, add, if false, delete
>>>> + *
>>>> + * Add or delete a cloud filter for a specific flow spec using big buffer.
>>>> + * Returns 0 if the filter was successfully added or deleted.
>>>> + **/
>>>> +static int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
>>>> +					     struct i40e_cloud_filter *filter,
>>>> +					     bool add)
>>>> +{
>>>> +	struct i40e_aqc_cloud_filters_element_bb cld_filter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int ret;
>>>> +
>>>> +	/* Filters with both valid outer and inner MAC addresses are not supported */
>>>> +	if (is_valid_ether_addr(filter->dst_mac) &&
>>>> +	    is_valid_ether_addr(filter->src_mac))
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* Make sure the port is specified; a channel-specific cloud
>>>> +	 * filter needs a non-zero L4 port, otherwise bail out
>>>> +	 */
>>>> +	if (!filter->dst_port)
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* adding filter using src_port/src_ip is not supported at this stage */
>>>> +	if (filter->src_port || filter->src_ip ||
>>>> +	    !ipv6_addr_any((struct in6_addr *)&filter->src_ipv6))
>>>> +		return -EINVAL;
>>>> +
>>>> +	/* copy element needed to add cloud filter from filter */
>>>> +	i40e_set_cld_element(filter, &cld_filter.element);
>>>> +
>>>> +	if (is_valid_ether_addr(filter->dst_mac) ||
>>>> +	    is_valid_ether_addr(filter->src_mac) ||
>>>> +	    is_multicast_ether_addr(filter->dst_mac) ||
>>>> +	    is_multicast_ether_addr(filter->src_mac)) {
>>>> +		/* MAC + IP : unsupported mode */
>>>> +		if (filter->dst_ip)
>>>> +			return -EINVAL;
>>>> +
>>>> +		/* since we validated that L4 port must be valid before
>>>> +		 * we get here, start with respective "flags" value
>>>> +		 * and update if vlan is present or not
>>>> +		 */
>>>> +		cld_filter.element.flags =
>>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_PORT);
>>>> +
>>>> +		if (filter->vlan_id) {
>>>> +			cld_filter.element.flags =
>>>> +			cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT);
>>>> +		}
>>>> +
>>>> +	} else if (filter->dst_ip || filter->ip_version == IPV6_VERSION) {
>>>> +		cld_filter.element.flags =
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT);
>>>> +		if (filter->ip_version == IPV6_VERSION)
>>>> +			cld_filter.element.flags |=
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV6);
>>>> +		else
>>>> +			cld_filter.element.flags |=
>>>> +				cpu_to_le16(I40E_AQC_ADD_CLOUD_FLAGS_IPV4);
>>>> +	} else {
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"either mac or ip has to be valid for cloud filter\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	/* Now copy L4 port in Byte 6..7 in general fields */
>>>> +	cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0] =
>>>> +						be16_to_cpu(filter->dst_port);
>>>> +
>>>> +	if (add) {
>>>> +		bool proto_type, port_type;
>>>> +
>>>> +		proto_type = filter->ip_proto == IPPROTO_TCP;
>>>> +		port_type = !!(filter->port_type &
>>>> +			       I40E_CLOUD_FILTER_PORT_DEST);
>>>> +
>>>> +		/* For now, src port based cloud filter for channel is not
>>>> +		 * supported
>>>> +		 */
>>>> +		if (!port_type) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"unsupported port type (src port)\n");
>>>> +			return -EOPNOTSUPP;
>>>> +		}
>>>> +
>>>> +		/* Validate current device switch mode, change if necessary */
>>>> +		ret = i40e_validate_and_set_switch_mode(vsi, proto_type,
>>>> +							port_type);
>>>> +		if (ret) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"failed to set switch mode, ret %d\n",
>>>> +				ret);
>>>> +			return ret;
>>>> +		}
>>>> +
>>>> +		ret = i40e_aq_add_cloud_filters_bb(&pf->hw, filter->seid,
>>>> +						   &cld_filter, 1);
>>>> +	} else {
>>>> +		ret = i40e_aq_rem_cloud_filters_bb(&pf->hw, filter->seid,
>>>> +						   &cld_filter, 1);
>>>> +	}
>>>> +
>>>> +	if (ret)
>>>> +		dev_dbg(&pf->pdev->dev,
>>>> +			"Failed to %s cloud filter(big buffer) err %d aq_err %d\n",
>>>> +			add ? "add" : "delete", ret, pf->hw.aq.asq_last_status);
>>>> +	else
>>>> +		dev_info(&pf->pdev->dev,
>>>> +			 "%s cloud filter for VSI: %d, L4 port: %d\n",
>>>> +			 add ? "add" : "delete", filter->seid,
>>>> +			 ntohs(filter->dst_port));
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_parse_cls_flower - Parse tc flower filters provided by kernel
>>>> + * @vsi: Pointer to VSI
>>>> + * @f: Pointer to struct tc_cls_flower_offload
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
>>>> +				 struct tc_cls_flower_offload *f,
>>>> +				 struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	u16 addr_type = 0;
>>>> +	u8 field_flags = 0;
>>>> +
>>>> +	if (f->dissector->used_keys &
>>>> +	    ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_BASIC) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_VLAN) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_PORTS) |
>>>> +	      BIT(FLOW_DISSECTOR_KEY_ENC_KEYID))) {
>>>> +		dev_err(&pf->pdev->dev, "Unsupported key used: 0x%x\n",
>>>> +			f->dissector->used_keys);
>>>> +		return -EOPNOTSUPP;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
>>>> +		struct flow_dissector_key_keyid *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>>> +						  f->key);
>>>> +
>>>> +		struct flow_dissector_key_keyid *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ENC_KEYID,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->keyid != 0)
>>>> +			field_flags |= I40E_CLOUD_FIELD_TEN_ID;
>>>> +
>>>> +		filter->tenant_id = be32_to_cpu(key->keyid);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
>>>> +		struct flow_dissector_key_basic *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_BASIC,
>>>> +						  f->key);
>>>> +
>>>> +		filter->ip_proto = key->ip_proto;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
>>>> +		struct flow_dissector_key_eth_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>>> +						  f->key);
>>>> +
>>>> +		struct flow_dissector_key_eth_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_ETH_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		/* use is_broadcast and is_zero to check for an all-ones or all-zeros mask */
>>>> +		if (!is_zero_ether_addr(mask->dst)) {
>>>> +			if (is_broadcast_ether_addr(mask->dst)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_OMAC;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ether dest mask %pM\n",
>>>> +					mask->dst);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (!is_zero_ether_addr(mask->src)) {
>>>> +			if (is_broadcast_ether_addr(mask->src)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IMAC;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ether src mask %pM\n",
>>>> +					mask->src);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +		ether_addr_copy(filter->dst_mac, key->dst);
>>>> +		ether_addr_copy(filter->src_mac, key->src);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_VLAN)) {
>>>> +		struct flow_dissector_key_vlan *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_vlan *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_VLAN,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->vlan_id) {
>>>> +			if (mask->vlan_id == VLAN_VID_MASK) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IVLAN;
>>>> +
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad vlan mask 0x%04x\n",
>>>> +					mask->vlan_id);
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		filter->vlan_id = cpu_to_be16(key->vlan_id);
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_CONTROL)) {
>>>> +		struct flow_dissector_key_control *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_CONTROL,
>>>> +						  f->key);
>>>> +
>>>> +		addr_type = key->addr_type;
>>>> +	}
>>>> +
>>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
>>>> +		struct flow_dissector_key_ipv4_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ipv4_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV4_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->dst) {
>>>> +			if (mask->dst == cpu_to_be32(0xffffffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ip dst mask 0x%08x\n",
>>>> +					be32_to_cpu(mask->dst));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (mask->src) {
>>>> +			if (mask->src == cpu_to_be32(0xffffffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad ip src mask 0x%08x\n",
>>>> +					be32_to_cpu(mask->src));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (field_flags & I40E_CLOUD_FIELD_TEN_ID) {
>>>> +			dev_err(&pf->pdev->dev, "Tenant id not allowed for ip filter\n");
>>>> +			return I40E_ERR_CONFIG;
>>>> +		}
>>>> +		filter->dst_ip = key->dst;
>>>> +		filter->src_ip = key->src;
>>>> +		filter->ip_version = IPV4_VERSION;
>>>> +	}
>>>> +
>>>> +	if (addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
>>>> +		struct flow_dissector_key_ipv6_addrs *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ipv6_addrs *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_IPV6_ADDRS,
>>>> +						  f->mask);
>>>> +
>>>> +		/* The src and dest IPv6 addresses should not be the
>>>> +		 * loopback address (0:0:0:0:0:0:0:1), i.e. ::1
>>>> +		 */
>>>> +		if (ipv6_addr_loopback(&key->dst) ||
>>>> +		    ipv6_addr_loopback(&key->src)) {
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Bad ipv6, addr is LOOPBACK\n");
>>>> +			return I40E_ERR_CONFIG;
>>>> +		}
>>>> +		if (!ipv6_addr_any(&mask->dst) || !ipv6_addr_any(&mask->src))
>>>> +			field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +
>>>> +		memcpy(&filter->src_ipv6, &key->src.s6_addr32,
>>>> +		       sizeof(filter->src_ipv6));
>>>> +		memcpy(&filter->dst_ipv6, &key->dst.s6_addr32,
>>>> +		       sizeof(filter->dst_ipv6));
>>>> +
>>>> +		/* mark it as IPv6 filter, to be used later */
>>>> +		filter->ip_version = IPV6_VERSION;
>>>> +	}
>>>> +
>>>> +	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_PORTS)) {
>>>> +		struct flow_dissector_key_ports *key =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>>> +						  f->key);
>>>> +		struct flow_dissector_key_ports *mask =
>>>> +			skb_flow_dissector_target(f->dissector,
>>>> +						  FLOW_DISSECTOR_KEY_PORTS,
>>>> +						  f->mask);
>>>> +
>>>> +		if (mask->src) {
>>>> +			if (mask->src == cpu_to_be16(0xffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad src port mask 0x%04x\n",
>>>> +					be16_to_cpu(mask->src));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		if (mask->dst) {
>>>> +			if (mask->dst == cpu_to_be16(0xffff)) {
>>>> +				field_flags |= I40E_CLOUD_FIELD_IIP;
>>>> +			} else {
>>>> +				dev_err(&pf->pdev->dev, "Bad dst port mask 0x%04x\n",
>>>> +					be16_to_cpu(mask->dst));
>>>> +				return I40E_ERR_CONFIG;
>>>> +			}
>>>> +		}
>>>> +
>>>> +		filter->dst_port = key->dst;
>>>> +		filter->src_port = key->src;
>>>> +
>>>> +		/* For now, only supports destination port */
>>>> +		filter->port_type |= I40E_CLOUD_FILTER_PORT_DEST;
>>>> +
>>>> +		switch (filter->ip_proto) {
>>>> +		case IPPROTO_TCP:
>>>> +		case IPPROTO_UDP:
>>>> +			break;
>>>> +		default:
>>>> +			dev_err(&pf->pdev->dev,
>>>> +				"Only UDP and TCP transport are supported\n");
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	}
>>>> +	filter->flags = field_flags;
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_handle_redirect_action: Forward to a traffic class on the device
>>>> + * @vsi: Pointer to VSI
>>>> + * @ifindex: ifindex of the device to forward to
>>>> + * @tc: traffic class index on the device
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_handle_redirect_action(struct i40e_vsi *vsi, int ifindex, u8 tc,
>>>> +				       struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	struct i40e_channel *ch, *ch_tmp;
>>>> +
>>>> +	/* redirect to a traffic class on the same device */
>>>> +	if (vsi->netdev->ifindex == ifindex) {
>>>> +		if (tc == 0) {
>>>> +			filter->seid = vsi->seid;
>>>> +			return 0;
>>>> +		} else if (vsi->tc_config.enabled_tc & BIT(tc)) {
>>>> +			if (!filter->dst_port) {
>>>> +				dev_err(&vsi->back->pdev->dev,
>>>> +					"Specify a destination port to redirect to a non-default traffic class\n");
>>>> +				return -EINVAL;
>>>> +			}
>>>> +			if (list_empty(&vsi->ch_list))
>>>> +				return -EINVAL;
>>>> +			list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list,
>>>> +						 list) {
>>>> +				if (ch->seid == vsi->tc_seid_map[tc])
>>>> +					filter->seid = ch->seid;
>>>> +			}
>>>> +			return 0;
>>>> +		}
>>>> +	}
>>>> +	return -EINVAL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_parse_tc_actions - Parse tc actions
>>>> + * @vsi: Pointer to VSI
>>>> + * @exts: Pointer to the tc actions (struct tcf_exts)
>>>> + * @filter: Pointer to cloud filter structure
>>>> + *
>>>> + **/
>>>> +static int i40e_parse_tc_actions(struct i40e_vsi *vsi, struct tcf_exts *exts,
>>>> +				 struct i40e_cloud_filter *filter)
>>>> +{
>>>> +	const struct tc_action *a;
>>>> +	LIST_HEAD(actions);
>>>> +	int err;
>>>> +
>>>> +	if (!tcf_exts_has_actions(exts))
>>>> +		return -EINVAL;
>>>> +
>>>> +	tcf_exts_to_list(exts, &actions);
>>>> +	list_for_each_entry(a, &actions, list) {
>>>> +		/* Drop action */
>>>> +		if (is_tcf_gact_shot(a)) {
>>>> +			dev_err(&vsi->back->pdev->dev,
>>>> +				"Cloud filters do not support the drop action.\n");
>>>> +			return -EOPNOTSUPP;
>>>> +		}
>>>> +
>>>> +		/* Redirect to a traffic class on the same device */
>>>> +		if (!is_tcf_mirred_egress_redirect(a) && is_tcf_mirred_tc(a)) {
>>>> +			int ifindex = tcf_mirred_ifindex(a);
>>>> +			u8 tc = tcf_mirred_tc(a);
>>>> +
>>>> +			err = i40e_handle_redirect_action(vsi, ifindex, tc,
>>>> +							  filter);
>>>> +			if (err == 0)
>>>> +				return err;
>>>> +		}
>>>> +	}
>>>> +	return -EINVAL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_configure_clsflower - Configure tc flower filters
>>>> + * @vsi: Pointer to VSI
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + *
>>>> + **/
>>>> +static int i40e_configure_clsflower(struct i40e_vsi *vsi,
>>>> +				    struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int err = 0;
>>>> +
>>>> +	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
>>>> +	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
>>>> +		return -EBUSY;
>>>> +
>>>> +	if (pf->fdir_pf_active_filters ||
>>>> +	    (!hlist_empty(&pf->fdir_filter_list))) {
>>>> +		dev_err(&vsi->back->pdev->dev,
>>>> +			"Flow Director Sideband filters exist, turn ntuple off to configure cloud filters\n");
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	if (vsi->back->flags & I40E_FLAG_FD_SB_ENABLED) {
>>>> +		dev_err(&vsi->back->pdev->dev,
>>>> +			"Disable Flow Director Sideband, configuring Cloud filters via tc-flower\n");
>>>> +		vsi->back->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +		vsi->back->flags |= I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +	}
>>>> +
>>>> +	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
>>>> +	if (!filter)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	filter->cookie = cls_flower->cookie;
>>>> +
>>>> +	err = i40e_parse_cls_flower(vsi, cls_flower, filter);
>>>> +	if (err < 0)
>>>> +		goto err;
>>>> +
>>>> +	err = i40e_parse_tc_actions(vsi, cls_flower->exts, filter);
>>>> +	if (err < 0)
>>>> +		goto err;
>>>> +
>>>> +	/* Add cloud filter */
>>>> +	if (filter->dst_port)
>>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, true);
>>>> +	else
>>>> +		err = i40e_add_del_cloud_filter(vsi, filter, true);
>>>> +
>>>> +	if (err) {
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"Failed to add cloud filter, err %s\n",
>>>> +			i40e_stat_str(&pf->hw, err));
>>>> +		err = i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>>> +		goto err;
>>>> +	}
>>>> +
>>>> +	/* add filter to the ordered list */
>>>> +	INIT_HLIST_NODE(&filter->cloud_node);
>>>> +
>>>> +	hlist_add_head(&filter->cloud_node, &pf->cloud_filter_list);
>>>> +
>>>> +	pf->num_cloud_filters++;
>>>> +
>>>> +	return err;
>>>> +err:
>>>> +	kfree(filter);
>>>> +	return err;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_find_cloud_filter - Find the cloud filter in the list
>>>> + * @vsi: Pointer to VSI
>>>> + * @cookie: filter specific cookie
>>>> + *
>>>> + **/
>>>> +static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
>>>> +							unsigned long *cookie)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct hlist_node *node2;
>>>> +
>>>> +	hlist_for_each_entry_safe(filter, node2,
>>>> +				  &vsi->back->cloud_filter_list, cloud_node)
>>>> +		if (!memcmp(cookie, &filter->cookie, sizeof(filter->cookie)))
>>>> +			return filter;
>>>> +	return NULL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_delete_clsflower - Remove tc flower filters
>>>> + * @vsi: Pointer to VSI
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + *
>>>> + **/
>>>> +static int i40e_delete_clsflower(struct i40e_vsi *vsi,
>>>> +				 struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_cloud_filter *filter = NULL;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	int err = 0;
>>>> +
>>>> +	filter = i40e_find_cloud_filter(vsi, &cls_flower->cookie);
>>>> +
>>>> +	if (!filter)
>>>> +		return -EINVAL;
>>>> +
>>>> +	hash_del(&filter->cloud_node);
>>>> +
>>>> +	if (filter->dst_port)
>>>> +		err = i40e_add_del_cloud_filter_big_buf(vsi, filter, false);
>>>> +	else
>>>> +		err = i40e_add_del_cloud_filter(vsi, filter, false);
>>>> +	if (err) {
>>>> +		kfree(filter);
>>>> +		dev_err(&pf->pdev->dev,
>>>> +			"Failed to delete cloud filter, err %s\n",
>>>> +			i40e_stat_str(&pf->hw, err));
>>>> +		return i40e_aq_rc_to_posix(err, pf->hw.aq.asq_last_status);
>>>> +	}
>>>> +
>>>> +	kfree(filter);
>>>> +	pf->num_cloud_filters--;
>>>> +
>>>> +	if (!pf->num_cloud_filters)
>>>> +		if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>>> +		    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>>> +			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +		}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * i40e_setup_tc_cls_flower - flower classifier offloads
>>>> + * @netdev: net device to configure
>>>> + * @cls_flower: Pointer to struct tc_cls_flower_offload
>>>> + **/
>>>> +static int i40e_setup_tc_cls_flower(struct net_device *netdev,
>>>> +				    struct tc_cls_flower_offload *cls_flower)
>>>> +{
>>>> +	struct i40e_netdev_priv *np = netdev_priv(netdev);
>>>> +	struct i40e_vsi *vsi = np->vsi;
>>>> +
>>>> +	if (!is_classid_clsact_ingress(cls_flower->common.classid) ||
>>>> +	    cls_flower->common.chain_index)
>>>> +		return -EOPNOTSUPP;
>>>> +
>>>> +	switch (cls_flower->command) {
>>>> +	case TC_CLSFLOWER_REPLACE:
>>>> +		return i40e_configure_clsflower(vsi, cls_flower);
>>>> +	case TC_CLSFLOWER_DESTROY:
>>>> +		return i40e_delete_clsflower(vsi, cls_flower);
>>>> +	case TC_CLSFLOWER_STATS:
>>>> +		return -EOPNOTSUPP;
>>>> +	default:
>>>> +		return -EINVAL;
>>>> +	}
>>>> +}
>>>> +
>>>> static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
>>>> 			   void *type_data)
>>>> {
>>>> -	if (type != TC_SETUP_MQPRIO)
>>>> +	switch (type) {
>>>> +	case TC_SETUP_MQPRIO:
>>>> +		return i40e_setup_tc(netdev, type_data);
>>>> +	case TC_SETUP_CLSFLOWER:
>>>> +		return i40e_setup_tc_cls_flower(netdev, type_data);
>>>> +	default:
>>>> 		return -EOPNOTSUPP;
>>>> -
>>>> -	return i40e_setup_tc(netdev, type_data);
>>>> +	}
>>>> }
>>>>
>>>> /**
>>>> @@ -6939,6 +7756,13 @@ static void i40e_cloud_filter_exit(struct i40e_pf *pf)
>>>> 		kfree(cfilter);
>>>> 	}
>>>> 	pf->num_cloud_filters = 0;
>>>> +
>>>> +	if ((pf->flags & I40E_FLAG_FD_SB_TO_CLOUD_FILTER) &&
>>>> +	    !(pf->flags & I40E_FLAG_FD_SB_INACTIVE)) {
>>>> +		pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +		pf->flags &= ~I40E_FLAG_FD_SB_TO_CLOUD_FILTER;
>>>> +		pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +	}
>>>> }
>>>>
>>>> /**
>>>> @@ -8046,7 +8870,8 @@ static int i40e_reconstitute_veb(struct i40e_veb *veb)
>>>>  * i40e_get_capabilities - get info about the HW
>>>>  * @pf: the PF struct
>>>>  **/
>>>> -static int i40e_get_capabilities(struct i40e_pf *pf)
>>>> +static int i40e_get_capabilities(struct i40e_pf *pf,
>>>> +				 enum i40e_admin_queue_opc list_type)
>>>> {
>>>> 	struct i40e_aqc_list_capabilities_element_resp *cap_buf;
>>>> 	u16 data_size;
>>>> @@ -8061,9 +8886,8 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>>
>>>> 		/* this loads the data into the hw struct for us */
>>>> 		err = i40e_aq_discover_capabilities(&pf->hw, cap_buf, buf_len,
>>>> -					    &data_size,
>>>> -					    i40e_aqc_opc_list_func_capabilities,
>>>> -					    NULL);
>>>> +						    &data_size, list_type,
>>>> +						    NULL);
>>>> 		/* data loaded, buffer no longer needed */
>>>> 		kfree(cap_buf);
>>>>
>>>> @@ -8080,26 +8904,44 @@ static int i40e_get_capabilities(struct i40e_pf *pf)
>>>> 		}
>>>> 	} while (err);
>>>>
>>>> -	if (pf->hw.debug_mask & I40E_DEBUG_USER)
>>>> -		dev_info(&pf->pdev->dev,
>>>> -			 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>>> -			 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>>> -			 pf->hw.func_caps.num_msix_vectors,
>>>> -			 pf->hw.func_caps.num_msix_vectors_vf,
>>>> -			 pf->hw.func_caps.fd_filters_guaranteed,
>>>> -			 pf->hw.func_caps.fd_filters_best_effort,
>>>> -			 pf->hw.func_caps.num_tx_qp,
>>>> -			 pf->hw.func_caps.num_vsis);
>>>> -
>>>> +	if (pf->hw.debug_mask & I40E_DEBUG_USER) {
>>>> +		if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "pf=%d, num_vfs=%d, msix_pf=%d, msix_vf=%d, fd_g=%d, fd_b=%d, pf_max_q=%d num_vsi=%d\n",
>>>> +				 pf->hw.pf_id, pf->hw.func_caps.num_vfs,
>>>> +				 pf->hw.func_caps.num_msix_vectors,
>>>> +				 pf->hw.func_caps.num_msix_vectors_vf,
>>>> +				 pf->hw.func_caps.fd_filters_guaranteed,
>>>> +				 pf->hw.func_caps.fd_filters_best_effort,
>>>> +				 pf->hw.func_caps.num_tx_qp,
>>>> +				 pf->hw.func_caps.num_vsis);
>>>> +		} else if (list_type == i40e_aqc_opc_list_dev_capabilities) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "switch_mode=0x%04x, function_valid=0x%08x\n",
>>>> +				 pf->hw.dev_caps.switch_mode,
>>>> +				 pf->hw.dev_caps.valid_functions);
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "SR-IOV=%d, num_vfs for all function=%u\n",
>>>> +				 pf->hw.dev_caps.sr_iov_1_1,
>>>> +				 pf->hw.dev_caps.num_vfs);
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "num_vsis=%u, num_rx:%u, num_tx=%u\n",
>>>> +				 pf->hw.dev_caps.num_vsis,
>>>> +				 pf->hw.dev_caps.num_rx_qp,
>>>> +				 pf->hw.dev_caps.num_tx_qp);
>>>> +		}
>>>> +	}
>>>> +	if (list_type == i40e_aqc_opc_list_func_capabilities) {
>>>> #define DEF_NUM_VSI (1 + (pf->hw.func_caps.fcoe ? 1 : 0) \
>>>> 		       + pf->hw.func_caps.num_vfs)
>>>> -	if (pf->hw.revision_id == 0 && (DEF_NUM_VSI > pf->hw.func_caps.num_vsis)) {
>>>> -		dev_info(&pf->pdev->dev,
>>>> -			 "got num_vsis %d, setting num_vsis to %d\n",
>>>> -			 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>>> -		pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>>> +		if (pf->hw.revision_id == 0 &&
>>>> +		    (pf->hw.func_caps.num_vsis < DEF_NUM_VSI)) {
>>>> +			dev_info(&pf->pdev->dev,
>>>> +				 "got num_vsis %d, setting num_vsis to %d\n",
>>>> +				 pf->hw.func_caps.num_vsis, DEF_NUM_VSI);
>>>> +			pf->hw.func_caps.num_vsis = DEF_NUM_VSI;
>>>> +		}
>>>> 	}
>>>> -
>>>> 	return 0;
>>>> }
>>>>
>>>> @@ -8141,6 +8983,7 @@ static void i40e_fdir_sb_setup(struct i40e_pf *pf)
>>>> 		if (!vsi) {
>>>> 			dev_info(&pf->pdev->dev, "Couldn't create FDir VSI\n");
>>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 			return;
>>>> 		}
>>>> 	}
>>>> @@ -8163,6 +9006,48 @@ static void i40e_fdir_teardown(struct i40e_pf *pf)
>>>> }
>>>>
>>>> /**
>>>> + * i40e_rebuild_cloud_filters - Rebuilds cloud filters for VSIs
>>>> + * @vsi: PF main vsi
>>>> + * @seid: seid of main or channel VSIs
>>>> + *
>>>> + * Rebuilds cloud filters associated with main VSI and channel VSIs if they
>>>> + * existed before reset
>>>> + **/
>>>> +static int i40e_rebuild_cloud_filters(struct i40e_vsi *vsi, u16 seid)
>>>> +{
>>>> +	struct i40e_cloud_filter *cfilter;
>>>> +	struct i40e_pf *pf = vsi->back;
>>>> +	struct hlist_node *node;
>>>> +	i40e_status ret;
>>>> +
>>>> +	/* Add cloud filters back if they exist */
>>>> +	if (hlist_empty(&pf->cloud_filter_list))
>>>> +		return 0;
>>>> +
>>>> +	hlist_for_each_entry_safe(cfilter, node, &pf->cloud_filter_list,
>>>> +				  cloud_node) {
>>>> +		if (cfilter->seid != seid)
>>>> +			continue;
>>>> +
>>>> +		if (cfilter->dst_port)
>>>> +			ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter,
>>>> +								true);
>>>> +		else
>>>> +			ret = i40e_add_del_cloud_filter(vsi, cfilter, true);
>>>> +
>>>> +		if (ret) {
>>>> +			dev_dbg(&pf->pdev->dev,
>>>> +				"Failed to rebuild cloud filter, err %s aq_err %s\n",
>>>> +				i40e_stat_str(&pf->hw, ret),
>>>> +				i40e_aq_str(&pf->hw,
>>>> +					    pf->hw.aq.asq_last_status));
>>>> +			return ret;
>>>> +		}
>>>> +	}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>>  * i40e_rebuild_channels - Rebuilds channel VSIs if they existed before reset
>>>>  * @vsi: PF main vsi
>>>>  *
>>>> @@ -8199,6 +9084,13 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi)
>>>> 						I40E_BW_CREDIT_DIVISOR,
>>>> 				ch->seid);
>>>> 		}
>>>> +		ret = i40e_rebuild_cloud_filters(vsi, ch->seid);
>>>> +		if (ret) {
>>>> +			dev_dbg(&vsi->back->pdev->dev,
>>>> +				"Failed to rebuild cloud filters for channel VSI %u\n",
>>>> +				ch->seid);
>>>> +			return ret;
>>>> +		}
>>>> 	}
>>>> 	return 0;
>>>> }
>>>> @@ -8365,7 +9257,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>>> 		i40e_verify_eeprom(pf);
>>>>
>>>> 	i40e_clear_pxe_mode(hw);
>>>> -	ret = i40e_get_capabilities(pf);
>>>> +	ret = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>>> 	if (ret)
>>>> 		goto end_core_reset;
>>>>
>>>> @@ -8482,6 +9374,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
>>>> 			goto end_unlock;
>>>> 	}
>>>>
>>>> +	ret = i40e_rebuild_cloud_filters(vsi, vsi->seid);
>>>> +	if (ret)
>>>> +		goto end_unlock;
>>>> +
>>>> 	/* PF Main VSI is rebuild by now, go ahead and rebuild channel VSIs
>>>> 	 * for this main VSI if they exist
>>>> 	 */
>>>> @@ -9404,6 +10300,7 @@ static int i40e_init_msix(struct i40e_pf *pf)
>>>> 	    (pf->num_fdsb_msix == 0)) {
>>>> 		dev_info(&pf->pdev->dev, "Sideband Flowdir disabled, not enough MSI-X vectors\n");
>>>> 		pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	}
>>>> 	if ((pf->flags & I40E_FLAG_VMDQ_ENABLED) &&
>>>> 	    (pf->num_vmdq_msix == 0)) {
>>>> @@ -9521,6 +10418,7 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
>>>> 				       I40E_FLAG_FD_SB_ENABLED	|
>>>> 				       I40E_FLAG_FD_ATR_ENABLED	|
>>>> 				       I40E_FLAG_VMDQ_ENABLED);
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>>
>>>> 			/* rework the queue expectations without MSIX */
>>>> 			i40e_determine_queue_usage(pf);
>>>> @@ -10263,9 +11161,13 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>>> 		/* Enable filters and mark for reset */
>>>> 		if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
>>>> 			need_reset = true;
>>>> -		/* enable FD_SB only if there is MSI-X vector */
>>>> -		if (pf->num_fdsb_msix > 0)
>>>> +		/* enable FD_SB only if there is MSI-X vector and no cloud
>>>> +		 * filters exist
>>>> +		 */
>>>> +		if (pf->num_fdsb_msix > 0 && !pf->num_cloud_filters) {
>>>> 			pf->flags |= I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags &= ~I40E_FLAG_FD_SB_INACTIVE;
>>>> +		}
>>>> 	} else {
>>>> 		/* turn off filters, mark for reset and clear SW filter list */
>>>> 		if (pf->flags & I40E_FLAG_FD_SB_ENABLED) {
>>>> @@ -10274,6 +11176,8 @@ bool i40e_set_ntuple(struct i40e_pf *pf, netdev_features_t features)
>>>> 		}
>>>> 		pf->flags &= ~(I40E_FLAG_FD_SB_ENABLED |
>>>> 			       I40E_FLAG_FD_SB_AUTO_DISABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> +
>>>> 		/* reset fd counters */
>>>> 		pf->fd_add_err = 0;
>>>> 		pf->fd_atr_cnt = 0;
>>>> @@ -10857,7 +11761,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
>>>> 		netdev->hw_features |= NETIF_F_NTUPLE;
>>>> 	hw_features = hw_enc_features		|
>>>> 		      NETIF_F_HW_VLAN_CTAG_TX	|
>>>> -		      NETIF_F_HW_VLAN_CTAG_RX;
>>>> +		      NETIF_F_HW_VLAN_CTAG_RX	|
>>>> +		      NETIF_F_HW_TC;
>>>>
>>>> 	netdev->hw_features |= hw_features;
>>>>
>>>> @@ -12159,8 +13064,10 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>>> 	*/
>>>>
>>>> 	if ((pf->hw.pf_id == 0) &&
>>>> -	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
>>>> +	    !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
>>>> 		flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
>>>> +		pf->last_sw_conf_flags = flags;
>>>> +	}
>>>>
>>>> 	if (pf->hw.pf_id == 0) {
>>>> 		u16 valid_flags;
>>>> @@ -12176,6 +13083,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
>>>> 					     pf->hw.aq.asq_last_status));
>>>> 			/* not a fatal problem, just keep going */
>>>> 		}
>>>> +		pf->last_sw_conf_valid_flags = valid_flags;
>>>> 	}
>>>>
>>>> 	/* first time setup */
>>>> @@ -12273,6 +13181,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			       I40E_FLAG_DCB_ENABLED	|
>>>> 			       I40E_FLAG_SRIOV_ENABLED	|
>>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	} else if (!(pf->flags & (I40E_FLAG_RSS_ENABLED |
>>>> 				  I40E_FLAG_FD_SB_ENABLED |
>>>> 				  I40E_FLAG_FD_ATR_ENABLED |
>>>> @@ -12287,6 +13196,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			       I40E_FLAG_FD_ATR_ENABLED	|
>>>> 			       I40E_FLAG_DCB_ENABLED	|
>>>> 			       I40E_FLAG_VMDQ_ENABLED);
>>>> +		pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 	} else {
>>>> 		/* Not enough queues for all TCs */
>>>> 		if ((pf->flags & I40E_FLAG_DCB_CAPABLE) &&
>>>> @@ -12310,6 +13220,7 @@ static void i40e_determine_queue_usage(struct i40e_pf *pf)
>>>> 			queues_left -= 1; /* save 1 queue for FD */
>>>> 		} else {
>>>> 			pf->flags &= ~I40E_FLAG_FD_SB_ENABLED;
>>>> +			pf->flags |= I40E_FLAG_FD_SB_INACTIVE;
>>>> 			dev_info(&pf->pdev->dev, "not enough queues for Flow Director. Flow Director feature is disabled\n");
>>>> 		}
>>>> 	}
>>>> @@ -12613,7 +13524,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>>> 		dev_warn(&pdev->dev, "This device is a pre-production adapter/LOM. Please be aware there may be issues with your hardware. If you are experiencing problems please contact your Intel or hardware representative who provided you with this hardware.\n");
>>>>
>>>> 	i40e_clear_pxe_mode(hw);
>>>> -	err = i40e_get_capabilities(pf);
>>>> +	err = i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities);
>>>> 	if (err)
>>>> 		goto err_adminq_setup;
>>>>
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> index 92869f5..3bb6659 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
>>>> @@ -283,6 +283,22 @@ i40e_status i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
>>>> 		struct i40e_asq_cmd_details *cmd_details);
>>>> i40e_status i40e_aq_resume_port_tx(struct i40e_hw *hw,
>>>> 				   struct i40e_asq_cmd_details *cmd_details);
>>>> +i40e_status
>>>> +i40e_aq_add_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count);
>>>> +enum i40e_status_code
>>>> +i40e_aq_add_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count);
>>>> +enum i40e_status_code
>>>> +i40e_aq_rem_cloud_filters(struct i40e_hw *hw, u16 vsi,
>>>> +			  struct i40e_aqc_cloud_filters_element_data *filters,
>>>> +			  u8 filter_count);
>>>> +i40e_status
>>>> +i40e_aq_rem_cloud_filters_bb(struct i40e_hw *hw, u16 seid,
>>>> +			     struct i40e_aqc_cloud_filters_element_bb *filters,
>>>> +			     u8 filter_count);
>>>> i40e_status i40e_read_lldp_cfg(struct i40e_hw *hw,
>>>> 			       struct i40e_lldp_variables *lldp_cfg);
>>>> /* i40e_common */
>>>> diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> index c019f46..af38881 100644
>>>> --- a/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
>>>> @@ -287,6 +287,7 @@ struct i40e_hw_capabilities {
>>>> #define I40E_NVM_IMAGE_TYPE_MODE1	0x6
>>>> #define I40E_NVM_IMAGE_TYPE_MODE2	0x7
>>>> #define I40E_NVM_IMAGE_TYPE_MODE3	0x8
>>>> +#define I40E_SWITCH_MODE_MASK		0xF
>>>>
>>>> 	u32  management_mode;
>>>> 	u32  mng_protocols_over_mctp;
>>>> diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> index b8c78bf..4fe27f0 100644
>>>> --- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> +++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
>>>> @@ -1360,6 +1360,9 @@ struct i40e_aqc_cloud_filters_element_data {
>>>> 		struct {
>>>> 			u8 data[16];
>>>> 		} v6;
>>>> +		struct {
>>>> +			__le16 data[8];
>>>> +		} raw_v6;
>>>> 	} ipaddr;
>>>> 	__le16	flags;
>>>> #define I40E_AQC_ADD_CLOUD_FILTER_SHIFT			0
>>>>



Thread overview: 30+ messages
2017-09-13  9:59 [RFC PATCH v3 0/7] tc-flower based cloud filters in i40e Amritha Nambiar
2017-09-13  9:59 ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 1/7] tc_mirred: Clean up white-space noise Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 2/7] sched: act_mirred: Traffic class option for mirror/redirect action Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13 13:18   ` Jiri Pirko
2017-09-13 13:18     ` [Intel-wired-lan] " Jiri Pirko
2017-09-14  7:58     ` Nambiar, Amritha
2017-09-14  7:58       ` [Intel-wired-lan] " Nambiar, Amritha
2017-09-13  9:59 ` [RFC PATCH v3 3/7] i40e: Map TCs with the VSI seids Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 4/7] i40e: Cloud filter mode for set_switch_config command Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 5/7] i40e: Admin queue definitions for cloud filters Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 6/7] i40e: Clean up of " Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13  9:59 ` [RFC PATCH v3 7/7] i40e: Enable cloud filters via tc-flower Amritha Nambiar
2017-09-13  9:59   ` [Intel-wired-lan] " Amritha Nambiar
2017-09-13 13:26   ` Jiri Pirko
2017-09-13 13:26     ` [Intel-wired-lan] " Jiri Pirko
2017-09-14  8:00     ` Nambiar, Amritha
2017-09-14  8:00       ` [Intel-wired-lan] " Nambiar, Amritha
2017-09-28 19:22       ` Nambiar, Amritha
2017-09-28 19:22         ` [Intel-wired-lan] " Nambiar, Amritha
2017-09-29  6:20         ` Jiri Pirko
2017-09-29  6:20           ` [Intel-wired-lan] " Jiri Pirko
2017-09-13 10:12 ` [RFC PATCH v3 0/7] tc-flower based cloud filters in i40e Jiri Pirko
2017-09-13 10:12   ` [Intel-wired-lan] " Jiri Pirko
