netdev.vger.kernel.org archive mirror
* [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12
@ 2019-11-12 17:13 Saeed Mahameed
  2019-11-12 17:13 ` [net-next 1/8] net/mlx5: DR, Fix matcher builders select check Saeed Mahameed
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Saeed Mahameed

Hi Dave,

This series adds miscellaneous updates to the mlx5 driver.
For more information, please see the tag log below.

Highlights:
1) Devlink reload support
2) SRIOV VF ACL vlan trunk via tc flower

Please pull and let me know if there is any problem.

Please note that the series starts with a merge of the mlx5-next branch,
to resolve and avoid dependencies with the rdma tree.

Thanks,
Saeed.

---
The following changes since commit 73bb3b4ca77d012518d235838eeb451df96d5bc0:

  Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux (2019-11-12 08:59:12 -0800)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git tags/mlx5-updates-2019-11-12

for you to fetch changes up to 9beec84b9d9adda3daa93cb93e745a476c6bf565:

  net/mlx5: Add vf ACL access via tc flower (2019-11-12 08:59:20 -0800)

----------------------------------------------------------------
mlx5-updates-2019-11-12

1) Merge mlx5-next for devlink reload dependencies
2) Devlink reload support
3) SRIOV VF ACL vlan trunk via tc flower
4) Misc cleanup

----------------------------------------------------------------
Alex Vesker (1):
      net/mlx5: DR, Fix matcher builders select check

Ariel Levkovich (2):
      net/mlx5: Add eswitch ACL vlan trunk api
      net/mlx5: Add vf ACL access via tc flower

Eli Cohen (2):
      net/mlx5: Remove redundant NULL initializations
      net/mlx5e: Fix error flow cleanup in mlx5e_tc_tun_create_header_ipv4/6

Michael Guralnik (2):
      net/mlx5e: Set netdev name space on creation
      net/mlx5: Add devlink reload

Parav Pandit (1):
      net/mlx5: Read num_vfs before disabling SR-IOV

 drivers/net/ethernet/mellanox/mlx5/core/devlink.c  |  20 +
 .../net/ethernet/mellanox/mlx5/core/en/tc_tun.c    |  26 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |   2 +
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   |   2 +
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 139 +++++-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c  | 517 ++++++++++++++++-----
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h  |  30 +-
 .../ethernet/mellanox/mlx5/core/eswitch_offloads.c |   8 +-
 drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h |   5 +
 drivers/net/ethernet/mellanox/mlx5/core/main.c     |   4 +-
 .../net/ethernet/mellanox/mlx5/core/mlx5_core.h    |   3 +
 drivers/net/ethernet/mellanox/mlx5/core/sriov.c    |  11 +-
 .../mellanox/mlx5/core/steering/dr_matcher.c       |   2 +-
 13 files changed, 609 insertions(+), 160 deletions(-)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [net-next 1/8] net/mlx5: DR, Fix matcher builders select check
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 2/8] net/mlx5: Read num_vfs before disabling SR-IOV Saeed Mahameed
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Alex Vesker, Saeed Mahameed

From: Alex Vesker <valex@mellanox.com>

When selecting a matcher, ste_builder_arr will always evaluate as
true; instead, check that num_of_builders is set to validate the
selection.

Fixes: 667f264676c7 ("net/mlx5: DR, Support IPv4 and IPv6 mixed matcher")
Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
index 5db947df8763..c6548980daf0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_matcher.c
@@ -154,7 +154,7 @@ int mlx5dr_matcher_select_builders(struct mlx5dr_matcher *matcher,
 	nic_matcher->num_of_builders =
 		nic_matcher->num_of_builders_arr[outer_ipv][inner_ipv];
 
-	if (!nic_matcher->ste_builder) {
+	if (!nic_matcher->num_of_builders) {
 		mlx5dr_dbg(matcher->tbl->dmn,
 			   "Rule not supported on this matcher due to IP related fields\n");
 		return -EINVAL;
-- 
2.21.0



* [net-next 2/8] net/mlx5: Read num_vfs before disabling SR-IOV
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
  2019-11-12 17:13 ` [net-next 1/8] net/mlx5: DR, Fix matcher builders select check Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 3/8] net/mlx5: Remove redundant NULL initializations Saeed Mahameed
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Parav Pandit, Daniel Jurgens, Saeed Mahameed

From: Parav Pandit <parav@mellanox.com>

mlx5_device_disable_sriov() currently reads num_vfs from the PCI core.
However, when mlx5_device_disable_sriov() is executed, SR-IOV is
already disabled at the PCI level. Due to this, the disable_hca()
cleanup is not done during the SR-IOV disable flow.

mlx5_sriov_disable()
  pci_disable_sriov()
  mlx5_device_disable_sriov() <- num_vfs is zero here.

When SR-IOV enablement fails during mlx5_sriov_enable(), HCAs are left
in the enabled state because mlx5_device_disable_sriov() relies on
num_vfs from the PCI core.

mlx5_sriov_enable()
  mlx5_device_enable_sriov()
  pci_enable_sriov() <- Fails

Hence, to overcome the above issues:
(a) Read num_vfs before disabling SR-IOV and use it.
(b) Use the num_vfs given when enabling SR-IOV in the error unwinding
    path.
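
The ordering problem can be sketched as a small userspace model (this
is illustrative only, not driver code; the fake_pci_* helpers stand in
for the PCI core, which reports 0 VFs once SR-IOV is disabled):

```c
#include <assert.h>

static int pci_vfs;                     /* stands in for PCI core state */

static void fake_pci_enable_sriov(int num_vfs) { pci_vfs = num_vfs; }
static void fake_pci_disable_sriov(void)       { pci_vfs = 0; }
static int  fake_pci_num_vf(void)              { return pci_vfs; }

/* Buggy flow: num_vfs is read after the PCI-level disable, so the
 * per-VF cleanup loop would iterate zero times. */
static int buggy_disable_flow(void)
{
	fake_pci_disable_sriov();
	return fake_pci_num_vf();       /* 0: no cleanup happens */
}

/* Fixed flow: read num_vfs first, then disable, and pass the saved
 * count to the device-level cleanup. */
static int fixed_disable_flow(void)
{
	int num_vfs = fake_pci_num_vf();

	fake_pci_disable_sriov();
	return num_vfs;                 /* correct count for cleanup */
}
```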

Fixes: d886aba677a0 ("net/mlx5: Reduce dependency on enabled_vfs counter and num_vfs")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/sriov.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index f641f1336402..03f037811f1d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -108,10 +108,10 @@ static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
 	return 0;
 }
 
-static void mlx5_device_disable_sriov(struct mlx5_core_dev *dev, bool clear_vf)
+static void
+mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)
 {
 	struct mlx5_core_sriov *sriov = &dev->priv.sriov;
-	int num_vfs = pci_num_vf(dev->pdev);
 	int err;
 	int vf;
 
@@ -147,7 +147,7 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
-		mlx5_device_disable_sriov(dev, true);
+		mlx5_device_disable_sriov(dev, num_vfs, true);
 	}
 	return err;
 }
@@ -155,9 +155,10 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 static void mlx5_sriov_disable(struct pci_dev *pdev)
 {
 	struct mlx5_core_dev *dev  = pci_get_drvdata(pdev);
+	int num_vfs = pci_num_vf(dev->pdev);
 
 	pci_disable_sriov(pdev);
-	mlx5_device_disable_sriov(dev, true);
+	mlx5_device_disable_sriov(dev, num_vfs, true);
 }
 
 int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs)
@@ -192,7 +193,7 @@ void mlx5_sriov_detach(struct mlx5_core_dev *dev)
 	if (!mlx5_core_is_pf(dev))
 		return;
 
-	mlx5_device_disable_sriov(dev, false);
+	mlx5_device_disable_sriov(dev, pci_num_vf(dev->pdev), false);
 }
 
 static u16 mlx5_get_max_vfs(struct mlx5_core_dev *dev)
-- 
2.21.0



* [net-next 3/8] net/mlx5: Remove redundant NULL initializations
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
  2019-11-12 17:13 ` [net-next 1/8] net/mlx5: DR, Fix matcher builders select check Saeed Mahameed
  2019-11-12 17:13 ` [net-next 2/8] net/mlx5: Read num_vfs before disabling SR-IOV Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 4/8] net/mlx5e: Fix error flow cleanup in mlx5e_tc_tun_create_header_ipv4/6 Saeed Mahameed
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Eli Cohen, Vlad Buslov, Roi Dayan, Saeed Mahameed

From: Eli Cohen <eli@mellanox.com>

Initializing the neighbour pointers to NULL is not necessary: the
pointers are not used if an error is returned, and on success they are
always initialized.
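
The out-parameter contract behind this cleanup can be shown with a
minimal standalone sketch (illustrative only, not driver code; lookup()
and caller() are hypothetical names):

```c
#include <assert.h>

/* *out is written only on success, and the caller never reads it on
 * failure, so initializing the caller's local to NULL buys nothing. */
static int lookup(int ok, int **out)
{
	static int neigh = 42;          /* stands in for a neighbour */

	if (!ok)
		return -1;              /* *out left untouched on error */
	*out = &neigh;
	return 0;
}

static int caller(int ok)
{
	int *n;                         /* no "= NULL" needed */

	if (lookup(ok, &n))
		return -1;              /* n is never dereferenced here */
	return *n;                      /* initialized on every used path */
}
```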

Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
index 13af72556987..4f78efeb6ee8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
@@ -77,8 +77,8 @@ static int mlx5e_route_lookup_ipv4(struct mlx5e_priv *priv,
 				   struct neighbour **out_n,
 				   u8 *out_ttl)
 {
+	struct neighbour *n;
 	struct rtable *rt;
-	struct neighbour *n = NULL;
 
 #if IS_ENABLED(CONFIG_INET)
 	struct mlx5_core_dev *mdev = priv->mdev;
@@ -138,8 +138,8 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
 				   struct neighbour **out_n,
 				   u8 *out_ttl)
 {
-	struct neighbour *n = NULL;
 	struct dst_entry *dst;
+	struct neighbour *n;
 
 #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6)
 	int ret;
@@ -212,8 +212,8 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
 	int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
 	const struct ip_tunnel_key *tun_key = &e->tun_info->key;
 	struct net_device *out_dev, *route_dev;
-	struct neighbour *n = NULL;
 	struct flowi4 fl4 = {};
+	struct neighbour *n;
 	int ipv4_encap_size;
 	char *encap_header;
 	u8 nud_state, ttl;
@@ -328,9 +328,9 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
 	int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
 	const struct ip_tunnel_key *tun_key = &e->tun_info->key;
 	struct net_device *out_dev, *route_dev;
-	struct neighbour *n = NULL;
 	struct flowi6 fl6 = {};
 	struct ipv6hdr *ip6h;
+	struct neighbour *n;
 	int ipv6_encap_size;
 	char *encap_header;
 	u8 nud_state, ttl;
-- 
2.21.0



* [net-next 4/8] net/mlx5e: Fix error flow cleanup in mlx5e_tc_tun_create_header_ipv4/6
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
                   ` (2 preceding siblings ...)
  2019-11-12 17:13 ` [net-next 3/8] net/mlx5: Remove redundant NULL initializations Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 5/8] net/mlx5e: Set netdev name space on creation Saeed Mahameed
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Eli Cohen, Roi Dayan, Vlad Buslov, Saeed Mahameed

From: Eli Cohen <eli@mellanox.com>

Be sure to release the neighbour in case of failures occurring after a
successful route lookup.
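
The goto-based cleanup pattern the fix introduces can be modeled with a
simple reference counter (illustrative only, not driver code; the
fake_neigh_* helpers stand in for the neighbour refcounting):

```c
#include <assert.h>

static int neigh_refcnt;

static void fake_neigh_hold(void)    { neigh_refcnt++; }
static void fake_neigh_release(void) { neigh_refcnt--; }

/* Once the route lookup has taken a reference on the neighbour, every
 * error path must drop it via a common "out" label instead of
 * returning directly. */
static int create_header(int encap_too_big)
{
	int err = 0;

	fake_neigh_hold();              /* taken by the route lookup */

	if (encap_too_big) {
		err = -95;              /* -EOPNOTSUPP */
		goto out;               /* was: direct return (leak) */
	}
	/* ... build and offload the encap header ... */
out:
	fake_neigh_release();           /* balanced on every path */
	return err;
}
```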

Signed-off-by: Eli Cohen <eli@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../ethernet/mellanox/mlx5/core/en/tc_tun.c    | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
index 4f78efeb6ee8..5316cedd78bf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
@@ -239,12 +239,15 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
 	if (max_encap_size < ipv4_encap_size) {
 		mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
 			       ipv4_encap_size, max_encap_size);
-		return -EOPNOTSUPP;
+		err = -EOPNOTSUPP;
+		goto out;
 	}
 
 	encap_header = kzalloc(ipv4_encap_size, GFP_KERNEL);
-	if (!encap_header)
-		return -ENOMEM;
+	if (!encap_header) {
+		err = -ENOMEM;
+		goto out;
+	}
 
 	/* used by mlx5e_detach_encap to lookup a neigh hash table
 	 * entry in the neigh hash table when a user deletes a rule
@@ -355,12 +358,15 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
 	if (max_encap_size < ipv6_encap_size) {
 		mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
 			       ipv6_encap_size, max_encap_size);
-		return -EOPNOTSUPP;
+		err = -EOPNOTSUPP;
+		goto out;
 	}
 
 	encap_header = kzalloc(ipv6_encap_size, GFP_KERNEL);
-	if (!encap_header)
-		return -ENOMEM;
+	if (!encap_header) {
+		err = -ENOMEM;
+		goto out;
+	}
 
 	/* used by mlx5e_detach_encap to lookup a neigh hash table
 	 * entry in the neigh hash table when a user deletes a rule
-- 
2.21.0



* [net-next 5/8] net/mlx5e: Set netdev name space on creation
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
                   ` (3 preceding siblings ...)
  2019-11-12 17:13 ` [net-next 4/8] net/mlx5e: Fix error flow cleanup in mlx5e_tc_tun_create_header_ipv4/6 Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 6/8] net/mlx5: Add devlink reload Saeed Mahameed
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Michael Guralnik, Jiri Pirko, Saeed Mahameed

From: Michael Guralnik <michaelgur@mellanox.com>

Use the devlink instance's net namespace to set the netdev net
namespace.

This is a preparation patch for the devlink reload implementation.

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  | 2 ++
 drivers/net/ethernet/mellanox/mlx5/core/en_rep.c   | 2 ++
 drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h | 5 +++++
 3 files changed, 9 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 772bfdbdeb9c..06a592fb62bf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -63,6 +63,7 @@
 #include "en/xsk/rx.h"
 #include "en/xsk/tx.h"
 #include "en/hv_vhca_stats.h"
+#include "lib/mlx5.h"
 
 
 bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev)
@@ -5427,6 +5428,7 @@ static void *mlx5e_add(struct mlx5_core_dev *mdev)
 		return NULL;
 	}
 
+	dev_net_set(netdev, mlx5_core_net(mdev));
 	priv = netdev_priv(netdev);
 
 	err = mlx5e_attach(mdev, priv);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index cd9bb7c7b341..c7f98f1fd9b1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -47,6 +47,7 @@
 #include "en/tc_tun.h"
 #include "fs_core.h"
 #include "lib/port_tun.h"
+#include "lib/mlx5.h"
 #define CREATE_TRACE_POINTS
 #include "diag/en_rep_tracepoint.h"
 
@@ -1877,6 +1878,7 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 		return -EINVAL;
 	}
 
+	dev_net_set(netdev, mlx5_core_net(dev));
 	rpriv->netdev = netdev;
 	rep->rep_data[REP_ETH].priv = rpriv;
 	INIT_LIST_HEAD(&rpriv->vport_sqs_list);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
index b99d469e4e64..249539247e2e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
@@ -84,4 +84,9 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
 			       void *key, u32 sz_bytes, u32 *p_key_id);
 void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
 
+static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
+{
+	return devlink_net(priv_to_devlink(dev));
+}
+
 #endif
-- 
2.21.0



* [net-next 6/8] net/mlx5: Add devlink reload
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
                   ` (4 preceding siblings ...)
  2019-11-12 17:13 ` [net-next 5/8] net/mlx5e: Set netdev name space on creation Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 7/8] net/mlx5: Add eswitch ACL vlan trunk api Saeed Mahameed
  2019-11-12 17:13 ` [net-next 8/8] net/mlx5: Add vf ACL access via tc flower Saeed Mahameed
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Michael Guralnik, Jiri Pirko, Saeed Mahameed

From: Michael Guralnik <michaelgur@mellanox.com>

Implement devlink reload for mlx5.

Usage example:
devlink dev reload pci/0000:06:00.0

Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/devlink.c | 20 +++++++++++++++++++
 .../net/ethernet/mellanox/mlx5/core/main.c    |  4 ++--
 .../ethernet/mellanox/mlx5/core/mlx5_core.h   |  3 +++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
index b2c26388edb1..ac108f1e5bd6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
@@ -85,6 +85,22 @@ mlx5_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req,
 	return 0;
 }
 
+static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
+				    struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+	return mlx5_unload_one(dev, false);
+}
+
+static int mlx5_devlink_reload_up(struct devlink *devlink,
+				  struct netlink_ext_ack *extack)
+{
+	struct mlx5_core_dev *dev = devlink_priv(devlink);
+
+	return mlx5_load_one(dev, false);
+}
+
 static const struct devlink_ops mlx5_devlink_ops = {
 #ifdef CONFIG_MLX5_ESWITCH
 	.eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
@@ -96,6 +112,8 @@ static const struct devlink_ops mlx5_devlink_ops = {
 #endif
 	.flash_update = mlx5_devlink_flash_update,
 	.info_get = mlx5_devlink_info_get,
+	.reload_down = mlx5_devlink_reload_down,
+	.reload_up = mlx5_devlink_reload_up,
 };
 
 struct devlink *mlx5_devlink_alloc(void)
@@ -235,6 +253,7 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
 		goto params_reg_err;
 	mlx5_devlink_set_params_init_values(devlink);
 	devlink_params_publish(devlink);
+	devlink_reload_enable(devlink);
 	return 0;
 
 params_reg_err:
@@ -244,6 +263,7 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
 
 void mlx5_devlink_unregister(struct devlink *devlink)
 {
+	devlink_reload_disable(devlink);
 	devlink_params_unregister(devlink, mlx5_devlink_params,
 				  ARRAY_SIZE(mlx5_devlink_params));
 	devlink_unregister(devlink);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index c9a091d3226c..31fbfd6e8bb9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1168,7 +1168,7 @@ static void mlx5_unload(struct mlx5_core_dev *dev)
 	mlx5_put_uars_page(dev, dev->priv.uar);
 }
 
-static int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
+int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
 {
 	int err = 0;
 
@@ -1226,7 +1226,7 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
 	return err;
 }
 
-static int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup)
+int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup)
 {
 	if (cleanup) {
 		mlx5_unregister_device(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index b100489dc85c..da67b28d6e23 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -243,4 +243,7 @@ enum {
 
 u8 mlx5_get_nic_state(struct mlx5_core_dev *dev);
 void mlx5_set_nic_state(struct mlx5_core_dev *dev, u8 state);
+
+int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup);
+int mlx5_load_one(struct mlx5_core_dev *dev, bool boot);
 #endif /* __MLX5_CORE_H__ */
-- 
2.21.0



* [net-next 7/8] net/mlx5: Add eswitch ACL vlan trunk api
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
                   ` (5 preceding siblings ...)
  2019-11-12 17:13 ` [net-next 6/8] net/mlx5: Add devlink reload Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 17:13 ` [net-next 8/8] net/mlx5: Add vf ACL access via tc flower Saeed Mahameed
  7 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Ariel Levkovich, Jakub Kicinski, Saeed Mahameed

From: Ariel Levkovich <lariel@mellanox.com>

This patch adds an API to add/remove a vlan id to/from the eswitch
ACL tables, per vport.

With this API, the driver can add and remove vlan ids in the vport's
ACL table and thereby control the vlan ids the vport can receive and
transmit.

By default, the eswitch configuration allows the vport to receive and
transmit all vlan ids.

Once the first vlan id is added to the ACL table using this API for a
given direction (ingress/egress), all other vlan ids are blocked in
that direction and must also be added to the table if required by the
vport.
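
The allow-list semantics described above can be sketched as a small
userspace model (illustrative only, not the driver API; struct trunk
and the trunk_* helpers are hypothetical names):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define VLAN_N_VID 4096

/* With an empty trunk every vlan id passes; once the first vlan id is
 * added, the trunk switches to allow-list mode and only listed ids
 * pass. */
struct trunk {
	unsigned char allowed[VLAN_N_VID];
	int count;
};

static void trunk_add(struct trunk *t, int vid)
{
	if (!t->allowed[vid]) {
		t->allowed[vid] = 1;
		t->count++;
	}
}

static bool trunk_permits(const struct trunk *t, int vid)
{
	if (!t->count)
		return true;            /* default: all vlans allowed */
	return t->allowed[vid];         /* allow-list mode */
}
```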

Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 498 +++++++++++++-----
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  29 +-
 .../mellanox/mlx5/core/eswitch_offloads.c     |   8 +-
 3 files changed, 400 insertions(+), 135 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index f2e400a23a59..41d67ca6cce3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -58,6 +58,11 @@ struct vport_addr {
 	bool mc_promisc;
 };
 
+struct mlx5_acl_vlan {
+	struct mlx5_flow_handle	*acl_vlan_rule;
+	struct list_head list;
+};
+
 static void esw_destroy_legacy_fdb_table(struct mlx5_eswitch *esw);
 static void esw_cleanup_vepa_rules(struct mlx5_eswitch *esw);
 
@@ -950,6 +955,7 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 				struct mlx5_vport *vport)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_group *untagged_grp = NULL;
 	struct mlx5_flow_group *vlan_grp = NULL;
 	struct mlx5_flow_group *drop_grp = NULL;
 	struct mlx5_core_dev *dev = esw->dev;
@@ -957,11 +963,7 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 	struct mlx5_flow_table *acl;
 	void *match_criteria;
 	u32 *flow_group_in;
-	/* The egress acl table contains 2 rules:
-	 * 1)Allow traffic with vlan_tag=vst_vlan_id
-	 * 2)Drop all other traffic.
-	 */
-	int table_size = 2;
+	int table_size;
 	int err = 0;
 
 	if (!MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support))
@@ -984,6 +986,13 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 	if (!flow_group_in)
 		return -ENOMEM;
 
+	/* The egress acl table contains 3 groups:
+	 * 1)Allow tagged traffic with vlan_tag=vst_vlan_id/vgt+_vlan_id
+	 * 2)Allow untagged traffic
+	 * 3)Drop all other traffic
+	 */
+	table_size = VLAN_N_VID + 2;
+
 	acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
 	if (IS_ERR(acl)) {
 		err = PTR_ERR(acl);
@@ -994,11 +1003,25 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+
+	/* Create flow group for allowed untagged flow rule */
 	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
 
+	untagged_grp = mlx5_create_flow_group(acl, flow_group_in);
+	if (IS_ERR(untagged_grp)) {
+		err = PTR_ERR(untagged_grp);
+		esw_warn(dev, "Failed to create E-Switch vport[%d] egress untagged flow group, err(%d)\n",
+			 vport->vport, err);
+		goto out;
+	}
+
+	/* Create flow group for allowed tagged flow rules */
+	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, VLAN_N_VID);
+
 	vlan_grp = mlx5_create_flow_group(acl, flow_group_in);
 	if (IS_ERR(vlan_grp)) {
 		err = PTR_ERR(vlan_grp);
@@ -1007,9 +1030,10 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 		goto out;
 	}
 
+	/* Create flow group for drop rule */
 	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, VLAN_N_VID + 1);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, VLAN_N_VID + 1);
 	drop_grp = mlx5_create_flow_group(acl, flow_group_in);
 	if (IS_ERR(drop_grp)) {
 		err = PTR_ERR(drop_grp);
@@ -1021,27 +1045,45 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 	vport->egress.acl = acl;
 	vport->egress.drop_grp = drop_grp;
 	vport->egress.allowed_vlans_grp = vlan_grp;
+	vport->egress.allow_untagged_grp = untagged_grp;
+
 out:
+	if (err) {
+		if (!IS_ERR_OR_NULL(vlan_grp))
+			mlx5_destroy_flow_group(vlan_grp);
+		if (!IS_ERR_OR_NULL(untagged_grp))
+			mlx5_destroy_flow_group(untagged_grp);
+		if (!IS_ERR_OR_NULL(acl))
+			mlx5_destroy_flow_table(acl);
+	}
 	kvfree(flow_group_in);
-	if (err && !IS_ERR_OR_NULL(vlan_grp))
-		mlx5_destroy_flow_group(vlan_grp);
-	if (err && !IS_ERR_OR_NULL(acl))
-		mlx5_destroy_flow_table(acl);
 	return err;
 }
 
 void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport)
 {
-	if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan)) {
-		mlx5_del_flow_rules(vport->egress.allowed_vlan);
-		vport->egress.allowed_vlan = NULL;
+	struct mlx5_acl_vlan *trunk_vlan_rule, *tmp;
+
+	if (!IS_ERR_OR_NULL(vport->egress.allowed_vst_vlan)) {
+		mlx5_del_flow_rules(vport->egress.allowed_vst_vlan);
+		vport->egress.allowed_vst_vlan = NULL;
+	}
+
+	list_for_each_entry_safe(trunk_vlan_rule, tmp,
+				 &vport->egress.legacy.allowed_vlans_rules, list) {
+		mlx5_del_flow_rules(trunk_vlan_rule->acl_vlan_rule);
+		list_del(&trunk_vlan_rule->list);
+		kfree(trunk_vlan_rule);
 	}
 
 	if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_rule)) {
 		mlx5_del_flow_rules(vport->egress.legacy.drop_rule);
 		vport->egress.legacy.drop_rule = NULL;
 	}
+
+	if (!IS_ERR_OR_NULL(vport->egress.legacy.allow_untagged_rule))
+		mlx5_del_flow_rules(vport->egress.legacy.allow_untagged_rule);
 }
 
 void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
@@ -1053,9 +1095,11 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
 	esw_debug(esw->dev, "Destroy vport[%d] E-Switch egress ACL\n", vport->vport);
 
 	esw_vport_cleanup_egress_rules(esw, vport);
+	mlx5_destroy_flow_group(vport->egress.allow_untagged_grp);
 	mlx5_destroy_flow_group(vport->egress.allowed_vlans_grp);
 	mlx5_destroy_flow_group(vport->egress.drop_grp);
 	mlx5_destroy_flow_table(vport->egress.acl);
+	vport->egress.allow_untagged_grp = NULL;
 	vport->egress.allowed_vlans_grp = NULL;
 	vport->egress.drop_grp = NULL;
 	vport->egress.acl = NULL;
@@ -1065,12 +1109,18 @@ static int
 esw_vport_create_legacy_ingress_acl_groups(struct mlx5_eswitch *esw,
 					   struct mlx5_vport *vport)
 {
+	bool need_vlan_filter = !!bitmap_weight(vport->ingress.legacy.acl_vlan_8021q_bitmap,
+						VLAN_N_VID);
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_group *untagged_spoof_grp = NULL;
+	struct mlx5_flow_table *acl = vport->ingress.acl;
+	struct mlx5_flow_group *tagged_spoof_grp = NULL;
+	struct mlx5_flow_group *drop_grp = NULL;
 	struct mlx5_core_dev *dev = esw->dev;
-	struct mlx5_flow_group *g;
 	void *match_criteria;
+	int allow_grp_sz = 0;
 	u32 *flow_group_in;
-	int err;
+	int err = 0;
 
 	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
 	if (!flow_group_in)
@@ -1079,83 +1129,68 @@ esw_vport_create_legacy_ingress_acl_groups(struct mlx5_eswitch *esw,
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
 
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+
+	if (vport->info.vlan || vport->info.qos || need_vlan_filter)
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
+
+	if (vport->info.spoofchk) {
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+	}
+
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
 
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
+	untagged_spoof_grp = mlx5_create_flow_group(acl, flow_group_in);
+	if (IS_ERR(untagged_spoof_grp)) {
+		err = PTR_ERR(untagged_spoof_grp);
 		esw_warn(dev, "vport[%d] ingress create untagged spoofchk flow group, err(%d)\n",
 			 vport->vport, err);
-		goto spoof_err;
+		goto out;
 	}
-	vport->ingress.legacy.allow_untagged_spoofchk_grp = g;
+	allow_grp_sz += 1;
 
-	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
+	if (!need_vlan_filter)
+		goto drop_grp;
+
+	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, VLAN_N_VID);
 
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
-		esw_warn(dev, "vport[%d] ingress create untagged flow group, err(%d)\n",
+	tagged_spoof_grp = mlx5_create_flow_group(acl, flow_group_in);
+	if (IS_ERR(tagged_spoof_grp)) {
+		err = PTR_ERR(tagged_spoof_grp);
+		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress tagged spoofchk flow group, err(%d)\n",
 			 vport->vport, err);
-		goto untagged_err;
+		goto out;
 	}
-	vport->ingress.legacy.allow_untagged_only_grp = g;
+	allow_grp_sz += VLAN_N_VID;
 
+drop_grp:
 	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
-	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, allow_grp_sz);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, allow_grp_sz);
 
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
-		esw_warn(dev, "vport[%d] ingress create spoofchk flow group, err(%d)\n",
+	drop_grp = mlx5_create_flow_group(acl, flow_group_in);
+	if (IS_ERR(drop_grp)) {
+		err = PTR_ERR(drop_grp);
+		esw_warn(dev, "vport[%d] ingress create drop flow group, err(%d)\n",
 			 vport->vport, err);
-		goto allow_spoof_err;
+		goto out;
 	}
-	vport->ingress.legacy.allow_spoofchk_only_grp = g;
 
-	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
+	vport->ingress.legacy.allow_untagged_spoofchk_grp = untagged_spoof_grp;
+	vport->ingress.legacy.allow_tagged_spoofchk_grp = tagged_spoof_grp;
+	vport->ingress.legacy.drop_grp = drop_grp;
 
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
-		esw_warn(dev, "vport[%d] ingress create drop flow group, err(%d)\n",
-			 vport->vport, err);
-		goto drop_err;
+out:
+	if (err) {
+		if (!IS_ERR_OR_NULL(tagged_spoof_grp))
+			mlx5_destroy_flow_group(tagged_spoof_grp);
+		if (!IS_ERR_OR_NULL(untagged_spoof_grp))
+			mlx5_destroy_flow_group(untagged_spoof_grp);
 	}
-	vport->ingress.legacy.drop_grp = g;
-	kvfree(flow_group_in);
-	return 0;
 
-drop_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_spoofchk_only_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
-		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
-	}
-allow_spoof_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_only_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
-		vport->ingress.legacy.allow_untagged_only_grp = NULL;
-	}
-untagged_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_spoofchk_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
-		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
-	}
-spoof_err:
 	kvfree(flow_group_in);
 	return err;
 }
@@ -1207,14 +1242,23 @@ void esw_vport_destroy_ingress_acl_table(struct mlx5_vport *vport)
 void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport)
 {
+	struct mlx5_acl_vlan *trunk_vlan_rule, *tmp;
+
 	if (vport->ingress.legacy.drop_rule) {
 		mlx5_del_flow_rules(vport->ingress.legacy.drop_rule);
 		vport->ingress.legacy.drop_rule = NULL;
 	}
 
-	if (vport->ingress.allow_rule) {
-		mlx5_del_flow_rules(vport->ingress.allow_rule);
-		vport->ingress.allow_rule = NULL;
+	list_for_each_entry_safe(trunk_vlan_rule, tmp,
+				 &vport->ingress.legacy.allowed_vlans_rules, list) {
+		mlx5_del_flow_rules(trunk_vlan_rule->acl_vlan_rule);
+		list_del(&trunk_vlan_rule->list);
+		kfree(trunk_vlan_rule);
+	}
+
+	if (vport->ingress.allow_untagged_rule) {
+		mlx5_del_flow_rules(vport->ingress.allow_untagged_rule);
+		vport->ingress.allow_untagged_rule = NULL;
 	}
 }
 
@@ -1227,18 +1271,14 @@ static void esw_vport_disable_legacy_ingress_acl(struct mlx5_eswitch *esw,
 	esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
 
 	esw_vport_cleanup_ingress_rules(esw, vport);
-	if (vport->ingress.legacy.allow_spoofchk_only_grp) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
-		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
-	}
-	if (vport->ingress.legacy.allow_untagged_only_grp) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
-		vport->ingress.legacy.allow_untagged_only_grp = NULL;
-	}
 	if (vport->ingress.legacy.allow_untagged_spoofchk_grp) {
 		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
 		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
 	}
+	if (vport->ingress.legacy.allow_tagged_spoofchk_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_tagged_spoofchk_grp);
+		vport->ingress.legacy.allow_tagged_spoofchk_grp = NULL;
+	}
 	if (vport->ingress.legacy.drop_grp) {
 		mlx5_destroy_flow_group(vport->ingress.legacy.drop_grp);
 		vport->ingress.legacy.drop_grp = NULL;
@@ -1252,30 +1292,45 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 	struct mlx5_fc *counter = vport->ingress.legacy.drop_counter;
 	struct mlx5_flow_destination drop_ctr_dst = {0};
 	struct mlx5_flow_destination *dst = NULL;
+	struct mlx5_acl_vlan *trunk_vlan_rule;
 	struct mlx5_flow_act flow_act = {0};
 	struct mlx5_flow_spec *spec = NULL;
+	bool need_vlan_filter;
+	bool need_acl_table;
 	int dest_num = 0;
+	u16 vlan_id = 0;
+	int table_size;
 	int err = 0;
 	u8 *smac_v;
 
-	/* The ingress acl table contains 4 groups
-	 * (2 active rules at the same time -
-	 *      1 allow rule from one of the first 3 groups.
-	 *      1 drop rule from the last group):
-	 * 1)Allow untagged traffic with smac=original mac.
-	 * 2)Allow untagged traffic.
-	 * 3)Allow traffic with smac=original mac.
-	 * 4)Drop all other traffic.
-	 */
-	int table_size = 4;
-
 	esw_vport_cleanup_ingress_rules(esw, vport);
 
-	if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk) {
+	need_vlan_filter = !!bitmap_weight(vport->ingress.legacy.acl_vlan_8021q_bitmap,
+					   VLAN_N_VID);
+	need_acl_table = vport->info.vlan || vport->info.qos ||
+			 vport->info.spoofchk || need_vlan_filter;
+
+	if (!need_acl_table) {
 		esw_vport_disable_legacy_ingress_acl(esw, vport);
 		return 0;
 	}
 
+	if ((vport->info.vlan || vport->info.qos) && need_vlan_filter) {
+		mlx5_core_warn(esw->dev,
+			       "vport[%d] configure ingress rules failed, Cannot enable both VGT+ and VST\n",
+			       vport->vport);
+		return -EPERM;
+	}
+
+	/* The ingress acl table contains 3 groups
+	 * (2 active rules at the same time -
+	 *	1 allow rule from one of the first 2 groups.
+	 *	1 drop rule from the last group):
+	 * 1)Allow untagged traffic with/without smac=original mac.
+	 * 2)Allow tagged (VLAN trunk list) traffic with/without smac=original mac.
+	 * 3)Drop all other traffic.
+	 */
+	table_size = need_vlan_filter ? 8192 : 4;
 	if (!vport->ingress.acl) {
 		err = esw_vport_create_ingress_acl_table(esw, vport, table_size);
 		if (err) {
@@ -1300,7 +1355,10 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		goto out;
 	}
 
-	if (vport->info.vlan || vport->info.qos)
+	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
+
+	if (vport->info.vlan || vport->info.qos || need_vlan_filter)
 		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
 
 	if (vport->info.spoofchk) {
@@ -1312,20 +1370,52 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		ether_addr_copy(smac_v, vport->info.mac);
 	}
 
-	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
-	vport->ingress.allow_rule =
-		mlx5_add_flow_rules(vport->ingress.acl, spec,
-				    &flow_act, NULL, 0);
-	if (IS_ERR(vport->ingress.allow_rule)) {
-		err = PTR_ERR(vport->ingress.allow_rule);
-		esw_warn(esw->dev,
-			 "vport[%d] configure ingress allow rule, err(%d)\n",
-			 vport->vport, err);
-		vport->ingress.allow_rule = NULL;
-		goto out;
+	/* Allow untagged */
+	if (!need_vlan_filter ||
+	    test_bit(0, vport->ingress.legacy.acl_vlan_8021q_bitmap)) {
+		vport->ingress.allow_untagged_rule =
+			mlx5_add_flow_rules(vport->ingress.acl, spec,
+					    &flow_act, NULL, 0);
+		if (IS_ERR(vport->ingress.allow_untagged_rule)) {
+			err = PTR_ERR(vport->ingress.allow_untagged_rule);
+			esw_warn(esw->dev,
+				 "vport[%d] configure ingress allow rule, err(%d)\n",
+				 vport->vport, err);
+			vport->ingress.allow_untagged_rule = NULL;
+			goto out;
+		}
 	}
 
+	if (!need_vlan_filter)
+		goto drop_rule;
+
+	/* Allow tagged (VLAN trunk list) */
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
+
+	for_each_set_bit(vlan_id, vport->ingress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID) {
+		trunk_vlan_rule = kzalloc(sizeof(*trunk_vlan_rule), GFP_KERNEL);
+		if (!trunk_vlan_rule) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid,
+			 vlan_id);
+		trunk_vlan_rule->acl_vlan_rule =
+			mlx5_add_flow_rules(vport->ingress.acl, spec, &flow_act, NULL, 0);
+		if (IS_ERR(trunk_vlan_rule->acl_vlan_rule)) {
+			err = PTR_ERR(trunk_vlan_rule->acl_vlan_rule);
+			esw_warn(esw->dev,
+				 "vport[%d] configure ingress allowed vlan rule failed, err(%d)\n",
+				 vport->vport, err);
+			kfree(trunk_vlan_rule);
+			continue;
+		}
+		list_add(&trunk_vlan_rule->list, &vport->ingress.legacy.allowed_vlans_rules);
+	}
+
+drop_rule:
 	memset(spec, 0, sizeof(*spec));
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 
@@ -1348,11 +1438,11 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		vport->ingress.legacy.drop_rule = NULL;
 		goto out;
 	}
-	kvfree(spec);
-	return 0;
 
 out:
-	esw_vport_disable_legacy_ingress_acl(esw, vport);
+	if (err)
+		esw_vport_disable_legacy_ingress_acl(esw, vport);
+
 	kvfree(spec);
 	return err;
 }
@@ -1365,7 +1455,7 @@ int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
 	struct mlx5_flow_spec *spec;
 	int err = 0;
 
-	if (vport->egress.allowed_vlan)
+	if (vport->egress.allowed_vst_vlan)
 		return -EEXIST;
 
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
@@ -1379,15 +1469,15 @@ int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
 
 	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
 	flow_act.action = flow_action;
-	vport->egress.allowed_vlan =
+	vport->egress.allowed_vst_vlan =
 		mlx5_add_flow_rules(vport->egress.acl, spec,
 				    &flow_act, NULL, 0);
-	if (IS_ERR(vport->egress.allowed_vlan)) {
-		err = PTR_ERR(vport->egress.allowed_vlan);
+	if (IS_ERR(vport->egress.allowed_vst_vlan)) {
+		err = PTR_ERR(vport->egress.allowed_vst_vlan);
 		esw_warn(esw->dev,
 			 "vport[%d] configure egress vlan rule failed, err(%d)\n",
 			 vport->vport, err);
-		vport->egress.allowed_vlan = NULL;
+		vport->egress.allowed_vst_vlan = NULL;
 	}
 
 	kvfree(spec);
@@ -1400,14 +1490,21 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 	struct mlx5_fc *counter = vport->egress.legacy.drop_counter;
 	struct mlx5_flow_destination drop_ctr_dst = {0};
 	struct mlx5_flow_destination *dst = NULL;
+	struct mlx5_acl_vlan *trunk_vlan_rule;
 	struct mlx5_flow_act flow_act = {0};
 	struct mlx5_flow_spec *spec;
+	bool need_vlan_filter;
+	bool need_acl_table;
 	int dest_num = 0;
+	u16 vlan_id = 0;
 	int err = 0;
 
 	esw_vport_cleanup_egress_rules(esw, vport);
 
-	if (!vport->info.vlan && !vport->info.qos) {
+	need_vlan_filter = !!bitmap_weight(vport->egress.legacy.acl_vlan_8021q_bitmap,
+					   VLAN_N_VID);
+	need_acl_table = vport->info.vlan || vport->info.qos || need_vlan_filter;
+	if (!need_acl_table) {
 		esw_vport_disable_egress_acl(esw, vport);
 		return 0;
 	}
@@ -1424,17 +1521,67 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 		  "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
 		  vport->vport, vport->info.vlan, vport->info.qos);
 
-	/* Allowed vlan rule */
-	err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
-						    MLX5_FLOW_CONTEXT_ACTION_ALLOW);
-	if (err)
-		return err;
-
-	/* Drop others rule (star rule) */
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
-	if (!spec)
+	if (!spec) {
+		err = -ENOMEM;
 		goto out;
+	}
+
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
+	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
+
+	/* Allow untagged */
+	if (need_vlan_filter && test_bit(0, vport->egress.legacy.acl_vlan_8021q_bitmap)) {
+		vport->egress.legacy.allow_untagged_rule =
+			mlx5_add_flow_rules(vport->egress.acl, spec,
+					    &flow_act, NULL, 0);
+		if (IS_ERR(vport->egress.legacy.allow_untagged_rule)) {
+			err = PTR_ERR(vport->egress.legacy.allow_untagged_rule);
+			esw_warn(esw->dev,
+				 "vport[%d] configure egress allow rule, err(%d)\n",
+				 vport->vport, err);
+			vport->egress.legacy.allow_untagged_rule = NULL;
+		}
+	}
 
+	/* VST rule */
+	if (vport->info.vlan || vport->info.qos) {
+		err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
+							    MLX5_FLOW_CONTEXT_ACTION_ALLOW);
+		if (err)
+			goto out;
+	}
+
+	/* Allowed trunk vlans rules (VGT+) */
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
+
+	for_each_set_bit(vlan_id, vport->egress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID) {
+		trunk_vlan_rule = kzalloc(sizeof(*trunk_vlan_rule), GFP_KERNEL);
+		if (!trunk_vlan_rule) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid,
+			 vlan_id);
+		trunk_vlan_rule->acl_vlan_rule =
+			mlx5_add_flow_rules(vport->egress.acl, spec, &flow_act, NULL, 0);
+		if (IS_ERR(trunk_vlan_rule->acl_vlan_rule)) {
+			err = PTR_ERR(trunk_vlan_rule->acl_vlan_rule);
+			esw_warn(esw->dev,
+				 "vport[%d] configure egress allowed vlan rule failed, err(%d)\n",
+				 vport->vport, err);
+			kfree(trunk_vlan_rule);
+			continue;
+		}
+		list_add(&trunk_vlan_rule->list, &vport->egress.legacy.allowed_vlans_rules);
+	}
+
+	/* Drop others rule (star rule) */
+
+	memset(spec, 0, sizeof(*spec));
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 
 	/* Attach egress drop flow counter */
@@ -1455,7 +1602,11 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 			 vport->vport, err);
 		vport->egress.legacy.drop_rule = NULL;
 	}
+
 out:
+	if (err)
+		esw_vport_cleanup_egress_rules(esw, vport);
+
 	kvfree(spec);
 	return err;
 }
@@ -1787,6 +1938,11 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 
 	esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
 
+	bitmap_zero(vport->ingress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID);
+	bitmap_zero(vport->egress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID);
+	INIT_LIST_HEAD(&vport->egress.legacy.allowed_vlans_rules);
+	INIT_LIST_HEAD(&vport->ingress.legacy.allowed_vlans_rules);
+
 	/* Restore old vport configuration */
 	esw_apply_vport_conf(esw, vport);
 
@@ -2268,6 +2424,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 	ivi->trusted = evport->info.trusted;
 	ivi->min_tx_rate = evport->info.min_rate;
 	ivi->max_tx_rate = evport->info.max_rate;
+
 	mutex_unlock(&esw->state_lock);
 
 	return 0;
@@ -2286,6 +2443,15 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
 	if (vlan > 4095 || qos > 7)
 		return -EINVAL;
 
+	if (bitmap_weight(evport->ingress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID) ||
+	    bitmap_weight(evport->egress.legacy.acl_vlan_8021q_bitmap, VLAN_N_VID)) {
+		err = -EPERM;
+		mlx5_core_warn(esw->dev,
+			       "VST is not allowed when operating in VGT+ mode vport(%d)\n",
+			       vport);
+		return err;
+	}
+
 	err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags);
 	if (err)
 		return err;
@@ -2628,6 +2794,84 @@ static int mlx5_eswitch_query_vport_drop_stats(struct mlx5_core_dev *dev,
 	return 0;
 }
 
+int mlx5_eswitch_add_vport_trunk_vlan(struct mlx5_eswitch *esw,
+				      u16 vport, u16 vlanid,
+				      enum mlx5_acl_flow_direction dir)
+{
+	struct mlx5_vport *evport;
+	unsigned long *vlan_bm;
+	int err = 0;
+
+	if (!ESW_ALLOWED(esw))
+		return -EPERM;
+
+	mutex_lock(&esw->state_lock);
+
+	evport = mlx5_eswitch_get_vport(esw, vport);
+	if (IS_ERR(evport) || vlanid >= VLAN_N_VID) {
+		err = -EINVAL;
+		goto unlock;
+	}
+
+	if (evport->info.vlan || evport->info.qos) {
+		err = -EPERM;
+		mlx5_core_warn(esw->dev,
+			       "VGT+ is not allowed when operating in VST mode vport(%d)\n",
+			       vport);
+		goto unlock;
+	}
+
+	vlan_bm = (dir == MLX5_ACL_EGRESS) ? evport->egress.legacy.acl_vlan_8021q_bitmap :
+					     evport->ingress.legacy.acl_vlan_8021q_bitmap;
+	if (!test_bit(vlanid, vlan_bm)) {
+		bitmap_set(vlan_bm, vlanid, 1);
+		err = (dir == MLX5_ACL_EGRESS) ? esw_vport_egress_config(esw, evport) :
+						 esw_vport_ingress_config(esw, evport);
+	}
+
+	if (err)
+		bitmap_clear(vlan_bm, vlanid, 1);
+
+unlock:
+	mutex_unlock(&esw->state_lock);
+
+	return err;
+}
+
+int mlx5_eswitch_del_vport_trunk_vlan(struct mlx5_eswitch *esw,
+				      u16 vport, u16 vlanid,
+				      enum mlx5_acl_flow_direction dir)
+{
+	struct mlx5_vport *evport;
+	unsigned long *vlan_bm;
+	int err = 0;
+
+	if (!ESW_ALLOWED(esw))
+		return -EPERM;
+
+	mutex_lock(&esw->state_lock);
+
+	evport = mlx5_eswitch_get_vport(esw, vport);
+
+	if (IS_ERR(evport) || vlanid >= VLAN_N_VID) {
+		err = -EINVAL;
+		goto unlock;
+	}
+
+	vlan_bm = (dir == MLX5_ACL_EGRESS) ? evport->egress.legacy.acl_vlan_8021q_bitmap :
+					     evport->ingress.legacy.acl_vlan_8021q_bitmap;
+	if (test_bit(vlanid, vlan_bm)) {
+		bitmap_clear(vlan_bm, vlanid, 1);
+		err = (dir == MLX5_ACL_EGRESS) ? esw_vport_egress_config(esw, evport) :
+						 esw_vport_ingress_config(esw, evport);
+	}
+
+unlock:
+	mutex_unlock(&esw->state_lock);
+
+	return err;
+}
+
 int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
 				 u16 vport_num,
 				 struct ifla_vf_stats *vf_stats)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 920d8f529fb9..5c84dbdb300f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -35,6 +35,8 @@
 
 #include <linux/if_ether.h>
 #include <linux/if_link.h>
+#include <linux/if_vlan.h>
+#include <linux/bitmap.h>
 #include <linux/atomic.h>
 #include <net/devlink.h>
 #include <linux/mlx5/device.h>
@@ -51,6 +53,9 @@
 #define MLX5_MAX_MC_PER_VPORT(dev) \
 	(1 << MLX5_CAP_GEN(dev, log_max_current_mc_list))
 
+#define MLX5_MAX_VLAN_PER_VPORT(dev) \
+	(1 << MLX5_CAP_GEN(dev, log_max_vlan_list))
+
 #define MLX5_MIN_BW_SHARE 1
 
 #define MLX5_RATE_TO_BW_SHARE(rate, divider, limit) \
@@ -65,14 +70,15 @@
 
 struct vport_ingress {
 	struct mlx5_flow_table *acl;
-	struct mlx5_flow_handle *allow_rule;
+	struct mlx5_flow_handle *allow_untagged_rule;
 	struct {
-		struct mlx5_flow_group *allow_spoofchk_only_grp;
+		DECLARE_BITMAP(acl_vlan_8021q_bitmap, VLAN_N_VID);
 		struct mlx5_flow_group *allow_untagged_spoofchk_grp;
-		struct mlx5_flow_group *allow_untagged_only_grp;
+		struct mlx5_flow_group *allow_tagged_spoofchk_grp;
 		struct mlx5_flow_group *drop_grp;
 		struct mlx5_flow_handle *drop_rule;
 		struct mlx5_fc *drop_counter;
+		struct list_head allowed_vlans_rules;
 	} legacy;
 	struct {
 		struct mlx5_flow_group *metadata_grp;
@@ -83,11 +89,15 @@ struct vport_ingress {
 
 struct vport_egress {
 	struct mlx5_flow_table *acl;
+	struct mlx5_flow_group *allow_untagged_grp;
 	struct mlx5_flow_group *allowed_vlans_grp;
 	struct mlx5_flow_group *drop_grp;
-	struct mlx5_flow_handle  *allowed_vlan;
+	struct mlx5_flow_handle  *allowed_vst_vlan;
 	struct {
+		DECLARE_BITMAP(acl_vlan_8021q_bitmap, VLAN_N_VID);
 		struct mlx5_flow_handle *drop_rule;
+		struct list_head allowed_vlans_rules;
+		struct mlx5_flow_handle *allow_untagged_rule;
 		struct mlx5_fc *drop_counter;
 	} legacy;
 };
@@ -221,6 +231,11 @@ enum {
 	MLX5_ESWITCH_VPORT_MATCH_METADATA = BIT(0),
 };
 
+enum mlx5_acl_flow_direction {
+	MLX5_ACL_EGRESS,
+	MLX5_ACL_INGRESS,
+};
+
 struct mlx5_eswitch {
 	struct mlx5_core_dev    *dev;
 	struct mlx5_nb          nb;
@@ -292,6 +307,12 @@ int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting);
 int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting);
 int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 				  u16 vport, struct ifla_vf_info *ivi);
+int mlx5_eswitch_add_vport_trunk_vlan(struct mlx5_eswitch *esw,
+				      u16 vport, u16 vlanid,
+				      enum mlx5_acl_flow_direction dir);
+int mlx5_eswitch_del_vport_trunk_vlan(struct mlx5_eswitch *esw,
+				      u16 vport, u16 vlanid,
+				      enum mlx5_acl_flow_direction dir);
 int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
 				 u16 vport,
 				 struct ifla_vf_stats *vf_stats);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 138a9d328b91..633cda35dd1f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1782,15 +1782,15 @@ static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
 		flow_act.modify_hdr = vport->ingress.offloads.modify_metadata;
 	}
 
-	vport->ingress.allow_rule =
+	vport->ingress.allow_untagged_rule =
 		mlx5_add_flow_rules(vport->ingress.acl, spec,
 				    &flow_act, NULL, 0);
-	if (IS_ERR(vport->ingress.allow_rule)) {
-		err = PTR_ERR(vport->ingress.allow_rule);
+	if (IS_ERR(vport->ingress.allow_untagged_rule)) {
+		err = PTR_ERR(vport->ingress.allow_untagged_rule);
 		esw_warn(esw->dev,
 			 "vport[%d] configure ingress untagged allow rule, err(%d)\n",
 			 vport->vport, err);
-		vport->ingress.allow_rule = NULL;
+		vport->ingress.allow_untagged_rule = NULL;
 		goto out;
 	}
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
                   ` (6 preceding siblings ...)
  2019-11-12 17:13 ` [net-next 7/8] net/mlx5: Add eswitch ACL vlan trunk api Saeed Mahameed
@ 2019-11-12 17:13 ` Saeed Mahameed
  2019-11-12 23:41   ` Jakub Kicinski
  7 siblings, 1 reply; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-12 17:13 UTC (permalink / raw)
  To: David S. Miller; +Cc: netdev, Ariel Levkovich, Jakub Kicinski, Saeed Mahameed

From: Ariel Levkovich <lariel@mellanox.com>

Implement VF ACL access via the tc flower API to allow
admins to configure the allowed VLAN ids on a VF interface.

To add a VLAN id to a VF's ingress/egress ACL table while
in legacy SRIOV mode, the implementation intercepts tc flows
created on the PF device where the flower matching keys include
the VF's mac address as the src_mac (eswitch ingress) or the
dst_mac (eswitch egress) and the action is accept.

In such cases, the mlx5 driver interprets these flows as adding
a VLAN id to the VF's ingress/egress ACL table and updates
the rules in that table using the eswitch ACL configuration API
introduced in a previous patch.
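
As an illustrative sketch (the interface name, VF mac address and
VLAN id below are hypothetical, not taken from this series), the
intended admin usage with tc flower would look roughly like:

```shell
# Hypothetical example: allow VLAN 100 for the VF whose mac is
# 00:11:22:33:44:55, with the PF (enp3s0f0) in legacy SRIOV mode.
# Both filters attach to the PF; the driver infers the ACL
# direction from which mac field is matched.

# eswitch ingress ACL: VF mac as src_mac, action accept
tc filter add dev enp3s0f0 ingress protocol 802.1Q flower \
        src_mac 00:11:22:33:44:55 vlan_id 100 action pass

# eswitch egress ACL: VF mac as dst_mac, action accept
tc filter add dev enp3s0f0 ingress protocol 802.1Q flower \
        dst_mac 00:11:22:33:44:55 vlan_id 100 action pass
```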

Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 139 +++++++++++++++++-
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |  19 +++
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |   1 +
 3 files changed, 152 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index bb970b2ebf8a..66b51a3d67c9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -68,6 +68,12 @@ struct mlx5_nic_flow_attr {
 	struct mlx5_fc		*counter;
 };
 
+struct mlx5_vf_acl_flow_attr {
+	u16 acl_vlanid;
+	u16 vport;
+	enum mlx5_acl_flow_direction dir;
+};
+
 #define MLX5E_TC_FLOW_BASE (MLX5E_TC_FLAG_LAST_EXPORTED_BIT + 1)
 
 enum {
@@ -82,6 +88,7 @@ enum {
 	MLX5E_TC_FLOW_FLAG_DUP		= MLX5E_TC_FLOW_BASE + 4,
 	MLX5E_TC_FLOW_FLAG_NOT_READY	= MLX5E_TC_FLOW_BASE + 5,
 	MLX5E_TC_FLOW_FLAG_DELETED	= MLX5E_TC_FLOW_BASE + 6,
+	MLX5E_TC_FLOW_FLAG_VF_ACL	= MLX5E_TC_FLOW_BASE + 7,
 };
 
 #define MLX5E_TC_MAX_SPLITS 1
@@ -135,6 +142,7 @@ struct mlx5e_tc_flow {
 	union {
 		struct mlx5_esw_flow_attr esw_attr[0];
 		struct mlx5_nic_flow_attr nic_attr[0];
+		struct mlx5_vf_acl_flow_attr acl_attr[0];
 	};
 };
 
@@ -991,6 +999,15 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
 	return PTR_ERR_OR_ZERO(flow->rule[0]);
 }
 
+static void mlx5e_tc_del_vf_acl_flow(struct mlx5e_priv *priv,
+				     struct mlx5e_tc_flow *flow)
+{
+	mlx5_eswitch_del_vport_trunk_vlan(priv->mdev->priv.eswitch,
+					  flow->acl_attr->vport,
+					  flow->acl_attr->acl_vlanid,
+					  flow->acl_attr->dir);
+}
+
 static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
 				  struct mlx5e_tc_flow *flow)
 {
@@ -1640,6 +1657,8 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
 	if (mlx5e_is_eswitch_flow(flow)) {
 		mlx5e_tc_del_fdb_peer_flow(flow);
 		mlx5e_tc_del_fdb_flow(priv, flow);
+	} else if (flow_flag_test(flow, VF_ACL)) {
+		mlx5e_tc_del_vf_acl_flow(priv, flow);
 	} else {
 		mlx5e_tc_del_nic_flow(priv, flow);
 	}
@@ -3718,6 +3737,110 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
 	return err;
 }
 
+static bool
+parse_vf_acl_flow(struct mlx5_eswitch *esw,
+		  struct flow_cls_offload *f,
+		  struct mlx5_vf_acl_flow_attr *attr)
+{
+	struct flow_rule *rule	 = flow_cls_offload_flow_rule(f);
+	struct flow_dissector *dissector = rule->match.dissector;
+	struct flow_action *flow_action = &rule->action;
+	struct flow_match_eth_addrs match_eth;
+	const struct flow_action_entry *act;
+	struct flow_match_vlan match_vlan;
+	int i;
+
+	if (!esw || esw->mode != MLX5_ESWITCH_LEGACY)
+		return false;
+
+	if (dissector->used_keys ^
+	    (BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+	     BIT(FLOW_DISSECTOR_KEY_BASIC) |
+	     BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+	     BIT(FLOW_DISSECTOR_KEY_VLAN)))
+		return false;
+
+	/* Allowing only ACCEPT action for acl */
+	if (!flow_action_has_entries(flow_action))
+		return false;
+	flow_action_for_each(i, act, flow_action)
+		if (act->id != FLOW_ACTION_ACCEPT)
+			return false;
+
+	flow_rule_match_vlan(rule, &match_vlan);
+	if (!match_vlan.mask->vlan_id ||
+	    match_vlan.mask->vlan_priority)
+		return false;
+
+	if (match_vlan.key->vlan_tpid != htons(ETH_P_8021Q))
+		return false;
+
+	attr->acl_vlanid = match_vlan.key->vlan_id;
+
+	flow_rule_match_eth_addrs(rule, &match_eth);
+	if (!is_zero_ether_addr(match_eth.mask->dst) &&
+	    !is_zero_ether_addr(match_eth.mask->src))
+		return false;
+
+	if (!is_zero_ether_addr(match_eth.mask->src))
+		attr->dir = MLX5_ACL_INGRESS;
+	else if (!is_zero_ether_addr(match_eth.mask->dst))
+		attr->dir = MLX5_ACL_EGRESS;
+	else
+		return false;
+
+	return true;
+}
+
+static int
+mlx5e_add_vf_acl_flow(struct mlx5e_priv *priv,
+		      struct flow_cls_offload *f,
+		      unsigned long flow_flags,
+		      struct mlx5_vf_acl_flow_attr *acl_attr,
+		      struct mlx5e_tc_flow **__flow)
+{
+	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mlx5e_tc_flow_parse_attr *parse_attr;
+	struct flow_match_eth_addrs match;
+	struct mlx5e_tc_flow *flow;
+	int attr_size, vport, err;
+
+	flow_rule_match_eth_addrs(rule, &match);
+	vport = mlx5_get_vport_by_mac(esw, (acl_attr->dir == MLX5_ACL_INGRESS) ?
+					   match.key->src :
+					   match.key->dst);
+	if (vport <= 0)
+		return -ENOENT;
+
+	acl_attr->vport = vport;
+	flow_flags |= BIT(MLX5E_TC_FLOW_FLAG_VF_ACL);
+
+	attr_size  = sizeof(struct mlx5_vf_acl_flow_attr);
+	err = mlx5e_alloc_flow(priv, attr_size, f, flow_flags,
+			       &parse_attr, &flow);
+	if (err)
+		return err;
+
+	*flow->acl_attr = *acl_attr;
+	err = mlx5_eswitch_add_vport_trunk_vlan(esw, acl_attr->vport,
+						acl_attr->acl_vlanid, acl_attr->dir);
+	if (err)
+		goto err_free;
+
+	flow_flag_set(flow, OFFLOADED);
+	kvfree(parse_attr);
+	*__flow = flow;
+
+	return 0;
+
+err_free:
+	mlx5e_flow_put(priv, flow);
+	kvfree(parse_attr);
+
+	return err;
+}
+
 static int
 mlx5e_add_nic_flow(struct mlx5e_priv *priv,
 		   struct flow_cls_offload *f,
@@ -3777,8 +3900,8 @@ mlx5e_tc_add_flow(struct mlx5e_priv *priv,
 		  struct mlx5e_tc_flow **flow)
 {
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mlx5_vf_acl_flow_attr attr;
 	unsigned long flow_flags;
-	int err;
 
 	get_flags(flags, &flow_flags);
 
@@ -3786,13 +3909,15 @@ mlx5e_tc_add_flow(struct mlx5e_priv *priv,
 		return -EOPNOTSUPP;
 
 	if (esw && esw->mode == MLX5_ESWITCH_OFFLOADS)
-		err = mlx5e_add_fdb_flow(priv, f, flow_flags,
-					 filter_dev, flow);
-	else
-		err = mlx5e_add_nic_flow(priv, f, flow_flags,
-					 filter_dev, flow);
+		return mlx5e_add_fdb_flow(priv, f, flow_flags,
+					  filter_dev, flow);
 
-	return err;
+	if (parse_vf_acl_flow(esw, f, &attr))
+		return mlx5e_add_vf_acl_flow(priv, f, flow_flags,
+					     &attr, flow);
+
+	return mlx5e_add_nic_flow(priv, f, flow_flags,
+				  filter_dev, flow);
 }
 
 int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 41d67ca6cce3..11436f19b566 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -66,6 +66,25 @@ struct mlx5_acl_vlan {
 static void esw_destroy_legacy_fdb_table(struct mlx5_eswitch *esw);
 static void esw_cleanup_vepa_rules(struct mlx5_eswitch *esw);
 
+int mlx5_get_vport_by_mac(struct mlx5_eswitch *esw, u8 *mac)
+{
+	struct mlx5_vport *evport;
+	int vport = -ENOENT;
+	int i;
+
+	mutex_lock(&esw->state_lock);
+	mlx5_esw_for_all_vports(esw, i, evport) {
+		if (!ether_addr_equal(mac, evport->info.mac))
+			continue;
+
+		vport = i;
+		break;
+	}
+
+	mutex_unlock(&esw->state_lock);
+	return vport;
+}
+
 struct mlx5_vport *__must_check
 mlx5_eswitch_get_vport(struct mlx5_eswitch *esw, u16 vport_num)
 {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 5c84dbdb300f..9cfe34d8877f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -630,6 +630,7 @@ bool mlx5_eswitch_is_vf_vport(const struct mlx5_eswitch *esw, u16 vport_num);
 
 void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs);
 int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type, void *data);
+int mlx5_get_vport_by_mac(struct mlx5_eswitch *esw, u8 *mac);
 
 int
 mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-12 17:13 ` [net-next 8/8] net/mlx5: Add vf ACL access via tc flower Saeed Mahameed
@ 2019-11-12 23:41   ` Jakub Kicinski
  2019-11-13  0:21     ` Marcelo Ricardo Leitner
                       ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Jakub Kicinski @ 2019-11-12 23:41 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: David S. Miller, netdev, Ariel Levkovich

On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:
> From: Ariel Levkovich <lariel@mellanox.com>
> 
> Implementing vf ACL access via tc flower api to allow
> admins to configure the allowed vlan ids on a vf interface.
> 
> To add a vlan id to a vf's ingress/egress ACL table while
> in legacy sriov mode, the implementation intercepts tc flows
> created on the pf device where the flower matching keys include
> the vf's mac address as the src_mac (eswitch ingress) or the
> dst_mac (eswitch egress) while the action is accept.
> 
> In such cases, the mlx5 driver interprets these flows as adding
> a vlan id to the vf's ingress/egress ACL table and updates
> the rules in that table using eswitch ACL configuration api
> that is introduced in a previous patch.

Nack, the magic interpretation of rules installed on the PF is a no go.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-12 23:41   ` Jakub Kicinski
@ 2019-11-13  0:21     ` Marcelo Ricardo Leitner
  2019-11-13  0:31     ` Saeed Mahameed
  2019-11-13 13:06     ` Jiri Pirko
  2 siblings, 0 replies; 15+ messages in thread
From: Marcelo Ricardo Leitner @ 2019-11-13  0:21 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Saeed Mahameed, David S. Miller, netdev, Ariel Levkovich

On Tue, Nov 12, 2019 at 03:41:24PM -0800, Jakub Kicinski wrote:
> On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:
> > From: Ariel Levkovich <lariel@mellanox.com>
> > 
> > Implementing vf ACL access via tc flower api to allow
> > admins to configure the allowed vlan ids on a vf interface.
> > 
> > To add a vlan id to a vf's ingress/egress ACL table while
> > in legacy sriov mode, the implementation intercepts tc flows
> > created on the pf device where the flower matching keys include
> > the vf's mac address as the src_mac (eswitch ingress) or the
> > dst_mac (eswitch egress) while the action is accept.
> > 
> > In such cases, the mlx5 driver interprets these flows as adding
> > a vlan id to the vf's ingress/egress ACL table and updates
> > the rules in that table using eswitch ACL configuration api
> > that is introduced in a previous patch.
> 
> Nack, the magic interpretation of rules installed on the PF is a no go.

+1

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-12 23:41   ` Jakub Kicinski
  2019-11-13  0:21     ` Marcelo Ricardo Leitner
@ 2019-11-13  0:31     ` Saeed Mahameed
  2019-11-13 20:19       ` Jakub Kicinski
  2019-11-13 13:06     ` Jiri Pirko
  2 siblings, 1 reply; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-13  0:31 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Saeed Mahameed, David S. Miller, netdev, Ariel Levkovich

On Tue, Nov 12, 2019 at 3:41 PM Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
>
> On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:
> > From: Ariel Levkovich <lariel@mellanox.com>
> >
> > Implementing vf ACL access via tc flower api to allow
> > admins to configure the allowed vlan ids on a vf interface.
> >
> > To add a vlan id to a vf's ingress/egress ACL table while
> > in legacy sriov mode, the implementation intercepts tc flows
> > created on the pf device where the flower matching keys include
> > the vf's mac address as the src_mac (eswitch ingress) or the
> > dst_mac (eswitch egress) while the action is accept.
> >
> > In such cases, the mlx5 driver interprets these flows as adding
> > a vlan id to the vf's ingress/egress ACL table and updates
> > the rules in that table using eswitch ACL configuration api
> > that is introduced in a previous patch.
>
> Nack, the magic interpretation of rules installed on the PF is a no go.

The PF is the eswitch manager, so it is legitimate for the PF to forward
rules to the eswitch FDB; we do it all over the place, and this is how
ALL legacy ndos work. Why should this be treated differently?

Anyway, just for the record, I don't think you are being fair here; you
just come up with rules on the go to block anything related to legacy
mode.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-12 23:41   ` Jakub Kicinski
  2019-11-13  0:21     ` Marcelo Ricardo Leitner
  2019-11-13  0:31     ` Saeed Mahameed
@ 2019-11-13 13:06     ` Jiri Pirko
  2 siblings, 0 replies; 15+ messages in thread
From: Jiri Pirko @ 2019-11-13 13:06 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: Saeed Mahameed, David S. Miller, netdev, Ariel Levkovich

Wed, Nov 13, 2019 at 12:41:24AM CET, jakub.kicinski@netronome.com wrote:
>On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:
>> From: Ariel Levkovich <lariel@mellanox.com>
>> 
>> Implementing vf ACL access via tc flower api to allow
>> admins to configure the allowed vlan ids on a vf interface.
>> 
>> To add a vlan id to a vf's ingress/egress ACL table while
>> in legacy sriov mode, the implementation intercepts tc flows
>> created on the pf device where the flower matching keys include
>> the vf's mac address as the src_mac (eswitch ingress) or the
>> dst_mac (eswitch egress) while the action is accept.
>> 
>> In such cases, the mlx5 driver interprets these flows as adding
>> a vlan id to the vf's ingress/egress ACL table and updates
>> the rules in that table using eswitch ACL configuration api
>> that is introduced in a previous patch.
>
>Nack, the magic interpretation of rules installed on the PF is a no go.

For the record, I don't like this approach either. The solution is
out there; one just has to properly implement bridge offloading.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-13  0:31     ` Saeed Mahameed
@ 2019-11-13 20:19       ` Jakub Kicinski
  2019-11-13 21:35         ` Saeed Mahameed
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2019-11-13 20:19 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: Saeed Mahameed, David S. Miller, netdev, Ariel Levkovich

On Tue, 12 Nov 2019 16:31:19 -0800, Saeed Mahameed wrote:
> On Tue, Nov 12, 2019 at 3:41 PM Jakub Kicinski wrote:
> > On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:  
> > > From: Ariel Levkovich <lariel@mellanox.com>
> > >
> > > Implementing vf ACL access via tc flower api to allow
> > > admins to configure the allowed vlan ids on a vf interface.
> > >
> > > To add a vlan id to a vf's ingress/egress ACL table while
> > > in legacy sriov mode, the implementation intercepts tc flows
> > > created on the pf device where the flower matching keys include
> > > the vf's mac address as the src_mac (eswitch ingress) or the
> > > dst_mac (eswitch egress) while the action is accept.
> > >
> > > In such cases, the mlx5 driver interprets these flows as adding
> > > a vlan id to the vf's ingress/egress ACL table and updates
> > > the rules in that table using eswitch ACL configuration api
> > > that is introduced in a previous patch.  
> >
> > Nack, the magic interpretation of rules installed on the PF is a no go.  
> 
> The PF is the eswitch manager, so it is legitimate for the PF to
> forward rules to the eswitch FDB; we do it all over the place, and this
> is how ALL legacy ndos work. Why should this be treated differently?

It's not a legacy NDO, there's little precedent for it, and you're
inventing a new meaning for an operation.

> Anyway, just for the record, I don't think you are being fair here;
> you just come up with rules on the go to block anything related to
> legacy mode.

I have been trying to block everything related to legacy NDOs for a
while now, and I'm not the only one (/me remembers Or at netdevconf
1.1). I'm sorry, but I won't go and dig out the links now; it's a waste
of time.

Maybe we differ on the definition of fairness. I'm against this exactly
_because_ I'm fair: nobody gets a free pass, no matter how much we
otherwise appreciate a given company's contributions to the kernel...

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [net-next 8/8] net/mlx5: Add vf ACL access via tc flower
  2019-11-13 20:19       ` Jakub Kicinski
@ 2019-11-13 21:35         ` Saeed Mahameed
  0 siblings, 0 replies; 15+ messages in thread
From: Saeed Mahameed @ 2019-11-13 21:35 UTC (permalink / raw)
  To: saeedm, jakub.kicinski; +Cc: Ariel Levkovich, davem, netdev

On Wed, 2019-11-13 at 12:19 -0800, Jakub Kicinski wrote:
> On Tue, 12 Nov 2019 16:31:19 -0800, Saeed Mahameed wrote:
> > On Tue, Nov 12, 2019 at 3:41 PM Jakub Kicinski wrote:
> > > On Tue, 12 Nov 2019 17:13:53 +0000, Saeed Mahameed wrote:  
> > > > From: Ariel Levkovich <lariel@mellanox.com>
> > > > 
> > > > Implementing vf ACL access via tc flower api to allow
> > > > admins to configure the allowed vlan ids on a vf interface.
> > > > 
> > > > To add a vlan id to a vf's ingress/egress ACL table while
> > > > in legacy sriov mode, the implementation intercepts tc flows
> > > > created on the pf device where the flower matching keys include
> > > > the vf's mac address as the src_mac (eswitch ingress) or the
> > > > dst_mac (eswitch egress) while the action is accept.
> > > > 
> > > > In such cases, the mlx5 driver interprets these flows as adding
> > > > a vlan id to the vf's ingress/egress ACL table and updates
> > > > the rules in that table using eswitch ACL configuration api
> > > > that is introduced in a previous patch.  
> > > 
> > > Nack, the magic interpretation of rules installed on the PF is a
> > > no go.  
> > 
> > The PF is the eswitch manager, so it is legitimate for the PF to
> > forward rules to the eswitch FDB; we do it all over the place, and
> > this is how ALL legacy ndos work. Why should this be treated
> > differently?
> 
> It's not a legacy NDO, there's little precedent for it, and you're
> inventing a new meaning for an operation.
> 
> > Anyway, just for the record, I don't think you are being fair here;
> > you just come up with rules on the go to block anything related to
> > legacy mode.
> 
> I have been trying to block everything related to legacy NDOs for a
> while now, and I'm not the only one (/me remembers Or at netdevconf
> 1.1). I'm sorry, but I won't go and dig out the links now; it's a
> waste of time.
> 
> Maybe we differ on the definition of fairness. I'm against this
> exactly _because_ I'm fair: nobody gets a free pass, no matter how
> much we otherwise appreciate a given company's contributions to the
> kernel...

I wasn't looking for free passes; we just disagree on how the pf driver
should interpret TC flower in the case of legacy sriov, which was never
defined, and no one really cared about it until this patch.

My only concern here is that people will make up their own
rules/interpretations on the go as they see fit to promote their own
agenda. This applies to both of us, and this is what makes it unfair:
we must go with your rules and interpretations at the end of the day.

Anyway, message received: we don't like legacy sriov, and everything
related to it will be handled with an iron fist. I will drop this
patch.



^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-11-13 21:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-12 17:13 [pull request][net-next 0/8] Mellanox, mlx5 updates 2019-11-12 Saeed Mahameed
2019-11-12 17:13 ` [net-next 1/8] net/mlx5: DR, Fix matcher builders select check Saeed Mahameed
2019-11-12 17:13 ` [net-next 2/8] net/mlx5: Read num_vfs before disabling SR-IOV Saeed Mahameed
2019-11-12 17:13 ` [net-next 3/8] net/mlx5: Remove redundant NULL initializations Saeed Mahameed
2019-11-12 17:13 ` [net-next 4/8] net/mlx5e: Fix error flow cleanup in mlx5e_tc_tun_create_header_ipv4/6 Saeed Mahameed
2019-11-12 17:13 ` [net-next 5/8] net/mlx5e: Set netdev name space on creation Saeed Mahameed
2019-11-12 17:13 ` [net-next 6/8] net/mlx5: Add devlink reload Saeed Mahameed
2019-11-12 17:13 ` [net-next 7/8] net/mlx5: Add eswitch ACL vlan trunk api Saeed Mahameed
2019-11-12 17:13 ` [net-next 8/8] net/mlx5: Add vf ACL access via tc flower Saeed Mahameed
2019-11-12 23:41   ` Jakub Kicinski
2019-11-13  0:21     ` Marcelo Ricardo Leitner
2019-11-13  0:31     ` Saeed Mahameed
2019-11-13 20:19       ` Jakub Kicinski
2019-11-13 21:35         ` Saeed Mahameed
2019-11-13 13:06     ` Jiri Pirko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).