* [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019
@ 2019-10-28 23:34 Saeed Mahameed
  2019-10-28 23:34 ` [PATCH mlx5-next 01/18] net/mlx5: Fixed a typo in a comment in esw_del_uc_addr() Saeed Mahameed
                   ` (18 more replies)
  0 siblings, 19 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:34 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma

Hi,

This series refactors and tidies up the eswitch vport and ACL code to
provide better separation between the eswitch legacy mode and switchdev
mode implementations in mlx5. This improves future scalability and eases
the introduction of new switchdev mode functionality that may rely on
structures shared with legacy mode, such as the vport ACL tables.

Summary:

1. Do vport ACL configuration on a per-vport basis when enabling/disabling a
vport. This allows vports to be enabled/disabled outside of the eswitch
configuration in the future (a rough sketch of the resulting enable flow
follows this list).

2. Split the code for legacy vs. offloads mode and make the separation clear.

3. Tidy up vport locking and workqueue usage.

4. Fix metadata enablement for the ECPF.

5. Make explicit use of the VF property to publish IB_DEVICE_VIRTUAL_FUNCTION.
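
For item 1, a rough sketch of the per-vport enable path this series moves
towards (simplified from patches 10-13 below; the helper names match the
driver code introduced there, and the elided steps are unchanged):

	static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
				    enum mlx5_eswitch_vport_event enabled_events)
	{
		int ret;

		mutex_lock(&esw->state_lock);
		/* Restore old vport configuration */
		esw_apply_vport_conf(esw, vport);

		/* Per-vport ACL setup; legacy and offloads modes branch inside */
		ret = esw_vport_setup_acl(esw, vport);
		if (ret)
			goto done;

		/* ... QoS, vport events, etc. as before ... */
	done:
		mutex_unlock(&esw->state_lock);
		return ret;
	}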

If there are no objections, this series will be applied to the mlx5-next
branch and later sent as a pull request to both the rdma-next and net-next
branches.

Thanks,
Saeed.

---

Parav Pandit (14):
  net/mlx5: E-switch, Introduce and use vlan rule config helper
  net/mlx5: Introduce and use mlx5_esw_is_manager_vport()
  net/mlx5: Correct comment for legacy fields
  net/mlx5: Move metdata fields under offloads structure
  net/mlx5: Move legacy drop counter and rule under legacy structure
  net/mlx5: Tide up state_lock and vport enabled flag usage
  net/mlx5: E-switch, Prepare code to handle vport enable error
  net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs
  net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle
  net/mlx5: E-switch, Offloads introduce and use per vport acl tables
    APIs
  net/mlx5: Restrict metadata disablement to offloads mode
  net/mlx5: Refactor ingress acl configuration
  net/mlx5: E-switch, Enable metadata on own vport
  IB/mlx5: Introduce and use mlx5_core_is_vf()

Qing Huang (1):
  net/mlx5: Fixed a typo in a comment in esw_del_uc_addr()

Vu Pham (3):
  net/mlx5: E-Switch, Rename egress config to generic name
  net/mlx5: E-Switch, Rename ingress acl config in offloads mode
  net/mlx5: E-switch, Offloads shift ACL programming during
    enable/disable vport

 drivers/infiniband/hw/mlx5/main.c             |   2 +-
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 573 +++++++++++-------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  66 +-
 .../mellanox/mlx5/core/eswitch_offloads.c     | 259 ++++----
 include/linux/mlx5/driver.h                   |   5 +
 5 files changed, 537 insertions(+), 368 deletions(-)

-- 
2.21.0


* [PATCH mlx5-next 01/18] net/mlx5: Fixed a typo in a comment in esw_del_uc_addr()
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
@ 2019-10-28 23:34 ` Saeed Mahameed
  2019-10-28 23:34 ` [PATCH mlx5-next 02/18] net/mlx5: E-Switch, Rename egress config to generic name Saeed Mahameed
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:34 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Qing Huang

From: Qing Huang <qing.huang@oracle.com>

Changed "managerss" to "managers".

Fixes: a1b3839ac4a4 ("net/mlx5: E-Switch, Properly refer to the esw manager vport")
Signed-off-by: Qing Huang <qing.huang@oracle.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 30aae76b6a1d..4c18ac1299ae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -530,7 +530,7 @@ static int esw_del_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
 	u16 vport = vaddr->vport;
 	int err = 0;
 
-	/* Skip mlx5_mpfs_del_mac for eswitch managerss,
+	/* Skip mlx5_mpfs_del_mac for eswitch managers,
 	 * it is already done by its netdev in mlx5e_execute_l2_action
 	 */
 	if (!vaddr->mpfs || esw->manager_vport == vport)
-- 
2.21.0


* [PATCH mlx5-next 02/18] net/mlx5: E-Switch, Rename egress config to generic name
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
  2019-10-28 23:34 ` [PATCH mlx5-next 01/18] net/mlx5: Fixed a typo in a comment in esw_del_uc_addr() Saeed Mahameed
@ 2019-10-28 23:34 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 03/18] net/mlx5: E-Switch, Rename ingress acl config in offloads mode Saeed Mahameed
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:34 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Vu Pham, Parav Pandit

From: Vu Pham <vuhuong@mellanox.com>

Refactor the vport egress configuration in offloads mode, including the
egress prio tag configuration, so that the code becomes symmetric with the
ingress configuration (see the sketch below).
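
The resulting split, roughly (taken from the diff below, trimmed to the new
wrapper):

	static int esw_vport_egress_config(struct mlx5_eswitch *esw,
					   struct mlx5_vport *vport)
	{
		int err;

		if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
			return 0;

		esw_vport_cleanup_egress_rules(esw, vport);

		/* generic egress ACL setup, mirroring the ingress side */
		err = esw_vport_enable_egress_acl(esw, vport);
		if (err)
			return err;

		err = esw_vport_egress_prio_tag_config(esw, vport);
		if (err)
			esw_vport_disable_egress_acl(esw, vport);

		return err;
	}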

Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../mellanox/mlx5/core/eswitch_offloads.c     | 50 ++++++++++---------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 00d71db15f22..506cea8181f9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1864,32 +1864,16 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
 	struct mlx5_flow_spec *spec;
 	int err = 0;
 
-	if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
-		return 0;
-
 	/* For prio tag mode, there is only 1 FTEs:
 	 * 1) prio tag packets - pop the prio tag VLAN, allow
 	 * Unmatched traffic is allowed by default
 	 */
-
-	esw_vport_cleanup_egress_rules(esw, vport);
-
-	err = esw_vport_enable_egress_acl(esw, vport);
-	if (err) {
-		mlx5_core_warn(esw->dev,
-			       "failed to enable egress acl (%d) on vport[%d]\n",
-			       err, vport->vport);
-		return err;
-	}
-
 	esw_debug(esw->dev,
 		  "vport[%d] configure prio tag egress rules\n", vport->vport);
 
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
-	if (!spec) {
-		err = -ENOMEM;
-		goto out_no_mem;
-	}
+	if (!spec)
+		return -ENOMEM;
 
 	/* prio tag vlan rule - pop it so VF receives untagged packets */
 	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
@@ -1909,14 +1893,9 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
 			 "vport[%d] configure egress pop prio tag vlan rule failed, err(%d)\n",
 			 vport->vport, err);
 		vport->egress.allowed_vlan = NULL;
-		goto out;
 	}
 
-out:
 	kvfree(spec);
-out_no_mem:
-	if (err)
-		esw_vport_cleanup_egress_rules(esw, vport);
 	return err;
 }
 
@@ -1961,6 +1940,29 @@ static int esw_vport_ingress_common_config(struct mlx5_eswitch *esw,
 	return err;
 }
 
+static int esw_vport_egress_config(struct mlx5_eswitch *esw,
+				   struct mlx5_vport *vport)
+{
+	int err;
+
+	if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
+		return 0;
+
+	esw_vport_cleanup_egress_rules(esw, vport);
+
+	err = esw_vport_enable_egress_acl(esw, vport);
+	if (err)
+		return err;
+
+	esw_debug(esw->dev, "vport(%d) configure egress rules\n", vport->vport);
+
+	err = esw_vport_egress_prio_tag_config(esw, vport);
+	if (err)
+		esw_vport_disable_egress_acl(esw, vport);
+
+	return err;
+}
+
 static bool
 esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
 {
@@ -1996,7 +1998,7 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
 			goto err_ingress;
 
 		if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
-			err = esw_vport_egress_prio_tag_config(esw, vport);
+			err = esw_vport_egress_config(esw, vport);
 			if (err)
 				goto err_egress;
 		}
-- 
2.21.0


* [PATCH mlx5-next 03/18] net/mlx5: E-Switch, Rename ingress acl config in offloads mode
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
  2019-10-28 23:34 ` [PATCH mlx5-next 01/18] net/mlx5: Fixed a typo in a comment in esw_del_uc_addr() Saeed Mahameed
  2019-10-28 23:34 ` [PATCH mlx5-next 02/18] net/mlx5: E-Switch, Rename egress config to generic name Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 04/18] net/mlx5: E-switch, Introduce and use vlan rule config helper Saeed Mahameed
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Vu Pham, Parav Pandit

From: Vu Pham <vuhuong@mellanox.com>

Rename esw_vport_ingress_common_config() to esw_vport_ingress_config() to
be consistent with the egress config function naming in offloads mode.

Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 506cea8181f9..48adec168a7c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1899,8 +1899,8 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
 	return err;
 }
 
-static int esw_vport_ingress_common_config(struct mlx5_eswitch *esw,
-					   struct mlx5_vport *vport)
+static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
+				    struct mlx5_vport *vport)
 {
 	int err;
 
@@ -1993,7 +1993,7 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
 		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
 
 	mlx5_esw_for_all_vports(esw, i, vport) {
-		err = esw_vport_ingress_common_config(esw, vport);
+		err = esw_vport_ingress_config(esw, vport);
 		if (err)
 			goto err_ingress;
 
-- 
2.21.0


* [PATCH mlx5-next 04/18] net/mlx5: E-switch, Introduce and use vlan rule config helper
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (2 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 03/18] net/mlx5: E-Switch, Rename ingress acl config in offloads mode Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 05/18] net/mlx5: Introduce and use mlx5_esw_is_manager_vport() Saeed Mahameed
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

Between legacy mode and switchdev mode, only two fields of the allowed
VLAN rule differ: the VLAN tag and the flow action.
Hence, to avoid duplicating code between the two modes, introduce and use
a helper function to configure the allowed VLAN rule (see the call sites
sketched below).

While at it, get rid of a duplicate debug message.
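
For reference, the two call sites after this patch (taken from the diffs
below):

	/* Legacy mode: allow the vport's configured VLAN */
	err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
						    MLX5_FLOW_CONTEXT_ACTION_ALLOW);

	/* Offloads prio tag mode: pop the prio tag VLAN (vid 0) and allow */
	err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, 0,
						    MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
						    MLX5_FLOW_CONTEXT_ACTION_ALLOW);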

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 68 ++++++++++++-------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  4 ++
 .../mellanox/mlx5/core/eswitch_offloads.c     | 54 +++------------
 3 files changed, 58 insertions(+), 68 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 4c18ac1299ae..ef7d84a1dbc2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1328,6 +1328,43 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 	return err;
 }
 
+int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
+					  struct mlx5_vport *vport,
+					  u16 vlan_id, u32 flow_action)
+{
+	struct mlx5_flow_act flow_act = {};
+	struct mlx5_flow_spec *spec;
+	int err = 0;
+
+	if (vport->egress.allowed_vlan)
+		return -EEXIST;
+
+	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+	if (!spec)
+		return -ENOMEM;
+
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
+	MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, vlan_id);
+
+	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+	flow_act.action = flow_action;
+	vport->egress.allowed_vlan =
+		mlx5_add_flow_rules(vport->egress.acl, spec,
+				    &flow_act, NULL, 0);
+	if (IS_ERR(vport->egress.allowed_vlan)) {
+		err = PTR_ERR(vport->egress.allowed_vlan);
+		esw_warn(esw->dev,
+			 "vport[%d] configure egress vlan rule failed, err(%d)\n",
+			 vport->vport, err);
+		vport->egress.allowed_vlan = NULL;
+	}
+
+	kvfree(spec);
+	return err;
+}
+
 static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 				   struct mlx5_vport *vport)
 {
@@ -1358,34 +1395,17 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 		  "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
 		  vport->vport, vport->info.vlan, vport->info.qos);
 
-	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
-	if (!spec) {
-		err = -ENOMEM;
-		goto out;
-	}
-
 	/* Allowed vlan rule */
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
-	MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, vport->info.vlan);
+	err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, vport->info.vlan,
+						    MLX5_FLOW_CONTEXT_ACTION_ALLOW);
+	if (err)
+		return err;
 
-	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
-	vport->egress.allowed_vlan =
-		mlx5_add_flow_rules(vport->egress.acl, spec,
-				    &flow_act, NULL, 0);
-	if (IS_ERR(vport->egress.allowed_vlan)) {
-		err = PTR_ERR(vport->egress.allowed_vlan);
-		esw_warn(esw->dev,
-			 "vport[%d] configure egress allowed vlan rule failed, err(%d)\n",
-			 vport->vport, err);
-		vport->egress.allowed_vlan = NULL;
+	/* Drop others rule (star rule) */
+	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+	if (!spec)
 		goto out;
-	}
 
-	/* Drop others rule (star rule) */
-	memset(spec, 0, sizeof(*spec));
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 
 	/* Attach egress drop flow counter */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 6bd6f5895244..1824b0ad7c9f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -421,6 +421,10 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
 int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
 				  u16 vport, u16 vlan, u8 qos, u8 set_flags);
 
+int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
+					  struct mlx5_vport *vport,
+					  u16 vlan_id, u32 flow_action);
+
 static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev,
 						       u8 vlan_depth)
 {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 48adec168a7c..f0c7abd09120 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1857,48 +1857,6 @@ void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	}
 }
 
-static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
-					    struct mlx5_vport *vport)
-{
-	struct mlx5_flow_act flow_act = {0};
-	struct mlx5_flow_spec *spec;
-	int err = 0;
-
-	/* For prio tag mode, there is only 1 FTEs:
-	 * 1) prio tag packets - pop the prio tag VLAN, allow
-	 * Unmatched traffic is allowed by default
-	 */
-	esw_debug(esw->dev,
-		  "vport[%d] configure prio tag egress rules\n", vport->vport);
-
-	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
-	if (!spec)
-		return -ENOMEM;
-
-	/* prio tag vlan rule - pop it so VF receives untagged packets */
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
-	MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, 0);
-
-	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
-			  MLX5_FLOW_CONTEXT_ACTION_ALLOW;
-	vport->egress.allowed_vlan =
-		mlx5_add_flow_rules(vport->egress.acl, spec,
-				    &flow_act, NULL, 0);
-	if (IS_ERR(vport->egress.allowed_vlan)) {
-		err = PTR_ERR(vport->egress.allowed_vlan);
-		esw_warn(esw->dev,
-			 "vport[%d] configure egress pop prio tag vlan rule failed, err(%d)\n",
-			 vport->vport, err);
-		vport->egress.allowed_vlan = NULL;
-	}
-
-	kvfree(spec);
-	return err;
-}
-
 static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport)
 {
@@ -1954,9 +1912,17 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 	if (err)
 		return err;
 
-	esw_debug(esw->dev, "vport(%d) configure egress rules\n", vport->vport);
+	/* For prio tag mode, there is only 1 FTEs:
+	 * 1) prio tag packets - pop the prio tag VLAN, allow
+	 * Unmatched traffic is allowed by default
+	 */
+	esw_debug(esw->dev,
+		  "vport[%d] configure prio tag egress rules\n", vport->vport);
 
-	err = esw_vport_egress_prio_tag_config(esw, vport);
+	/* prio tag vlan rule - pop it so VF receives untagged packets */
+	err = mlx5_esw_create_vport_egress_acl_vlan(esw, vport, 0,
+						    MLX5_FLOW_CONTEXT_ACTION_VLAN_POP |
+						    MLX5_FLOW_CONTEXT_ACTION_ALLOW);
 	if (err)
 		esw_vport_disable_egress_acl(esw, vport);
 
-- 
2.21.0


* [PATCH mlx5-next 05/18] net/mlx5: Introduce and use mlx5_esw_is_manager_vport()
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (3 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 04/18] net/mlx5: E-switch, Introduce and use vlan rule config helper Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 06/18] net/mlx5: Correct comment for legacy fields Saeed Mahameed
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

Currently esw_enable_vport() checks the vport number against zero to
decide whether to create drop counters, regardless of whether it runs on
the ECPF or the PF, while esw_disable_vport() does account for that
scenario.

To check for the manager vport consistently across the code, introduce and
use mlx5_esw_is_manager_vport(), which tells whether a given vport is the
eswitch manager vport (see the sketch below).
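
The helper and the updated drop counter check, as they appear in the diff
below:

	static inline bool
	mlx5_esw_is_manager_vport(const struct mlx5_eswitch *esw, u16 vport_num)
	{
		return esw->manager_vport == vport_num;
	}

	/* in esw_enable_vport(): replaces the bare 'if (vport_num)' check */
	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
	    esw->mode == MLX5_ESWITCH_LEGACY)
		esw_vport_create_drop_counters(vport);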

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 13 +++++++------
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h |  6 ++++++
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index ef7d84a1dbc2..fa1228a8005f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -501,7 +501,7 @@ static int esw_add_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
 	/* Skip mlx5_mpfs_add_mac for eswitch_managers,
 	 * it is already done by its netdev in mlx5e_execute_l2_action
 	 */
-	if (esw->manager_vport == vport)
+	if (mlx5_esw_is_manager_vport(esw, vport))
 		goto fdb_add;
 
 	err = mlx5_mpfs_add_mac(esw->dev, mac);
@@ -533,7 +533,7 @@ static int esw_del_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
 	/* Skip mlx5_mpfs_del_mac for eswitch managers,
 	 * it is already done by its netdev in mlx5e_execute_l2_action
 	 */
-	if (!vaddr->mpfs || esw->manager_vport == vport)
+	if (!vaddr->mpfs || mlx5_esw_is_manager_vport(esw, vport))
 		goto fdb_del;
 
 	err = mlx5_mpfs_del_mac(esw->dev, mac);
@@ -1639,7 +1639,7 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
 	u16 vport_num = vport->vport;
 	int flags;
 
-	if (esw->manager_vport == vport_num)
+	if (mlx5_esw_is_manager_vport(esw, vport_num))
 		return;
 
 	mlx5_modify_vport_admin_state(esw->dev,
@@ -1713,7 +1713,8 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 	esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
 
 	/* Create steering drop counters for ingress and egress ACLs */
-	if (vport_num && esw->mode == MLX5_ESWITCH_LEGACY)
+	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
+	    esw->mode == MLX5_ESWITCH_LEGACY)
 		esw_vport_create_drop_counters(vport);
 
 	/* Restore old vport configuration */
@@ -1731,7 +1732,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 	/* Esw manager is trusted by default. Host PF (vport 0) is trusted as well
 	 * in smartNIC as it's a vport group manager.
 	 */
-	if (esw->manager_vport == vport_num ||
+	if (mlx5_esw_is_manager_vport(esw, vport_num) ||
 	    (!vport_num && mlx5_core_is_ecpf(esw->dev)))
 		vport->info.trusted = true;
 
@@ -1766,7 +1767,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
 	esw_vport_change_handle_locked(vport);
 	vport->enabled_events = 0;
 	esw_vport_disable_qos(esw, vport);
-	if (esw->manager_vport != vport_num &&
+	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
 	    esw->mode == MLX5_ESWITCH_LEGACY) {
 		mlx5_modify_vport_admin_state(esw->dev,
 					      MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 1824b0ad7c9f..75e69644d70e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -463,6 +463,12 @@ static inline u16 mlx5_eswitch_manager_vport(struct mlx5_core_dev *dev)
 		MLX5_VPORT_ECPF : MLX5_VPORT_PF;
 }
 
+static inline bool
+mlx5_esw_is_manager_vport(const struct mlx5_eswitch *esw, u16 vport_num)
+{
+	return esw->manager_vport == vport_num;
+}
+
 static inline u16 mlx5_eswitch_first_host_vport_num(struct mlx5_core_dev *dev)
 {
 	return mlx5_core_is_ecpf_esw_manager(dev) ?
-- 
2.21.0


* [PATCH mlx5-next 06/18] net/mlx5: Correct comment for legacy fields
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (4 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 05/18] net/mlx5: Introduce and use mlx5_esw_is_manager_vport() Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 07/18] net/mlx5: Move metdata fields under offloads structure Saeed Mahameed
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

fdb_table is used in both legacy and offloads modes, so it was incorrect
for the comment to mark it as legacy specific. Fix the comment to reflect
that fdb_table is used in both legacy and offloads modes.

Fixes: 131ce7014043 ("net/mlx5: E-Switch, Remove redundant mc_promisc NULL check")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 75e69644d70e..a41d4aad9d28 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -217,8 +217,8 @@ enum {
 struct mlx5_eswitch {
 	struct mlx5_core_dev    *dev;
 	struct mlx5_nb          nb;
-	/* legacy data structures */
 	struct mlx5_eswitch_fdb fdb_table;
+	/* legacy data structures */
 	struct hlist_head       mc_table[MLX5_L2_ADDR_HASH_SIZE];
 	struct esw_mc_addr mc_promisc;
 	/* end of legacy */
-- 
2.21.0


* [PATCH mlx5-next 07/18] net/mlx5: Move metdata fields under offloads structure
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (5 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 06/18] net/mlx5: Correct comment for legacy fields Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 08/18] net/mlx5: Move legacy drop counter and rule under legacy structure Saeed Mahameed
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

The metadata fields are specific to offloads mode.
To improve code readability, move them under an 'offloads' sub-structure
(see the resulting layout below).
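
Resulting layout of struct vport_ingress (abridged from the eswitch.h hunk
below):

	struct vport_ingress {
		...
		struct mlx5_flow_handle  *allow_rule;
		struct mlx5_flow_handle  *drop_rule;
		struct mlx5_fc           *drop_counter;
		struct {
			struct mlx5_modify_hdr *modify_metadata;
			struct mlx5_flow_handle *modify_metadata_rule;
		} offloads;
	};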

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  6 ++--
 .../mellanox/mlx5/core/eswitch_offloads.c     | 33 ++++++++++---------
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index a41d4aad9d28..5f862992b9c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -69,11 +69,13 @@ struct vport_ingress {
 	struct mlx5_flow_group *allow_spoofchk_only_grp;
 	struct mlx5_flow_group *allow_untagged_only_grp;
 	struct mlx5_flow_group *drop_grp;
-	struct mlx5_modify_hdr   *modify_metadata;
-	struct mlx5_flow_handle  *modify_metadata_rule;
 	struct mlx5_flow_handle  *allow_rule;
 	struct mlx5_flow_handle  *drop_rule;
 	struct mlx5_fc           *drop_counter;
+	struct {
+		struct mlx5_modify_hdr *modify_metadata;
+		struct mlx5_flow_handle *modify_metadata_rule;
+	} offloads;
 };
 
 struct vport_egress {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index f0c7abd09120..874e70e3792a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1778,9 +1778,9 @@ static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
 	flow_act.vlan[0].vid = 0;
 	flow_act.vlan[0].prio = 0;
 
-	if (vport->ingress.modify_metadata_rule) {
+	if (vport->ingress.offloads.modify_metadata_rule) {
 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
-		flow_act.modify_hdr = vport->ingress.modify_metadata;
+		flow_act.modify_hdr = vport->ingress.offloads.modify_metadata;
 	}
 
 	vport->ingress.allow_rule =
@@ -1816,11 +1816,11 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	MLX5_SET(set_action_in, action, data,
 		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport->vport));
 
-	vport->ingress.modify_metadata =
+	vport->ingress.offloads.modify_metadata =
 		mlx5_modify_header_alloc(esw->dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
 					 1, action);
-	if (IS_ERR(vport->ingress.modify_metadata)) {
-		err = PTR_ERR(vport->ingress.modify_metadata);
+	if (IS_ERR(vport->ingress.offloads.modify_metadata)) {
+		err = PTR_ERR(vport->ingress.offloads.modify_metadata);
 		esw_warn(esw->dev,
 			 "failed to alloc modify header for vport %d ingress acl (%d)\n",
 			 vport->vport, err);
@@ -1828,32 +1828,33 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	}
 
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_ALLOW;
-	flow_act.modify_hdr = vport->ingress.modify_metadata;
-	vport->ingress.modify_metadata_rule = mlx5_add_flow_rules(vport->ingress.acl,
-								  &spec, &flow_act, NULL, 0);
-	if (IS_ERR(vport->ingress.modify_metadata_rule)) {
-		err = PTR_ERR(vport->ingress.modify_metadata_rule);
+	flow_act.modify_hdr = vport->ingress.offloads.modify_metadata;
+	vport->ingress.offloads.modify_metadata_rule =
+				mlx5_add_flow_rules(vport->ingress.acl,
+						    &spec, &flow_act, NULL, 0);
+	if (IS_ERR(vport->ingress.offloads.modify_metadata_rule)) {
+		err = PTR_ERR(vport->ingress.offloads.modify_metadata_rule);
 		esw_warn(esw->dev,
 			 "failed to add setting metadata rule for vport %d ingress acl, err(%d)\n",
 			 vport->vport, err);
-		vport->ingress.modify_metadata_rule = NULL;
+		vport->ingress.offloads.modify_metadata_rule = NULL;
 		goto out;
 	}
 
 out:
 	if (err)
-		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
+		mlx5_modify_header_dealloc(esw->dev, vport->ingress.offloads.modify_metadata);
 	return err;
 }
 
 void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 					       struct mlx5_vport *vport)
 {
-	if (vport->ingress.modify_metadata_rule) {
-		mlx5_del_flow_rules(vport->ingress.modify_metadata_rule);
-		mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata);
+	if (vport->ingress.offloads.modify_metadata_rule) {
+		mlx5_del_flow_rules(vport->ingress.offloads.modify_metadata_rule);
+		mlx5_modify_header_dealloc(esw->dev, vport->ingress.offloads.modify_metadata);
 
-		vport->ingress.modify_metadata_rule = NULL;
+		vport->ingress.offloads.modify_metadata_rule = NULL;
 	}
 }
 
-- 
2.21.0


* [PATCH mlx5-next 08/18] net/mlx5: Move legacy drop counter and rule under legacy structure
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (6 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 07/18] net/mlx5: Move metdata fields under offloads structure Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 09/18] net/mlx5: Tide up state_lock and vport enabled flag usage Saeed Mahameed
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit

From: Parav Pandit <parav@mellanox.com>

To improve code readability, move the legacy drop counters and drop rules
under a 'legacy' sub-structure (see the resulting layout below).

While at it,
(a) prefix the drop flow counter helpers with legacy_.
(b) nullify the rule pointers only if they were valid.
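
Resulting layout (abridged from the eswitch.h hunk below):

	struct vport_ingress {
		...
		struct mlx5_flow_handle  *allow_rule;
		struct {
			struct mlx5_flow_handle *drop_rule;
			struct mlx5_fc *drop_counter;
		} legacy;
		struct {
			struct mlx5_modify_hdr *modify_metadata;
			struct mlx5_flow_handle *modify_metadata_rule;
		} offloads;
	};

	struct vport_egress {
		...
		struct mlx5_flow_handle  *allowed_vlan;
		struct {
			struct mlx5_flow_handle *drop_rule;
			struct mlx5_fc *drop_counter;
		} legacy;
	};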

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 82 ++++++++++---------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 12 ++-
 2 files changed, 50 insertions(+), 44 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index fa1228a8005f..0dd5e5d5ea35 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1040,14 +1040,15 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport)
 {
-	if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan))
+	if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan)) {
 		mlx5_del_flow_rules(vport->egress.allowed_vlan);
+		vport->egress.allowed_vlan = NULL;
+	}
 
-	if (!IS_ERR_OR_NULL(vport->egress.drop_rule))
-		mlx5_del_flow_rules(vport->egress.drop_rule);
-
-	vport->egress.allowed_vlan = NULL;
-	vport->egress.drop_rule = NULL;
+	if (!IS_ERR_OR_NULL(vport->egress.legacy.drop_rule)) {
+		mlx5_del_flow_rules(vport->egress.legacy.drop_rule);
+		vport->egress.legacy.drop_rule = NULL;
+	}
 }
 
 void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
@@ -1202,14 +1203,15 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
 void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport)
 {
-	if (!IS_ERR_OR_NULL(vport->ingress.drop_rule))
-		mlx5_del_flow_rules(vport->ingress.drop_rule);
+	if (!IS_ERR_OR_NULL(vport->ingress.legacy.drop_rule)) {
+		mlx5_del_flow_rules(vport->ingress.legacy.drop_rule);
+		vport->ingress.legacy.drop_rule = NULL;
+	}
 
-	if (!IS_ERR_OR_NULL(vport->ingress.allow_rule))
+	if (!IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
 		mlx5_del_flow_rules(vport->ingress.allow_rule);
-
-	vport->ingress.drop_rule = NULL;
-	vport->ingress.allow_rule = NULL;
+		vport->ingress.allow_rule = NULL;
+	}
 
 	esw_vport_del_ingress_acl_modify_metadata(esw, vport);
 }
@@ -1238,7 +1240,7 @@ void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
 static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport)
 {
-	struct mlx5_fc *counter = vport->ingress.drop_counter;
+	struct mlx5_fc *counter = vport->ingress.legacy.drop_counter;
 	struct mlx5_flow_destination drop_ctr_dst = {0};
 	struct mlx5_flow_destination *dst = NULL;
 	struct mlx5_flow_act flow_act = {0};
@@ -1309,15 +1311,15 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		dst = &drop_ctr_dst;
 		dest_num++;
 	}
-	vport->ingress.drop_rule =
+	vport->ingress.legacy.drop_rule =
 		mlx5_add_flow_rules(vport->ingress.acl, spec,
 				    &flow_act, dst, dest_num);
-	if (IS_ERR(vport->ingress.drop_rule)) {
-		err = PTR_ERR(vport->ingress.drop_rule);
+	if (IS_ERR(vport->ingress.legacy.drop_rule)) {
+		err = PTR_ERR(vport->ingress.legacy.drop_rule);
 		esw_warn(esw->dev,
 			 "vport[%d] configure ingress drop rule, err(%d)\n",
 			 vport->vport, err);
-		vport->ingress.drop_rule = NULL;
+		vport->ingress.legacy.drop_rule = NULL;
 		goto out;
 	}
 
@@ -1368,7 +1370,7 @@ int mlx5_esw_create_vport_egress_acl_vlan(struct mlx5_eswitch *esw,
 static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 				   struct mlx5_vport *vport)
 {
-	struct mlx5_fc *counter = vport->egress.drop_counter;
+	struct mlx5_fc *counter = vport->egress.legacy.drop_counter;
 	struct mlx5_flow_destination drop_ctr_dst = {0};
 	struct mlx5_flow_destination *dst = NULL;
 	struct mlx5_flow_act flow_act = {0};
@@ -1416,15 +1418,15 @@ static int esw_vport_egress_config(struct mlx5_eswitch *esw,
 		dst = &drop_ctr_dst;
 		dest_num++;
 	}
-	vport->egress.drop_rule =
+	vport->egress.legacy.drop_rule =
 		mlx5_add_flow_rules(vport->egress.acl, spec,
 				    &flow_act, dst, dest_num);
-	if (IS_ERR(vport->egress.drop_rule)) {
-		err = PTR_ERR(vport->egress.drop_rule);
+	if (IS_ERR(vport->egress.legacy.drop_rule)) {
+		err = PTR_ERR(vport->egress.legacy.drop_rule);
 		esw_warn(esw->dev,
 			 "vport[%d] configure egress drop rule failed, err(%d)\n",
 			 vport->vport, err);
-		vport->egress.drop_rule = NULL;
+		vport->egress.legacy.drop_rule = NULL;
 	}
 out:
 	kvfree(spec);
@@ -1667,39 +1669,39 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
 	}
 }
 
-static void esw_vport_create_drop_counters(struct mlx5_vport *vport)
+static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
 {
 	struct mlx5_core_dev *dev = vport->dev;
 
 	if (MLX5_CAP_ESW_INGRESS_ACL(dev, flow_counter)) {
-		vport->ingress.drop_counter = mlx5_fc_create(dev, false);
-		if (IS_ERR(vport->ingress.drop_counter)) {
+		vport->ingress.legacy.drop_counter = mlx5_fc_create(dev, false);
+		if (IS_ERR(vport->ingress.legacy.drop_counter)) {
 			esw_warn(dev,
 				 "vport[%d] configure ingress drop rule counter failed\n",
 				 vport->vport);
-			vport->ingress.drop_counter = NULL;
+			vport->ingress.legacy.drop_counter = NULL;
 		}
 	}
 
 	if (MLX5_CAP_ESW_EGRESS_ACL(dev, flow_counter)) {
-		vport->egress.drop_counter = mlx5_fc_create(dev, false);
-		if (IS_ERR(vport->egress.drop_counter)) {
+		vport->egress.legacy.drop_counter = mlx5_fc_create(dev, false);
+		if (IS_ERR(vport->egress.legacy.drop_counter)) {
 			esw_warn(dev,
 				 "vport[%d] configure egress drop rule counter failed\n",
 				 vport->vport);
-			vport->egress.drop_counter = NULL;
+			vport->egress.legacy.drop_counter = NULL;
 		}
 	}
 }
 
-static void esw_vport_destroy_drop_counters(struct mlx5_vport *vport)
+static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
 {
 	struct mlx5_core_dev *dev = vport->dev;
 
-	if (vport->ingress.drop_counter)
-		mlx5_fc_destroy(dev, vport->ingress.drop_counter);
-	if (vport->egress.drop_counter)
-		mlx5_fc_destroy(dev, vport->egress.drop_counter);
+	if (vport->ingress.legacy.drop_counter)
+		mlx5_fc_destroy(dev, vport->ingress.legacy.drop_counter);
+	if (vport->egress.legacy.drop_counter)
+		mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
 }
 
 static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
@@ -1715,7 +1717,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 	/* Create steering drop counters for ingress and egress ACLs */
 	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
 	    esw->mode == MLX5_ESWITCH_LEGACY)
-		esw_vport_create_drop_counters(vport);
+		esw_legacy_vport_create_drop_counters(vport);
 
 	/* Restore old vport configuration */
 	esw_apply_vport_conf(esw, vport);
@@ -1775,7 +1777,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
 					      MLX5_VPORT_ADMIN_STATE_DOWN);
 		esw_vport_disable_egress_acl(esw, vport);
 		esw_vport_disable_ingress_acl(esw, vport);
-		esw_vport_destroy_drop_counters(vport);
+		esw_legacy_vport_destroy_drop_counters(vport);
 	}
 	esw->enabled_vports--;
 	mutex_unlock(&esw->state_lock);
@@ -2495,12 +2497,12 @@ static int mlx5_eswitch_query_vport_drop_stats(struct mlx5_core_dev *dev,
 	if (!vport->enabled || esw->mode != MLX5_ESWITCH_LEGACY)
 		return 0;
 
-	if (vport->egress.drop_counter)
-		mlx5_fc_query(dev, vport->egress.drop_counter,
+	if (vport->egress.legacy.drop_counter)
+		mlx5_fc_query(dev, vport->egress.legacy.drop_counter,
 			      &stats->rx_dropped, &bytes);
 
-	if (vport->ingress.drop_counter)
-		mlx5_fc_query(dev, vport->ingress.drop_counter,
+	if (vport->ingress.legacy.drop_counter)
+		mlx5_fc_query(dev, vport->ingress.legacy.drop_counter,
 			      &stats->tx_dropped, &bytes);
 
 	if (!MLX5_CAP_GEN(dev, receive_discard_vport_down) &&
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 5f862992b9c8..ec0ef7c5d539 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -70,8 +70,10 @@ struct vport_ingress {
 	struct mlx5_flow_group *allow_untagged_only_grp;
 	struct mlx5_flow_group *drop_grp;
 	struct mlx5_flow_handle  *allow_rule;
-	struct mlx5_flow_handle  *drop_rule;
-	struct mlx5_fc           *drop_counter;
+	struct {
+		struct mlx5_flow_handle *drop_rule;
+		struct mlx5_fc *drop_counter;
+	} legacy;
 	struct {
 		struct mlx5_modify_hdr *modify_metadata;
 		struct mlx5_flow_handle *modify_metadata_rule;
@@ -83,8 +85,10 @@ struct vport_egress {
 	struct mlx5_flow_group *allowed_vlans_grp;
 	struct mlx5_flow_group *drop_grp;
 	struct mlx5_flow_handle  *allowed_vlan;
-	struct mlx5_flow_handle  *drop_rule;
-	struct mlx5_fc           *drop_counter;
+	struct {
+		struct mlx5_flow_handle *drop_rule;
+		struct mlx5_fc *drop_counter;
+	} legacy;
 };
 
 struct mlx5_vport_drop_stats {
-- 
2.21.0


* [PATCH mlx5-next 09/18] net/mlx5: Tide up state_lock and vport enabled flag usage
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (7 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 08/18] net/mlx5: Move legacy drop counter and rule under legacy structure Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 10/18] net/mlx5: E-switch, Prepare code to handle vport enable error Saeed Mahameed
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

When the eswitch is disabled, the vport event handler is unregistered.
This unregistration already synchronizes with a running EQ event handler
via the code flow below.

mlx5_eswitch_disable()
  mlx5_eswitch_event_handlers_unregister()
    mlx5_eq_notifier_unregister()
      atomic_notifier_chain_unregister()
        synchronize_rcu()

notifier_callchain
  eswitch_vport_event()
    queue_work()

Additionally, the vport->enabled flag is set under state_lock during
esw_enable_vport(), but it is not read under state_lock in
(a) esw_disable_vport() and (b) eswitch_vport_event(), which runs in
atomic context.

It is also necessary to synchronize with already scheduled vport events.
This is already achieved by the sequence below.

mlx5_eswitch_event_handlers_unregister()
  [..]
  flush_workqueue()

Hence,
(a) Remove the vport->enabled check in eswitch_vport_event(), which
serves no purpose.
(b) Remove the redundant flush_workqueue() on every vport disable.
(c) Keep esw_disable_vport() symmetric with esw_enable_vport() with
respect to state_lock (see the sketch below).
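
For (c), the resulting shape of esw_disable_vport() (abridged from the diff
below):

	static void esw_disable_vport(struct mlx5_eswitch *esw,
				      struct mlx5_vport *vport)
	{
		mutex_lock(&esw->state_lock);
		if (!vport->enabled)
			goto done;

		/* Mark this vport as disabled to discard new events */
		vport->enabled = false;

		/* ... teardown now runs entirely under state_lock,
		 * matching esw_enable_vport() ...
		 */
	done:
		mutex_unlock(&esw->state_lock);
	}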

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 0dd5e5d5ea35..f15ffb8f5cac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1750,18 +1750,16 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
 {
 	u16 vport_num = vport->vport;
 
+	mutex_lock(&esw->state_lock);
 	if (!vport->enabled)
-		return;
+		goto done;
 
 	esw_debug(esw->dev, "Disabling vport(%d)\n", vport_num);
 	/* Mark this vport as disabled to discard new events */
 	vport->enabled = false;
 
-	/* Wait for current already scheduled events to complete */
-	flush_workqueue(esw->work_queue);
 	/* Disable events from this vport */
 	arm_vport_context_events_cmd(esw->dev, vport->vport, 0);
-	mutex_lock(&esw->state_lock);
 	/* We don't assume VFs will cleanup after themselves.
 	 * Calling vport change handler while vport is disabled will cleanup
 	 * the vport resources.
@@ -1780,6 +1778,8 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
 		esw_legacy_vport_destroy_drop_counters(vport);
 	}
 	esw->enabled_vports--;
+
+done:
 	mutex_unlock(&esw->state_lock);
 }
 
@@ -1793,12 +1793,8 @@ static int eswitch_vport_event(struct notifier_block *nb,
 
 	vport_num = be16_to_cpu(eqe->data.vport_change.vport_num);
 	vport = mlx5_eswitch_get_vport(esw, vport_num);
-	if (IS_ERR(vport))
-		return NOTIFY_OK;
-
-	if (vport->enabled)
+	if (!IS_ERR(vport))
 		queue_work(esw->work_queue, &vport->vport_change_handler);
-
 	return NOTIFY_OK;
 }
 
-- 
2.21.0


* [PATCH mlx5-next 10/18] net/mlx5: E-switch, Prepare code to handle vport enable error
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (8 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 09/18] net/mlx5: Tide up state_lock and vport enabled flag usage Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 11/18] net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs Saeed Mahameed
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

In a subsequent patch, esw_enable_vport() can fail and return an error.
Prepare the code to handle such an error (see the unwind sketch below).
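
The enable path now unwinds already-enabled vports on failure, roughly as
follows (abridged from the mlx5_eswitch_enable_pf_vf_vports() hunk below):

	vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
	ret = esw_enable_vport(esw, vport, enabled_events);
	if (ret)
		return ret;

	if (mlx5_ecpf_vport_exists(esw->dev)) {
		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
		ret = esw_enable_vport(esw, vport, enabled_events);
		if (ret)
			goto ecpf_err;
	}

	mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
		ret = esw_enable_vport(esw, vport, enabled_events);
		if (ret)
			goto vf_err;
	}
	return 0;

	/* the vf_err/ecpf_err labels disable the already-enabled vports in
	 * reverse order: VFs first, then the ECPF (if present), then the PF.
	 */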

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 62 ++++++++++++++-----
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  2 +-
 .../mellanox/mlx5/core/eswitch_offloads.c     |  5 +-
 3 files changed, 50 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index f15ffb8f5cac..0bdaef508e74 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -452,6 +452,13 @@ static int esw_create_legacy_table(struct mlx5_eswitch *esw)
 	return err;
 }
 
+static void esw_destroy_legacy_table(struct mlx5_eswitch *esw)
+{
+	esw_cleanup_vepa_rules(esw);
+	esw_destroy_legacy_fdb_table(esw);
+	esw_destroy_legacy_vepa_table(esw);
+}
+
 #define MLX5_LEGACY_SRIOV_VPORT_EVENTS (MLX5_VPORT_UC_ADDR_CHANGE | \
 					MLX5_VPORT_MC_ADDR_CHANGE | \
 					MLX5_VPORT_PROMISC_CHANGE)
@@ -464,15 +471,10 @@ static int esw_legacy_enable(struct mlx5_eswitch *esw)
 	if (ret)
 		return ret;
 
-	mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_LEGACY_SRIOV_VPORT_EVENTS);
-	return 0;
-}
-
-static void esw_destroy_legacy_table(struct mlx5_eswitch *esw)
-{
-	esw_cleanup_vepa_rules(esw);
-	esw_destroy_legacy_fdb_table(esw);
-	esw_destroy_legacy_vepa_table(esw);
+	ret = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_LEGACY_SRIOV_VPORT_EVENTS);
+	if (ret)
+		esw_destroy_legacy_table(esw);
+	return ret;
 }
 
 static void esw_legacy_disable(struct mlx5_eswitch *esw)
@@ -1704,8 +1706,8 @@ static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
 		mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
 }
 
-static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
-			     enum mlx5_eswitch_vport_event enabled_events)
+static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
+			    enum mlx5_eswitch_vport_event enabled_events)
 {
 	u16 vport_num = vport->vport;
 
@@ -1743,6 +1745,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 	esw->enabled_vports++;
 	esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
 	mutex_unlock(&esw->state_lock);
+	return 0;
 }
 
 static void esw_disable_vport(struct mlx5_eswitch *esw,
@@ -1856,26 +1859,51 @@ static void mlx5_eswitch_event_handlers_unregister(struct mlx5_eswitch *esw)
 /* mlx5_eswitch_enable_pf_vf_vports() enables vports of PF, ECPF and VFs
  * whichever are present on the eswitch.
  */
-void
+int
 mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
 				 enum mlx5_eswitch_vport_event enabled_events)
 {
 	struct mlx5_vport *vport;
+	int num_vfs;
+	int ret;
 	int i;
 
 	/* Enable PF vport */
 	vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
-	esw_enable_vport(esw, vport, enabled_events);
+	ret = esw_enable_vport(esw, vport, enabled_events);
+	if (ret)
+		return ret;
 
-	/* Enable ECPF vports */
+	/* Enable ECPF vport */
 	if (mlx5_ecpf_vport_exists(esw->dev)) {
 		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
-		esw_enable_vport(esw, vport, enabled_events);
+		ret = esw_enable_vport(esw, vport, enabled_events);
+		if (ret)
+			goto ecpf_err;
 	}
 
 	/* Enable VF vports */
-	mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
-		esw_enable_vport(esw, vport, enabled_events);
+	mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs) {
+		ret = esw_enable_vport(esw, vport, enabled_events);
+		if (ret)
+			goto vf_err;
+	}
+	return 0;
+
+vf_err:
+	num_vfs = i - 1;
+	mlx5_esw_for_each_vf_vport_reverse(esw, i, vport, num_vfs)
+		esw_disable_vport(esw, vport);
+
+	if (mlx5_ecpf_vport_exists(esw->dev)) {
+		vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF);
+		esw_disable_vport(esw, vport);
+	}
+
+ecpf_err:
+	vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
+	esw_disable_vport(esw, vport);
+	return ret;
 }
 
 /* mlx5_eswitch_disable_pf_vf_vports() disables vports of PF, ECPF and VFs
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index ec0ef7c5d539..36edee35f155 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -609,7 +609,7 @@ bool mlx5_eswitch_is_vf_vport(const struct mlx5_eswitch *esw, u16 vport_num);
 void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs);
 int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type, void *data);
 
-void
+int
 mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
 				 enum mlx5_eswitch_vport_event enabled_events);
 void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 874e70e3792a..98df1eeee873 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2139,7 +2139,9 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 	if (err)
 		goto err_vport_metadata;
 
-	mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE);
+	err = mlx5_eswitch_enable_pf_vf_vports(esw, MLX5_VPORT_UC_ADDR_CHANGE);
+	if (err)
+		goto err_vports;
 
 	err = esw_offloads_load_all_reps(esw);
 	if (err)
@@ -2152,6 +2154,7 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
 
 err_reps:
 	mlx5_eswitch_disable_pf_vf_vports(esw);
+err_vports:
 	esw_set_passing_vport_metadata(esw, false);
 err_vport_metadata:
 	esw_offloads_steering_cleanup(esw);
-- 
2.21.0


* [PATCH mlx5-next 11/18] net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (9 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 10/18] net/mlx5: E-switch, Prepare code to handle vport enable error Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 12/18] net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle Saeed Mahameed
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit

From: Parav Pandit <parav@mellanox.com>

Introduce and use per-vport ACL table creation and destroy APIs, so that a
subsequent patch can use them when enabling/disabling a vport in a unified
way for legacy vs. offloads mode (see the sketch below).
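
The new entry points dispatch per eswitch mode; patch 13 of this series
adds the offloads branch (taken from the diff below):

	static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
				       struct mlx5_vport *vport)
	{
		if (esw->mode == MLX5_ESWITCH_LEGACY)
			return esw_vport_create_legacy_acl_tables(esw, vport);

		return 0;
	}

	static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
					  struct mlx5_vport *vport)
	{
		if (esw->mode == MLX5_ESWITCH_LEGACY)
			esw_vport_destroy_legacy_acl_tables(esw, vport);
	}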

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 73 +++++++++++++++----
 1 file changed, 60 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 0bdaef508e74..47555e272dda 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1663,12 +1663,6 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
 		SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
 	modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan, vport->info.qos,
 			       flags);
-
-	/* Only legacy mode needs ACLs */
-	if (esw->mode == MLX5_ESWITCH_LEGACY) {
-		esw_vport_ingress_config(esw, vport);
-		esw_vport_egress_config(esw, vport);
-	}
 }
 
 static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
@@ -1706,10 +1700,59 @@ static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
 		mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
 }
 
+static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
+					      struct mlx5_vport *vport)
+{
+	int ret;
+
+	/* Only non manager vports need ACL in legacy mode */
+	if (mlx5_esw_is_manager_vport(esw, vport->vport))
+		return 0;
+
+	ret = esw_vport_ingress_config(esw, vport);
+	if (ret)
+		return ret;
+
+	ret = esw_vport_egress_config(esw, vport);
+	if (ret)
+		esw_vport_disable_ingress_acl(esw, vport);
+
+	return ret;
+}
+
+static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
+			       struct mlx5_vport *vport)
+{
+	if (esw->mode == MLX5_ESWITCH_LEGACY)
+		return esw_vport_create_legacy_acl_tables(esw, vport);
+
+	return 0;
+}
+
+static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
+						struct mlx5_vport *vport)
+
+{
+	if (mlx5_esw_is_manager_vport(esw, vport->vport))
+		return;
+
+	esw_vport_disable_egress_acl(esw, vport);
+	esw_vport_disable_ingress_acl(esw, vport);
+	esw_legacy_vport_destroy_drop_counters(vport);
+}
+
+static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
+				  struct mlx5_vport *vport)
+{
+	if (esw->mode == MLX5_ESWITCH_LEGACY)
+		esw_vport_destroy_legacy_acl_tables(esw, vport);
+}
+
 static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 			    enum mlx5_eswitch_vport_event enabled_events)
 {
 	u16 vport_num = vport->vport;
+	int ret;
 
 	mutex_lock(&esw->state_lock);
 	WARN_ON(vport->enabled);
@@ -1724,6 +1767,10 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 	/* Restore old vport configuration */
 	esw_apply_vport_conf(esw, vport);
 
+	ret = esw_vport_setup_acl(esw, vport);
+	if (ret)
+		goto done;
+
 	/* Attach vport to the eswitch rate limiter */
 	if (esw_vport_enable_qos(esw, vport, vport->info.max_rate,
 				 vport->qos.bw_share))
@@ -1744,8 +1791,9 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 
 	esw->enabled_vports++;
 	esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
+done:
 	mutex_unlock(&esw->state_lock);
-	return 0;
+	return ret;
 }
 
 static void esw_disable_vport(struct mlx5_eswitch *esw,
@@ -1770,16 +1818,15 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
 	esw_vport_change_handle_locked(vport);
 	vport->enabled_events = 0;
 	esw_vport_disable_qos(esw, vport);
-	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
-	    esw->mode == MLX5_ESWITCH_LEGACY) {
+
+	if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+	    esw->mode == MLX5_ESWITCH_LEGACY)
 		mlx5_modify_vport_admin_state(esw->dev,
 					      MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
 					      vport_num, 1,
 					      MLX5_VPORT_ADMIN_STATE_DOWN);
-		esw_vport_disable_egress_acl(esw, vport);
-		esw_vport_disable_ingress_acl(esw, vport);
-		esw_legacy_vport_destroy_drop_counters(vport);
-	}
+
+	esw_vport_cleanup_acl(esw, vport);
 	esw->enabled_vports--;
 
 done:
-- 
2.21.0


* [PATCH mlx5-next 12/18] net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (10 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 11/18] net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 13/18] net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs Saeed Mahameed
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

It is better to create/destroy the ACL-related drop counters where the
actual ACL drop rules are created/destroyed, so that the ACL
configuration is self-contained for both ingress and egress.
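
For illustration, the resulting legacy-mode setup order is roughly the
following (a condensed sketch of the diff below; the flow_counter
capability checks, IS_ERR() handling, warnings and NULL resets are
omitted here):

static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
                                              struct mlx5_vport *vport)
{
        int ret;

        if (mlx5_esw_is_manager_vport(esw, vport->vport))
                return 0;

        /* ingress drop counter is created right next to the ingress ACL
         * rules that reference it ...
         */
        vport->ingress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
        ret = esw_vport_ingress_config(esw, vport);
        if (ret)
                goto ingress_err;

        /* ... and the same pairing applies on the egress side */
        vport->egress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
        ret = esw_vport_egress_config(esw, vport);
        if (ret)
                goto egress_err;
        return 0;

egress_err:
        esw_vport_disable_ingress_acl(esw, vport);
        mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
ingress_err:
        mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
        return ret;
}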

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 74 +++++++++----------
 1 file changed, 35 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 47555e272dda..0e5113167739 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1665,58 +1665,55 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
 			       flags);
 }
 
-static void esw_legacy_vport_create_drop_counters(struct mlx5_vport *vport)
+static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
+					      struct mlx5_vport *vport)
 {
-	struct mlx5_core_dev *dev = vport->dev;
+	int ret;
 
-	if (MLX5_CAP_ESW_INGRESS_ACL(dev, flow_counter)) {
-		vport->ingress.legacy.drop_counter = mlx5_fc_create(dev, false);
+	/* Only non manager vports need ACL in legacy mode */
+	if (mlx5_esw_is_manager_vport(esw, vport->vport))
+		return 0;
+
+	if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+	    MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
+		vport->ingress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
 		if (IS_ERR(vport->ingress.legacy.drop_counter)) {
-			esw_warn(dev,
+			esw_warn(esw->dev,
 				 "vport[%d] configure ingress drop rule counter failed\n",
 				 vport->vport);
 			vport->ingress.legacy.drop_counter = NULL;
 		}
 	}
 
-	if (MLX5_CAP_ESW_EGRESS_ACL(dev, flow_counter)) {
-		vport->egress.legacy.drop_counter = mlx5_fc_create(dev, false);
+	ret = esw_vport_ingress_config(esw, vport);
+	if (ret)
+		goto ingress_err;
+
+	if (!mlx5_esw_is_manager_vport(esw, vport->vport) &&
+	    MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
+		vport->egress.legacy.drop_counter = mlx5_fc_create(esw->dev, false);
 		if (IS_ERR(vport->egress.legacy.drop_counter)) {
-			esw_warn(dev,
+			esw_warn(esw->dev,
 				 "vport[%d] configure egress drop rule counter failed\n",
 				 vport->vport);
 			vport->egress.legacy.drop_counter = NULL;
 		}
 	}
-}
-
-static void esw_legacy_vport_destroy_drop_counters(struct mlx5_vport *vport)
-{
-	struct mlx5_core_dev *dev = vport->dev;
-
-	if (vport->ingress.legacy.drop_counter)
-		mlx5_fc_destroy(dev, vport->ingress.legacy.drop_counter);
-	if (vport->egress.legacy.drop_counter)
-		mlx5_fc_destroy(dev, vport->egress.legacy.drop_counter);
-}
-
-static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
-					      struct mlx5_vport *vport)
-{
-	int ret;
-
-	/* Only non manager vports need ACL in legacy mode */
-	if (mlx5_esw_is_manager_vport(esw, vport->vport))
-		return 0;
-
-	ret = esw_vport_ingress_config(esw, vport);
-	if (ret)
-		return ret;
 
 	ret = esw_vport_egress_config(esw, vport);
 	if (ret)
-		esw_vport_disable_ingress_acl(esw, vport);
+		goto egress_err;
+
+	return 0;
 
+egress_err:
+	esw_vport_disable_ingress_acl(esw, vport);
+	mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+	vport->egress.legacy.drop_counter = NULL;
+
+ingress_err:
+	mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
+	vport->ingress.legacy.drop_counter = NULL;
 	return ret;
 }
 
@@ -1737,8 +1734,12 @@ static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
 		return;
 
 	esw_vport_disable_egress_acl(esw, vport);
+	mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+	vport->egress.legacy.drop_counter = NULL;
+
 	esw_vport_disable_ingress_acl(esw, vport);
-	esw_legacy_vport_destroy_drop_counters(vport);
+	mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
+	vport->ingress.legacy.drop_counter = NULL;
 }
 
 static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
@@ -1759,11 +1760,6 @@ static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 
 	esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
 
-	/* Create steering drop counters for ingress and egress ACLs */
-	if (!mlx5_esw_is_manager_vport(esw, vport_num) &&
-	    esw->mode == MLX5_ESWITCH_LEGACY)
-		esw_legacy_vport_create_drop_counters(vport);
-
 	/* Restore old vport configuration */
 	esw_apply_vport_conf(esw, vport);
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 13/18] net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (11 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 12/18] net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 14/18] net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport Saeed Mahameed
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit

From: Parav Pandit <parav@mellanox.com>

Introduce and use per-vport ACL table creation and destroy APIs, so that
a subsequent patch can use them when enabling/disabling a vport.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../mellanox/mlx5/core/eswitch_offloads.c     | 49 ++++++++++++-------
 1 file changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 98df1eeee873..94eb18ae33a4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1950,6 +1950,32 @@ esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
 	return true;
 }
 
+static int
+esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
+				     struct mlx5_vport *vport)
+{
+	int err;
+
+	err = esw_vport_ingress_config(esw, vport);
+	if (err)
+		return err;
+
+	if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
+		err = esw_vport_egress_config(esw, vport);
+		if (err)
+			esw_vport_disable_ingress_acl(esw, vport);
+	}
+	return err;
+}
+
+static void
+esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
+				      struct mlx5_vport *vport)
+{
+	esw_vport_disable_egress_acl(esw, vport);
+	esw_vport_disable_ingress_acl(esw, vport);
+}
+
 static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
 {
 	struct mlx5_vport *vport;
@@ -1960,15 +1986,9 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
 		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
 
 	mlx5_esw_for_all_vports(esw, i, vport) {
-		err = esw_vport_ingress_config(esw, vport);
+		err = esw_vport_create_offloads_acl_tables(esw, vport);
 		if (err)
-			goto err_ingress;
-
-		if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
-			err = esw_vport_egress_config(esw, vport);
-			if (err)
-				goto err_egress;
-		}
+			goto err_acl_table;
 	}
 
 	if (mlx5_eswitch_vport_match_metadata_enabled(esw))
@@ -1976,13 +1996,10 @@ static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
 
 	return 0;
 
-err_egress:
-	esw_vport_disable_ingress_acl(esw, vport);
-err_ingress:
+err_acl_table:
 	for (j = MLX5_VPORT_PF; j < i; j++) {
 		vport = &esw->vports[j];
-		esw_vport_disable_egress_acl(esw, vport);
-		esw_vport_disable_ingress_acl(esw, vport);
+		esw_vport_destroy_offloads_acl_tables(esw, vport);
 	}
 
 	return err;
@@ -1993,10 +2010,8 @@ static void esw_destroy_offloads_acl_tables(struct mlx5_eswitch *esw)
 	struct mlx5_vport *vport;
 	int i;
 
-	mlx5_esw_for_all_vports(esw, i, vport) {
-		esw_vport_disable_egress_acl(esw, vport);
-		esw_vport_disable_ingress_acl(esw, vport);
-	}
+	mlx5_esw_for_all_vports(esw, i, vport)
+		esw_vport_destroy_offloads_acl_tables(esw, vport);
 
 	esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
 }
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 14/18] net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (12 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 13/18] net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 15/18] net/mlx5: Restrict metadata disablement to offloads mode Saeed Mahameed
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Vu Pham, Parav Pandit

From: Vu Pham <vuhuong@mellanox.com>

Currently, legacy mode enables ACLs while enabling a vport, whereas
offloads mode enables ACLs only when moving to offloads mode.

Bring consistency to both modes by enabling/disabling ACLs when
enabling/disabling a vport.

This also eliminates creating an ingress ACL table on the unused ECPF
vport in offloads mode.
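
The end result can be summarized by the following sketch (condensed from
the diffs below; the vport-match-metadata flag handling in the uplink
helper is left out):

/* called from esw_enable_vport() in both modes */
static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
                               struct mlx5_vport *vport)
{
        if (esw->mode == MLX5_ESWITCH_LEGACY)
                return esw_vport_create_legacy_acl_tables(esw, vport);
        else
                return esw_vport_create_offloads_acl_tables(esw, vport);
}

/* offloads steering init no longer walks all vports, only the uplink */
static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
{
        struct mlx5_vport *vport;

        vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_UPLINK);
        return esw_vport_create_offloads_acl_tables(esw, vport);
}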

Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c |  6 ++-
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  7 ++++
 .../mellanox/mlx5/core/eswitch_offloads.c     | 42 ++++++-------------
 3 files changed, 24 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 0e5113167739..1ce6ae1c446e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1722,8 +1722,8 @@ static int esw_vport_setup_acl(struct mlx5_eswitch *esw,
 {
 	if (esw->mode == MLX5_ESWITCH_LEGACY)
 		return esw_vport_create_legacy_acl_tables(esw, vport);
-
-	return 0;
+	else
+		return esw_vport_create_offloads_acl_tables(esw, vport);
 }
 
 static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
@@ -1747,6 +1747,8 @@ static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
 {
 	if (esw->mode == MLX5_ESWITCH_LEGACY)
 		esw_vport_destroy_legacy_acl_tables(esw, vport);
+	else
+		esw_vport_destroy_offloads_acl_tables(esw, vport);
 }
 
 static int esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 36edee35f155..d926bdacbdcc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -614,6 +614,13 @@ mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
 				 enum mlx5_eswitch_vport_event enabled_events);
 void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw);
 
+int
+esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
+				     struct mlx5_vport *vport);
+void
+esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
+				      struct mlx5_vport *vport);
+
 #else  /* CONFIG_MLX5_ESWITCH */
 /* eswitch API stubs */
 static inline int  mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 94eb18ae33a4..ce30ead90617 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1950,7 +1950,7 @@ esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
 	return true;
 }
 
-static int
+int
 esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport)
 {
@@ -1968,7 +1968,7 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 	return err;
 }
 
-static void
+void
 esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
 				      struct mlx5_vport *vport)
 {
@@ -1976,43 +1976,27 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
 	esw_vport_disable_ingress_acl(esw, vport);
 }
 
-static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
+static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
 {
 	struct mlx5_vport *vport;
-	int i, j;
 	int err;
 
 	if (esw_check_vport_match_metadata_supported(esw))
 		esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
 
-	mlx5_esw_for_all_vports(esw, i, vport) {
-		err = esw_vport_create_offloads_acl_tables(esw, vport);
-		if (err)
-			goto err_acl_table;
-	}
-
-	if (mlx5_eswitch_vport_match_metadata_enabled(esw))
-		esw_info(esw->dev, "Use metadata reg_c as source vport to match\n");
-
-	return 0;
-
-err_acl_table:
-	for (j = MLX5_VPORT_PF; j < i; j++) {
-		vport = &esw->vports[j];
-		esw_vport_destroy_offloads_acl_tables(esw, vport);
-	}
-
+	vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_UPLINK);
+	err = esw_vport_create_offloads_acl_tables(esw, vport);
+	if (err)
+		esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
 	return err;
 }
 
-static void esw_destroy_offloads_acl_tables(struct mlx5_eswitch *esw)
+static void esw_destroy_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
 {
 	struct mlx5_vport *vport;
-	int i;
-
-	mlx5_esw_for_all_vports(esw, i, vport)
-		esw_vport_destroy_offloads_acl_tables(esw, vport);
 
+	vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_UPLINK);
+	esw_vport_destroy_offloads_acl_tables(esw, vport);
 	esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
 }
 
@@ -2030,7 +2014,7 @@ static int esw_offloads_steering_init(struct mlx5_eswitch *esw)
 	memset(&esw->fdb_table.offloads, 0, sizeof(struct offloads_fdb));
 	mutex_init(&esw->fdb_table.offloads.fdb_prio_lock);
 
-	err = esw_create_offloads_acl_tables(esw);
+	err = esw_create_uplink_offloads_acl_tables(esw);
 	if (err)
 		return err;
 
@@ -2055,7 +2039,7 @@ static int esw_offloads_steering_init(struct mlx5_eswitch *esw)
 	esw_destroy_offloads_fdb_tables(esw);
 
 create_fdb_err:
-	esw_destroy_offloads_acl_tables(esw);
+	esw_destroy_uplink_offloads_acl_tables(esw);
 
 	return err;
 }
@@ -2065,7 +2049,7 @@ static void esw_offloads_steering_cleanup(struct mlx5_eswitch *esw)
 	esw_destroy_vport_rx_group(esw);
 	esw_destroy_offloads_table(esw);
 	esw_destroy_offloads_fdb_tables(esw);
-	esw_destroy_offloads_acl_tables(esw);
+	esw_destroy_uplink_offloads_acl_tables(esw);
 }
 
 static void
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 15/18] net/mlx5: Restrict metadata disablement to offloads mode
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (13 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 14/18] net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 16/18] net/mlx5: Refactor ingress acl configuration Saeed Mahameed
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

Now that there is a clear separation of ACL setup/cleanup between legacy
and offloads modes, limit metadata disablement to offloads mode.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c        | 2 --
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h        | 2 --
 .../net/ethernet/mellanox/mlx5/core/eswitch_offloads.c   | 9 ++++++---
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 1ce6ae1c446e..61459c06f56c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1214,8 +1214,6 @@ void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
 		mlx5_del_flow_rules(vport->ingress.allow_rule);
 		vport->ingress.allow_rule = NULL;
 	}
-
-	esw_vport_del_ingress_acl_modify_metadata(esw, vport);
 }
 
 void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index d926bdacbdcc..aa3588446cba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -267,8 +267,6 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
 				  struct mlx5_vport *vport);
 void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
 				   struct mlx5_vport *vport);
-void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
-					       struct mlx5_vport *vport);
 int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
 			       u32 rate_mbps);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index ce30ead90617..b536c8fa0061 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1847,8 +1847,8 @@ static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	return err;
 }
 
-void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
-					       struct mlx5_vport *vport)
+static void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
+						      struct mlx5_vport *vport)
 {
 	if (vport->ingress.offloads.modify_metadata_rule) {
 		mlx5_del_flow_rules(vport->ingress.offloads.modify_metadata_rule);
@@ -1962,8 +1962,10 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 
 	if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
 		err = esw_vport_egress_config(esw, vport);
-		if (err)
+		if (err) {
+			esw_vport_del_ingress_acl_modify_metadata(esw, vport);
 			esw_vport_disable_ingress_acl(esw, vport);
+		}
 	}
 	return err;
 }
@@ -1973,6 +1975,7 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
 				      struct mlx5_vport *vport)
 {
 	esw_vport_disable_egress_acl(esw, vport);
+	esw_vport_del_ingress_acl_modify_metadata(esw, vport);
 	esw_vport_disable_ingress_acl(esw, vport);
 }
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 16/18] net/mlx5: Refactor ingress acl configuration
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (14 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 15/18] net/mlx5: Restrict metadata disablement to offloads mode Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 17/18] net/mlx5: E-switch, Enable metadata on own vport Saeed Mahameed
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit

From: Parav Pandit <parav@mellanox.com>

Drop, untagged, spoof check and untagged spoof check flow groups are
limited to legacy mode only.

Therefore, the following refactoring is done to
(a) improve code readability
(b) better split the code between legacy and offloads modes

1. Move the legacy flow groups under the legacy structure
2. Add a validity check before group deletion
3. Restrict the scope of esw_vport_disable_ingress_acl() to legacy mode
4. Rename esw_vport_enable_ingress_acl() to
esw_vport_create_ingress_acl_table() and limit its scope to
table creation
5. Introduce a legacy flow groups creation helper,
esw_vport_create_legacy_ingress_acl_groups(), and keep its scope to
legacy mode
6. Reduce the offloads ingress groups from 4 to just 1 metadata group
per vport (see the sketch after this list)
7. Remove redundant IS_ERR_OR_NULL checks, as entries are set to NULL on
free
8. Shorten error messages by removing the redundant 'E-Switch' prefix
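
As a rough sketch, the shared table helper ends up being used per mode
as follows (sizes and helper names taken from the diff below):

/* legacy: 4 groups (untagged+spoofchk, untagged-only, spoofchk-only,
 * drop), created by the new legacy-only groups helper
 */
err = esw_vport_create_ingress_acl_table(esw, vport, 4);
if (!err)
        err = esw_vport_create_legacy_ingress_acl_groups(esw, vport);

/* offloads: a single metadata group is enough */
err = esw_vport_create_ingress_acl_table(esw, vport, 1);
if (!err)
        err = esw_vport_create_ingress_acl_group(esw, vport);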

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 228 ++++++++++--------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  19 +-
 .../mellanox/mlx5/core/eswitch_offloads.c     |  67 ++++-
 3 files changed, 200 insertions(+), 114 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 61459c06f56c..cc8d43d8c469 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1070,57 +1070,21 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
 	vport->egress.acl = NULL;
 }
 
-int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
-				 struct mlx5_vport *vport)
+static int
+esw_vport_create_legacy_ingress_acl_groups(struct mlx5_eswitch *esw,
+					   struct mlx5_vport *vport)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
 	struct mlx5_core_dev *dev = esw->dev;
-	struct mlx5_flow_namespace *root_ns;
-	struct mlx5_flow_table *acl;
 	struct mlx5_flow_group *g;
 	void *match_criteria;
 	u32 *flow_group_in;
-	/* The ingress acl table contains 4 groups
-	 * (2 active rules at the same time -
-	 *      1 allow rule from one of the first 3 groups.
-	 *      1 drop rule from the last group):
-	 * 1)Allow untagged traffic with smac=original mac.
-	 * 2)Allow untagged traffic.
-	 * 3)Allow traffic with smac=original mac.
-	 * 4)Drop all other traffic.
-	 */
-	int table_size = 4;
-	int err = 0;
-
-	if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
-		return -EOPNOTSUPP;
-
-	if (!IS_ERR_OR_NULL(vport->ingress.acl))
-		return 0;
-
-	esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
-		  vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
-
-	root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
-			mlx5_eswitch_vport_num_to_index(esw, vport->vport));
-	if (!root_ns) {
-		esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n", vport->vport);
-		return -EOPNOTSUPP;
-	}
+	int err;
 
 	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
 	if (!flow_group_in)
 		return -ENOMEM;
 
-	acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
-	if (IS_ERR(acl)) {
-		err = PTR_ERR(acl);
-		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress flow Table, err(%d)\n",
-			 vport->vport, err);
-		goto out;
-	}
-	vport->ingress.acl = acl;
-
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
 
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1130,14 +1094,14 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
 
-	g = mlx5_create_flow_group(acl, flow_group_in);
+	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged spoofchk flow group, err(%d)\n",
+		esw_warn(dev, "vport[%d] ingress create untagged spoofchk flow group, err(%d)\n",
 			 vport->vport, err);
-		goto out;
+		goto spoof_err;
 	}
-	vport->ingress.allow_untagged_spoofchk_grp = g;
+	vport->ingress.legacy.allow_untagged_spoofchk_grp = g;
 
 	memset(flow_group_in, 0, inlen);
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1145,14 +1109,14 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
 
-	g = mlx5_create_flow_group(acl, flow_group_in);
+	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged flow group, err(%d)\n",
+		esw_warn(dev, "vport[%d] ingress create untagged flow group, err(%d)\n",
 			 vport->vport, err);
-		goto out;
+		goto untagged_err;
 	}
-	vport->ingress.allow_untagged_only_grp = g;
+	vport->ingress.legacy.allow_untagged_only_grp = g;
 
 	memset(flow_group_in, 0, inlen);
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
@@ -1161,80 +1125,134 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
 
-	g = mlx5_create_flow_group(acl, flow_group_in);
+	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress spoofchk flow group, err(%d)\n",
+		esw_warn(dev, "vport[%d] ingress create spoofchk flow group, err(%d)\n",
 			 vport->vport, err);
-		goto out;
+		goto allow_spoof_err;
 	}
-	vport->ingress.allow_spoofchk_only_grp = g;
+	vport->ingress.legacy.allow_spoofchk_only_grp = g;
 
 	memset(flow_group_in, 0, inlen);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
 
-	g = mlx5_create_flow_group(acl, flow_group_in);
+	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		esw_warn(dev, "Failed to create E-Switch vport[%d] ingress drop flow group, err(%d)\n",
+		esw_warn(dev, "vport[%d] ingress create drop flow group, err(%d)\n",
 			 vport->vport, err);
-		goto out;
+		goto drop_err;
 	}
-	vport->ingress.drop_grp = g;
+	vport->ingress.legacy.drop_grp = g;
+	kvfree(flow_group_in);
+	return 0;
 
-out:
-	if (err) {
-		if (!IS_ERR_OR_NULL(vport->ingress.allow_spoofchk_only_grp))
-			mlx5_destroy_flow_group(
-					vport->ingress.allow_spoofchk_only_grp);
-		if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_only_grp))
-			mlx5_destroy_flow_group(
-					vport->ingress.allow_untagged_only_grp);
-		if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_spoofchk_grp))
-			mlx5_destroy_flow_group(
-				vport->ingress.allow_untagged_spoofchk_grp);
-		if (!IS_ERR_OR_NULL(vport->ingress.acl))
-			mlx5_destroy_flow_table(vport->ingress.acl);
+drop_err:
+	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_spoofchk_only_grp)) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
+		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
 	}
-
+allow_spoof_err:
+	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_only_grp)) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
+		vport->ingress.legacy.allow_untagged_only_grp = NULL;
+	}
+untagged_err:
+	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_spoofchk_grp)) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
+		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
+	}
+spoof_err:
 	kvfree(flow_group_in);
 	return err;
 }
 
+int esw_vport_create_ingress_acl_table(struct mlx5_eswitch *esw,
+				       struct mlx5_vport *vport, int table_size)
+{
+	struct mlx5_core_dev *dev = esw->dev;
+	struct mlx5_flow_namespace *root_ns;
+	struct mlx5_flow_table *acl;
+	int vport_index;
+	int err;
+
+	if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
+		return -EOPNOTSUPP;
+
+	esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
+		  vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
+
+	vport_index = mlx5_eswitch_vport_num_to_index(esw, vport->vport);
+	root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+						    vport_index);
+	if (!root_ns) {
+		esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n",
+			 vport->vport);
+		return -EOPNOTSUPP;
+	}
+
+	acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+	if (IS_ERR(acl)) {
+		err = PTR_ERR(acl);
+		esw_warn(dev, "vport[%d] ingress create flow Table, err(%d)\n",
+			 vport->vport, err);
+		return err;
+	}
+	vport->ingress.acl = acl;
+	return 0;
+}
+
+void esw_vport_destroy_ingress_acl_table(struct mlx5_vport *vport)
+{
+	if (!vport->ingress.acl)
+		return;
+
+	mlx5_destroy_flow_table(vport->ingress.acl);
+	vport->ingress.acl = NULL;
+}
+
 void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport)
 {
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.drop_rule)) {
+	if (vport->ingress.legacy.drop_rule) {
 		mlx5_del_flow_rules(vport->ingress.legacy.drop_rule);
 		vport->ingress.legacy.drop_rule = NULL;
 	}
 
-	if (!IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
+	if (vport->ingress.allow_rule) {
 		mlx5_del_flow_rules(vport->ingress.allow_rule);
 		vport->ingress.allow_rule = NULL;
 	}
 }
 
-void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
-				   struct mlx5_vport *vport)
+static void esw_vport_disable_legacy_ingress_acl(struct mlx5_eswitch *esw,
+						 struct mlx5_vport *vport)
 {
-	if (IS_ERR_OR_NULL(vport->ingress.acl))
+	if (!vport->ingress.acl)
 		return;
 
 	esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
 
 	esw_vport_cleanup_ingress_rules(esw, vport);
-	mlx5_destroy_flow_group(vport->ingress.allow_spoofchk_only_grp);
-	mlx5_destroy_flow_group(vport->ingress.allow_untagged_only_grp);
-	mlx5_destroy_flow_group(vport->ingress.allow_untagged_spoofchk_grp);
-	mlx5_destroy_flow_group(vport->ingress.drop_grp);
-	mlx5_destroy_flow_table(vport->ingress.acl);
-	vport->ingress.acl = NULL;
-	vport->ingress.drop_grp = NULL;
-	vport->ingress.allow_spoofchk_only_grp = NULL;
-	vport->ingress.allow_untagged_only_grp = NULL;
-	vport->ingress.allow_untagged_spoofchk_grp = NULL;
+	if (vport->ingress.legacy.allow_spoofchk_only_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
+		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
+	}
+	if (vport->ingress.legacy.allow_untagged_only_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
+		vport->ingress.legacy.allow_untagged_only_grp = NULL;
+	}
+	if (vport->ingress.legacy.allow_untagged_spoofchk_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
+		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
+	}
+	if (vport->ingress.legacy.drop_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.drop_grp);
+		vport->ingress.legacy.drop_grp = NULL;
+	}
+	esw_vport_destroy_ingress_acl_table(vport);
 }
 
 static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
@@ -1249,19 +1267,36 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 	int err = 0;
 	u8 *smac_v;
 
+	/* The ingress acl table contains 4 groups
+	 * (2 active rules at the same time -
+	 *      1 allow rule from one of the first 3 groups.
+	 *      1 drop rule from the last group):
+	 * 1)Allow untagged traffic with smac=original mac.
+	 * 2)Allow untagged traffic.
+	 * 3)Allow traffic with smac=original mac.
+	 * 4)Drop all other traffic.
+	 */
+	int table_size = 4;
+
 	esw_vport_cleanup_ingress_rules(esw, vport);
 
 	if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk) {
-		esw_vport_disable_ingress_acl(esw, vport);
+		esw_vport_disable_legacy_ingress_acl(esw, vport);
 		return 0;
 	}
 
-	err = esw_vport_enable_ingress_acl(esw, vport);
-	if (err) {
-		mlx5_core_warn(esw->dev,
-			       "failed to enable ingress acl (%d) on vport[%d]\n",
-			       err, vport->vport);
-		return err;
+	if (!vport->ingress.acl) {
+		err = esw_vport_create_ingress_acl_table(esw, vport, table_size);
+		if (err) {
+			esw_warn(esw->dev,
+				 "vport[%d] enable ingress acl err (%d)\n",
+				 err, vport->vport);
+			return err;
+		}
+
+		err = esw_vport_create_legacy_ingress_acl_groups(esw, vport);
+		if (err)
+			goto out;
 	}
 
 	esw_debug(esw->dev,
@@ -1322,10 +1357,11 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		vport->ingress.legacy.drop_rule = NULL;
 		goto out;
 	}
+	kvfree(spec);
+	return 0;
 
 out:
-	if (err)
-		esw_vport_cleanup_ingress_rules(esw, vport);
+	esw_vport_disable_legacy_ingress_acl(esw, vport);
 	kvfree(spec);
 	return err;
 }
@@ -1705,7 +1741,7 @@ static int esw_vport_create_legacy_acl_tables(struct mlx5_eswitch *esw,
 	return 0;
 
 egress_err:
-	esw_vport_disable_ingress_acl(esw, vport);
+	esw_vport_disable_legacy_ingress_acl(esw, vport);
 	mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
 	vport->egress.legacy.drop_counter = NULL;
 
@@ -1735,7 +1771,7 @@ static void esw_vport_destroy_legacy_acl_tables(struct mlx5_eswitch *esw,
 	mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
 	vport->egress.legacy.drop_counter = NULL;
 
-	esw_vport_disable_ingress_acl(esw, vport);
+	esw_vport_disable_legacy_ingress_acl(esw, vport);
 	mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
 	vport->ingress.legacy.drop_counter = NULL;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index aa3588446cba..5e91735726b7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -65,16 +65,17 @@
 
 struct vport_ingress {
 	struct mlx5_flow_table *acl;
-	struct mlx5_flow_group *allow_untagged_spoofchk_grp;
-	struct mlx5_flow_group *allow_spoofchk_only_grp;
-	struct mlx5_flow_group *allow_untagged_only_grp;
-	struct mlx5_flow_group *drop_grp;
-	struct mlx5_flow_handle  *allow_rule;
+	struct mlx5_flow_handle *allow_rule;
 	struct {
+		struct mlx5_flow_group *allow_spoofchk_only_grp;
+		struct mlx5_flow_group *allow_untagged_spoofchk_grp;
+		struct mlx5_flow_group *allow_untagged_only_grp;
+		struct mlx5_flow_group *drop_grp;
 		struct mlx5_flow_handle *drop_rule;
 		struct mlx5_fc *drop_counter;
 	} legacy;
 	struct {
+		struct mlx5_flow_group *metadata_grp;
 		struct mlx5_modify_hdr *modify_metadata;
 		struct mlx5_flow_handle *modify_metadata_rule;
 	} offloads;
@@ -257,16 +258,16 @@ void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw);
 int esw_offloads_init_reps(struct mlx5_eswitch *esw);
 void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
 				     struct mlx5_vport *vport);
-int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
-				 struct mlx5_vport *vport);
+int esw_vport_create_ingress_acl_table(struct mlx5_eswitch *esw,
+				       struct mlx5_vport *vport,
+				       int table_size);
+void esw_vport_destroy_ingress_acl_table(struct mlx5_vport *vport);
 void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport);
 int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
 				struct mlx5_vport *vport);
 void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
 				  struct mlx5_vport *vport);
-void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
-				   struct mlx5_vport *vport);
 int mlx5_esw_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num,
 			       u32 rate_mbps);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index b536c8fa0061..807372a7211b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1858,6 +1858,44 @@ static void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
 	}
 }
 
+static int esw_vport_create_ingress_acl_group(struct mlx5_eswitch *esw,
+					      struct mlx5_vport *vport)
+{
+	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_group *g;
+	u32 *flow_group_in;
+	int ret = 0;
+
+	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+	if (!flow_group_in)
+		return -ENOMEM;
+
+	memset(flow_group_in, 0, inlen);
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
+	if (IS_ERR(g)) {
+		ret = PTR_ERR(g);
+		esw_warn(esw->dev,
+			 "Failed to create vport[%d] ingress metdata group, err(%d)\n",
+			 vport->vport, ret);
+		goto grp_err;
+	}
+	vport->ingress.offloads.metadata_grp = g;
+grp_err:
+	kvfree(flow_group_in);
+	return ret;
+}
+
+static void esw_vport_destroy_ingress_acl_group(struct mlx5_vport *vport)
+{
+	if (vport->ingress.offloads.metadata_grp) {
+		mlx5_destroy_flow_group(vport->ingress.offloads.metadata_grp);
+		vport->ingress.offloads.metadata_grp = NULL;
+	}
+}
+
 static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 				    struct mlx5_vport *vport)
 {
@@ -1868,8 +1906,7 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		return 0;
 
 	esw_vport_cleanup_ingress_rules(esw, vport);
-
-	err = esw_vport_enable_ingress_acl(esw, vport);
+	err = esw_vport_create_ingress_acl_table(esw, vport, 1);
 	if (err) {
 		esw_warn(esw->dev,
 			 "failed to enable ingress acl (%d) on vport[%d]\n",
@@ -1877,25 +1914,34 @@ static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
 		return err;
 	}
 
+	err = esw_vport_create_ingress_acl_group(esw, vport);
+	if (err)
+		goto group_err;
+
 	esw_debug(esw->dev,
 		  "vport[%d] configure ingress rules\n", vport->vport);
 
 	if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
 		err = esw_vport_add_ingress_acl_modify_metadata(esw, vport);
 		if (err)
-			goto out;
+			goto metadata_err;
 	}
 
 	if (MLX5_CAP_GEN(esw->dev, prio_tag_required) &&
 	    mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
 		err = esw_vport_ingress_prio_tag_config(esw, vport);
 		if (err)
-			goto out;
+			goto prio_tag_err;
 	}
+	return 0;
 
-out:
-	if (err)
-		esw_vport_disable_ingress_acl(esw, vport);
+prio_tag_err:
+	esw_vport_del_ingress_acl_modify_metadata(esw, vport);
+metadata_err:
+	esw_vport_cleanup_ingress_rules(esw, vport);
+	esw_vport_destroy_ingress_acl_group(vport);
+group_err:
+	esw_vport_destroy_ingress_acl_table(vport);
 	return err;
 }
 
@@ -1964,7 +2010,8 @@ esw_vport_create_offloads_acl_tables(struct mlx5_eswitch *esw,
 		err = esw_vport_egress_config(esw, vport);
 		if (err) {
 			esw_vport_del_ingress_acl_modify_metadata(esw, vport);
-			esw_vport_disable_ingress_acl(esw, vport);
+			esw_vport_cleanup_ingress_rules(esw, vport);
+			esw_vport_destroy_ingress_acl_table(vport);
 		}
 	}
 	return err;
@@ -1976,7 +2023,9 @@ esw_vport_destroy_offloads_acl_tables(struct mlx5_eswitch *esw,
 {
 	esw_vport_disable_egress_acl(esw, vport);
 	esw_vport_del_ingress_acl_modify_metadata(esw, vport);
-	esw_vport_disable_ingress_acl(esw, vport);
+	esw_vport_cleanup_ingress_rules(esw, vport);
+	esw_vport_destroy_ingress_acl_group(vport);
+	esw_vport_destroy_ingress_acl_table(vport);
 }
 
 static int esw_create_uplink_offloads_acl_tables(struct mlx5_eswitch *esw)
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 17/18] net/mlx5: E-switch, Enable metadata on own vport
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (15 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 16/18] net/mlx5: Refactor ingress acl configuration Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-10-28 23:35 ` [PATCH mlx5-next 18/18] IB/mlx5: Introduce and use mlx5_core_is_vf() Saeed Mahameed
  2019-11-01 21:41 ` [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit

From: Parav Pandit <parav@mellanox.com>

Currently, on the ECPF, metadata is enabled on the ECPF vport 0xfffe
(the manager vport).
Metadata, when supported, must be enabled on the eswitch's own vport,
which is the one used to pass metadata to the vport of the NIC Rx flow
table.

Due to this error, traffic tagged by the ingress ACL is not processed
correctly at the NIC Rx flow table level, which is supposed to match on
the metadata tag.

Hence, instead of working on the eswitch manager vport, always work on
the eswitch's own vport, regardless of PF or ECPF.

Given that mlx5_eswitch_query/modify_esw_vport_context() is used to
access another vport in legacy mode and the own vport's settings in
switchdev mode, extend the low-level API to explicitly specify
other_vport.
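
The call-site impact can be sketched as follows (arguments are taken
from the diff below):

/* switchdev mode: operate on the eswitch's own vport, so other_vport
 * is false and the vport number is left at 0
 */
err = mlx5_eswitch_query_esw_vport_context(esw->dev, 0, false,
                                           out, sizeof(out));

/* legacy cvlan programming still addresses another vport explicitly */
err = mlx5_eswitch_modify_esw_vport_context(dev, vport, true,
                                            in, sizeof(in));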

Fixes: c1286050cf47 ("net/mlx5: E-Switch, Pass metadata from FDB to eswitch manager")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 29 +++++++------------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  6 ++--
 .../mellanox/mlx5/core/eswitch_offloads.c     |  4 +--
 3 files changed, 16 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index cc8d43d8c469..24c2217a4ce8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -111,42 +111,32 @@ static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
 }
 
 /* E-Switch vport context HW commands */
-static int modify_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
-					void *in, int inlen)
+int mlx5_eswitch_modify_esw_vport_context(struct mlx5_core_dev *dev, u16 vport,
+					  bool other_vport,
+					  void *in, int inlen)
 {
 	u32 out[MLX5_ST_SZ_DW(modify_esw_vport_context_out)] = {0};
 
 	MLX5_SET(modify_esw_vport_context_in, in, opcode,
 		 MLX5_CMD_OP_MODIFY_ESW_VPORT_CONTEXT);
 	MLX5_SET(modify_esw_vport_context_in, in, vport_number, vport);
-	MLX5_SET(modify_esw_vport_context_in, in, other_vport, 1);
+	MLX5_SET(modify_esw_vport_context_in, in, other_vport, other_vport);
 	return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
 }
 
-int mlx5_eswitch_modify_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
-					  void *in, int inlen)
-{
-	return modify_esw_vport_context_cmd(esw->dev, vport, in, inlen);
-}
-
-static int query_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
-				       void *out, int outlen)
+int mlx5_eswitch_query_esw_vport_context(struct mlx5_core_dev *dev, u16 vport,
+					 bool other_vport,
+					 void *out, int outlen)
 {
 	u32 in[MLX5_ST_SZ_DW(query_esw_vport_context_in)] = {};
 
 	MLX5_SET(query_esw_vport_context_in, in, opcode,
 		 MLX5_CMD_OP_QUERY_ESW_VPORT_CONTEXT);
 	MLX5_SET(modify_esw_vport_context_in, in, vport_number, vport);
-	MLX5_SET(modify_esw_vport_context_in, in, other_vport, 1);
+	MLX5_SET(modify_esw_vport_context_in, in, other_vport, other_vport);
 	return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
 }
 
-int mlx5_eswitch_query_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
-					 void *out, int outlen)
-{
-	return query_esw_vport_context_cmd(esw->dev, vport, out, outlen);
-}
-
 static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
 				  u16 vlan, u8 qos, u8 set_flags)
 {
@@ -179,7 +169,8 @@ static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
 	MLX5_SET(modify_esw_vport_context_in, in,
 		 field_select.vport_cvlan_insert, 1);
 
-	return modify_esw_vport_context_cmd(dev, vport, in, sizeof(in));
+	return mlx5_eswitch_modify_esw_vport_context(dev, vport, true,
+						     in, sizeof(in));
 }
 
 /* E-Switch FDB */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 5e91735726b7..a05b948a6287 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -297,9 +297,11 @@ int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
 				 struct ifla_vf_stats *vf_stats);
 void mlx5_eswitch_del_send_to_vport_rule(struct mlx5_flow_handle *rule);
 
-int mlx5_eswitch_modify_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+int mlx5_eswitch_modify_esw_vport_context(struct mlx5_core_dev *dev, u16 vport,
+					  bool other_vport,
 					  void *in, int inlen);
-int mlx5_eswitch_query_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+int mlx5_eswitch_query_esw_vport_context(struct mlx5_core_dev *dev, u16 vport,
+					 bool other_vport,
 					 void *out, int outlen);
 
 struct mlx5_flow_spec;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 807372a7211b..59eebcae5df6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -600,7 +600,7 @@ static int esw_set_passing_vport_metadata(struct mlx5_eswitch *esw, bool enable)
 	if (!mlx5_eswitch_vport_match_metadata_enabled(esw))
 		return 0;
 
-	err = mlx5_eswitch_query_esw_vport_context(esw, esw->manager_vport,
+	err = mlx5_eswitch_query_esw_vport_context(esw->dev, 0, false,
 						   out, sizeof(out));
 	if (err)
 		return err;
@@ -619,7 +619,7 @@ static int esw_set_passing_vport_metadata(struct mlx5_eswitch *esw, bool enable)
 	MLX5_SET(modify_esw_vport_context_in, in,
 		 field_select.fdb_to_vport_reg_c_id, 1);
 
-	return mlx5_eswitch_modify_esw_vport_context(esw, esw->manager_vport,
+	return mlx5_eswitch_modify_esw_vport_context(esw->dev, 0, false,
 						     in, sizeof(in));
 }
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH mlx5-next 18/18] IB/mlx5: Introduce and use mlx5_core_is_vf()
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (16 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 17/18] net/mlx5: E-switch, Enable metadata on own vport Saeed Mahameed
@ 2019-10-28 23:35 ` Saeed Mahameed
  2019-11-01 21:41 ` [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-10-28 23:35 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky; +Cc: netdev, linux-rdma, Parav Pandit, Vu Pham

From: Parav Pandit <parav@mellanox.com>

Instead of deciding whether a given device is a virtual function based
on whether or not it is a PF, use the already defined MLX5_COREDEV_VF
type by introducing a helper API, mlx5_core_is_vf().

This enables clearly identifying PF, VF and non-virtual functions.
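
As a purely hypothetical illustration (the helper name below is not part
of this patch), the three coredev types can now be told apart instead of
inferring "VF" from "not a PF":

static const char *mlx5_coredev_kind(const struct mlx5_core_dev *dev)
{
        if (mlx5_core_is_pf(dev))
                return "PF";
        if (mlx5_core_is_vf(dev))
                return "VF";
        return "non-virtual function";
}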

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/infiniband/hw/mlx5/main.c | 2 +-
 include/linux/mlx5/driver.h       | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 831539419c30..8343a740c91e 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1031,7 +1031,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 	if (MLX5_CAP_GEN(mdev, cd))
 		props->device_cap_flags |= IB_DEVICE_CROSS_CHANNEL;
 
-	if (!mlx5_core_is_pf(mdev))
+	if (mlx5_core_is_vf(mdev))
 		props->device_cap_flags |= IB_DEVICE_VIRTUAL_FUNCTION;
 
 	if (mlx5_ib_port_link_layer(ibdev, 1) ==
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 3e80f03a387f..7b4801e96feb 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1121,6 +1121,11 @@ static inline bool mlx5_core_is_pf(const struct mlx5_core_dev *dev)
 	return dev->coredev_type == MLX5_COREDEV_PF;
 }
 
+static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev)
+{
+	return dev->coredev_type == MLX5_COREDEV_VF;
+}
+
 static inline bool mlx5_core_is_ecpf(struct mlx5_core_dev *dev)
 {
 	return dev->caps.embedded_cpu;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019
  2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
                   ` (17 preceding siblings ...)
  2019-10-28 23:35 ` [PATCH mlx5-next 18/18] IB/mlx5: Introduce and use mlx5_core_is_vf() Saeed Mahameed
@ 2019-11-01 21:41 ` Saeed Mahameed
  18 siblings, 0 replies; 20+ messages in thread
From: Saeed Mahameed @ 2019-11-01 21:41 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: netdev, linux-rdma

On Mon, 2019-10-28 at 23:34 +0000, Saeed Mahameed wrote:
> Hi, 
> 
> This series refactors and tide up eswitch vport and ACL code to
> provide
> better separation between eswitch legacy mode and switchdev mode 
> implementation in mlx5, for better future scalability and
> introduction of
> new switchdev mode functionality which might rely on shared
> structures
> with legacy mode, such as vport ACL tables.
> 
> Summary:
> 
> 1. Do vport ACL configuration on per vport basis when
> enabling/disabling a vport.
> This enables to have vports enabled/disabled outside of eswitch
> config for future.
> 
> 2. Split the code for legacy vs offloads mode and make it clear
> 
> 3. Tide up vport locking and workqueue usage
> 
> 4. Fix metadata enablement for ECPF
> 
> 5. Make explicit use of VF property to publish
> IB_DEVICE_VIRTUAL_FUNCTION
> 
> In case of no objection this series will be applied to mlx5-next
> branch
> and sent later as pull request to both rdma-next and net-next
> branches.
> 


Series applied to mlx5-next branch.
Thanks,
Saeed.


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2019-11-01 21:41 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-28 23:34 [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
2019-10-28 23:34 ` [PATCH mlx5-next 01/18] net/mlx5: Fixed a typo in a comment in esw_del_uc_addr() Saeed Mahameed
2019-10-28 23:34 ` [PATCH mlx5-next 02/18] net/mlx5: E-Switch, Rename egress config to generic name Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 03/18] net/mlx5: E-Switch, Rename ingress acl config in offloads mode Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 04/18] net/mlx5: E-switch, Introduce and use vlan rule config helper Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 05/18] net/mlx5: Introduce and use mlx5_esw_is_manager_vport() Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 06/18] net/mlx5: Correct comment for legacy fields Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 07/18] net/mlx5: Move metdata fields under offloads structure Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 08/18] net/mlx5: Move legacy drop counter and rule under legacy structure Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 09/18] net/mlx5: Tide up state_lock and vport enabled flag usage Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 10/18] net/mlx5: E-switch, Prepare code to handle vport enable error Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 11/18] net/mlx5: E-switch, Legacy introduce and use per vport acl tables APIs Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 12/18] net/mlx5: Move ACL drop counters life cycle close to ACL lifecycle Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 13/18] net/mlx5: E-switch, Offloads introduce and use per vport acl tables APIs Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 14/18] net/mlx5: E-switch, Offloads shift ACL programming during enable/disable vport Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 15/18] net/mlx5: Restrict metadata disablement to offloads mode Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 16/18] net/mlx5: Refactor ingress acl configuration Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 17/18] net/mlx5: E-switch, Enable metadata on own vport Saeed Mahameed
2019-10-28 23:35 ` [PATCH mlx5-next 18/18] IB/mlx5: Introduce and use mlx5_core_is_vf() Saeed Mahameed
2019-11-01 21:41 ` [PATCH mlx5-next 00/18] Mellanox, mlx5-next updates 28-10-2019 Saeed Mahameed
