* [pull request][net-next 00/15] mlx5 updates 2022-12-03
@ 2022-12-03 22:13 Saeed Mahameed
  2022-12-03 22:13 ` [net-next 01/15] net/mlx5e: E-Switch, handle flow attribute with no destinations Saeed Mahameed
                   ` (14 more replies)
  0 siblings, 15 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan

From: Saeed Mahameed <saeedm@nvidia.com>

Two updates to the mlx5 driver:
  1) Support tc police jump conform-exceed attribute
  2) Support 802.1ad in SRIOV VST

For more information, please see the tag log below.

Please pull and let me know if there is any problem.

Thanks,
Saeed.


The following changes since commit 65e6af6cebefbf7d8d8ac52b71cd251c2071ad00:

  net: ethernet: mtk_wed: fix sleep while atomic in mtk_wed_wo_queue_refill (2022-12-02 21:23:02 -0800)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git tags/mlx5-updates-2022-12-03

for you to fetch changes up to aab2e7ac34c563f9f580458956799e281cf3ad48:

  net/mlx5: SRIOV, Allow ingress tagged packets on VST (2022-12-03 14:10:44 -0800)

----------------------------------------------------------------
mlx5-updates-2022-12-03

1) Support tc police jump conform-exceed attribute

The tc police action's conform-exceed option defines how to handle
packets which conform to or exceed the configured bandwidth limit.
One of the possible conform-exceed values is jump, which skips over
a specified number of subsequent actions.
This series adds support for the conform-exceed jump action.
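
For illustration, a police rule using the jump control might be
configured from userspace as follows (device names and rates are
hypothetical; the exceed control comes first, then the conform control):

```shell
# Hypothetical example: conforming packets pipe to the mirred redirect;
# exceeding packets jump over one action (the redirect) and land on the
# drop action instead.
tc filter add dev eth0 ingress protocol ip flower \
    action police rate 100mbit burst 16k conform-exceed jump 1 / pipe \
    action mirred egress redirect dev eth1 \
    action drop
```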

The series adds platform support for branching actions by providing
true/false flow attributes to the branching action.
This is necessary for supporting police jump, as each branch may
execute a different action list.

The first five patches are preparation patches:
- Patches 1 and 2 add support for actions with no destinations (e.g. drop)
- Patch 3 refactors the code for subsequent function reuse
- Patch 4 defines an abstract way for identifying terminating actions
- Patch 5 updates the action list validation logic to account for branching actions

The next two patches introduce an interface for abstracting branching actions:
- Patch 6 introduces an abstract API for defining branching actions
- Patch 7 generically instantiates the branching flow attributes using the abstract API

Patch 8 adds platform support for jump actions by executing the following sequence:
  a. Store the jumping flow attr.
  b. Identify the jump target action while iterating over the action list.
  c. Instantiate a new flow attribute after the jump target action.
     This is the flow attribute that the branching action should jump to.
  d. Set the target post action id on:
    d.1. The jumping attribute, thus realizing the jump functionality.
    d.2. The attribute preceding the target jump attr, if not terminating.

The next patches apply the platform's branching attributes to the police action:
- Patch 9 is a refactor patch
- Patch 10 initializes the post meter table with the red/green flow attributes,
           as initialized by the platform
- Patch 11 enables the offload of meter actions using the jump conform-exceed value.

2) Support 802.1ad in SRIOV VST
  2.1) Refactor ACL table creation and layout to support the new vlan mode
  2.2) Implement 802.1ad VST when the device supports push vlan and pop vlan
    steering actions on vport ACLs. If the device doesn't support these
    steering actions, fall back to setting the eswitch vport context, which
    supports only 802.1q.
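
For reference, VST VLANs are configured on a VF via the ip tool; with
this series an 802.1ad S-VLAN protocol can be requested as well (device
and VF numbers below are illustrative):

```shell
# Hypothetical example: assign VF 0 a VST VLAN, first as the default
# 802.1Q C-VLAN, then as an 802.1ad S-VLAN (the latter requires device
# support for push/pop vlan steering actions on vport ACLs).
ip link set dev eth0 vf 0 vlan 100                  # 802.1Q VST
ip link set dev eth0 vf 0 vlan 100 proto 802.1ad    # 802.1ad VST
```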

----------------------------------------------------------------
Moshe Shemesh (4):
      net/mlx5: SRIOV, Remove two unused ingress flow group
      net/mlx5: SRIOV, Recreate egress ACL table on config change
      net/mlx5: SRIOV, Add 802.1ad VST support
      net/mlx5: SRIOV, Allow ingress tagged packets on VST

Oz Shlomo (11):
      net/mlx5e: E-Switch, handle flow attribute with no destinations
      net/mlx5: fs, assert null dest pointer when dest_num is 0
      net/mlx5e: TC, reuse flow attribute post parser processing
      net/mlx5e: TC, add terminating actions
      net/mlx5e: TC, validate action list per attribute
      net/mlx5e: TC, set control params for branching actions
      net/mlx5e: TC, initialize branch flow attributes
      net/mlx5e: TC, initialize branching action with target attr
      net/mlx5e: TC, rename post_meter actions
      net/mlx5e: TC, init post meter rules with branching attributes
      net/mlx5e: TC, allow meter jump control action

 .../ethernet/mellanox/mlx5/core/en/tc/act/accept.c |   1 +
 .../ethernet/mellanox/mlx5/core/en/tc/act/act.c    |   2 +-
 .../ethernet/mellanox/mlx5/core/en/tc/act/act.h    |  12 +
 .../ethernet/mellanox/mlx5/core/en/tc/act/drop.c   |   1 +
 .../ethernet/mellanox/mlx5/core/en/tc/act/goto.c   |   1 +
 .../ethernet/mellanox/mlx5/core/en/tc/act/mirred.c |   7 +
 .../mellanox/mlx5/core/en/tc/act/mirred_nic.c      |   1 +
 .../ethernet/mellanox/mlx5/core/en/tc/act/police.c |  66 +++-
 .../net/ethernet/mellanox/mlx5/core/en/tc/meter.c  |  24 +-
 .../net/ethernet/mellanox/mlx5/core/en/tc/meter.h  |   4 +-
 .../ethernet/mellanox/mlx5/core/en/tc/post_meter.c |  98 +++--
 .../ethernet/mellanox/mlx5/core/en/tc/post_meter.h |   8 +-
 .../net/ethernet/mellanox/mlx5/core/en/tc_priv.h   |   4 -
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c  |   5 +-
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c    | 428 +++++++++++++++------
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.h    |   4 +
 .../mellanox/mlx5/core/esw/acl/egress_lgcy.c       |  93 +++--
 .../mellanox/mlx5/core/esw/acl/egress_ofld.c       |   5 +-
 .../ethernet/mellanox/mlx5/core/esw/acl/helper.c   |  20 +-
 .../ethernet/mellanox/mlx5/core/esw/acl/helper.h   |   5 +-
 .../mellanox/mlx5/core/esw/acl/ingress_lgcy.c      | 218 +++++------
 .../net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h |   8 +
 .../net/ethernet/mellanox/mlx5/core/esw/legacy.c   |  12 +-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.c  |  46 ++-
 drivers/net/ethernet/mellanox/mlx5/core/eswitch.h  |  33 +-
 .../ethernet/mellanox/mlx5/core/eswitch_offloads.c |  15 +-
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c  |   3 +
 include/linux/mlx5/device.h                        |   7 +
 include/linux/mlx5/mlx5_ifc.h                      |   3 +-
 29 files changed, 761 insertions(+), 373 deletions(-)

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [net-next 01/15] net/mlx5e: E-Switch, handle flow attribute with no destinations
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 02/15] net/mlx5: fs, assert null dest pointer when dest_num is 0 Saeed Mahameed
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Rules with a drop action are not required to have a destination.
Currently the destination list is allocated with the maximum number of
destinations and passed to the fs_core layer along with the actual number
of destinations.

Remove the redundant passing of the dest pointer when the dest count is 0.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 9b6fbb19c22a..db1e282900aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -640,6 +640,11 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
 		goto err_esw_get;
 	}
 
+	if (!i) {
+		kfree(dest);
+		dest = NULL;
+	}
+
 	if (mlx5_eswitch_termtbl_required(esw, attr, &flow_act, spec))
 		rule = mlx5_eswitch_add_termtbl_rule(esw, fdb, spec, esw_attr,
 						     &flow_act, dest, i);
-- 
2.38.1



* [net-next 02/15] net/mlx5: fs, assert null dest pointer when dest_num is 0
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
  2022-12-03 22:13 ` [net-next 01/15] net/mlx5e: E-Switch, handle flow attribute with no destinations Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 03/15] net/mlx5e: TC, reuse flow attribute post parser processing Saeed Mahameed
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan, Mark Bloch

From: Oz Shlomo <ozsh@nvidia.com>

Currently create_flow_handle() assumes a null dest pointer when there
are no destinations.
This might not be the case as the caller may pass an allocated dest
array while setting the dest_num parameter to 0.

Assert null dest array for flow rules that have no destinations (e.g. drop
rule).

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index d53749248fa0..d53190f22871 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -1962,6 +1962,9 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
 	if (flow_act->fg && ft->autogroup.active)
 		return ERR_PTR(-EINVAL);
 
+	if (dest && dest_num <= 0)
+		return ERR_PTR(-EINVAL);
+
 	for (i = 0; i < dest_num; i++) {
 		if (!dest_is_valid(&dest[i], flow_act, ft))
 			return ERR_PTR(-EINVAL);
-- 
2.38.1



* [net-next 03/15] net/mlx5e: TC, reuse flow attribute post parser processing
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
  2022-12-03 22:13 ` [net-next 01/15] net/mlx5e: E-Switch, handle flow attribute with no destinations Saeed Mahameed
  2022-12-03 22:13 ` [net-next 02/15] net/mlx5: fs, assert null dest pointer when dest_num is 0 Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 04/15] net/mlx5e: TC, add terminating actions Saeed Mahameed
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

After the tc action parsing phase, the flow attribute is initialized with
relevant eswitch offload objects such as tunnel, vlan, header modify and
counter attributes. The post processing is done for both fdb and post-action
attributes.

Reuse the flow attribute post-parsing logic for both fdb and post-action
offloads.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 96 ++++++++++---------
 1 file changed, 51 insertions(+), 45 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 10d1609ece58..46222541e435 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -606,6 +606,12 @@ int mlx5e_get_flow_namespace(struct mlx5e_tc_flow *flow)
 		MLX5_FLOW_NAMESPACE_FDB : MLX5_FLOW_NAMESPACE_KERNEL;
 }
 
+static struct mlx5_core_dev *
+get_flow_counter_dev(struct mlx5e_tc_flow *flow)
+{
+	return mlx5e_is_eswitch_flow(flow) ? flow->attr->esw_attr->counter_dev : flow->priv->mdev;
+}
+
 static struct mod_hdr_tbl *
 get_mod_hdr_table(struct mlx5e_priv *priv, struct mlx5e_tc_flow *flow)
 {
@@ -1718,6 +1724,48 @@ clean_encap_dests(struct mlx5e_priv *priv,
 	}
 }
 
+static int
+post_process_attr(struct mlx5e_tc_flow *flow,
+		  struct mlx5_flow_attr *attr,
+		  bool is_post_act_attr,
+		  struct netlink_ext_ack *extack)
+{
+	struct mlx5_eswitch *esw = flow->priv->mdev->priv.eswitch;
+	bool vf_tun;
+	int err = 0;
+
+	err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun);
+	if (err)
+		goto err_out;
+
+	if (mlx5e_is_eswitch_flow(flow)) {
+		err = mlx5_eswitch_add_vlan_action(esw, attr);
+		if (err)
+			goto err_out;
+	}
+
+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		if (vf_tun || is_post_act_attr) {
+			err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr);
+			if (err)
+				goto err_out;
+		} else {
+			err = mlx5e_attach_mod_hdr(flow->priv, flow, attr->parse_attr);
+			if (err)
+				goto err_out;
+		}
+	}
+
+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
+		err = alloc_flow_attr_counter(get_flow_counter_dev(flow), attr);
+		if (err)
+			goto err_out;
+	}
+
+err_out:
+	return err;
+}
+
 static int
 mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
 		      struct mlx5e_tc_flow *flow,
@@ -1728,7 +1776,6 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
 	struct mlx5_flow_attr *attr = flow->attr;
 	struct mlx5_esw_flow_attr *esw_attr;
 	u32 max_prio, max_chain;
-	bool vf_tun;
 	int err = 0;
 
 	parse_attr = attr->parse_attr;
@@ -1818,32 +1865,10 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
 		esw_attr->int_port = int_port;
 	}
 
-	err = set_encap_dests(priv, flow, attr, extack, &vf_tun);
-	if (err)
-		goto err_out;
-
-	err = mlx5_eswitch_add_vlan_action(esw, attr);
+	err = post_process_attr(flow, attr, false, extack);
 	if (err)
 		goto err_out;
 
-	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
-		if (vf_tun) {
-			err = mlx5e_tc_add_flow_mod_hdr(priv, flow, attr);
-			if (err)
-				goto err_out;
-		} else {
-			err = mlx5e_attach_mod_hdr(priv, flow, parse_attr);
-			if (err)
-				goto err_out;
-		}
-	}
-
-	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
-		err = alloc_flow_attr_counter(esw_attr->counter_dev, attr);
-		if (err)
-			goto err_out;
-	}
-
 	/* we get here if one of the following takes place:
 	 * (1) there's no error
 	 * (2) there's an encap action and we don't have valid neigh
@@ -3639,12 +3664,6 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
 	return attr2;
 }
 
-static struct mlx5_core_dev *
-get_flow_counter_dev(struct mlx5e_tc_flow *flow)
-{
-	return mlx5e_is_eswitch_flow(flow) ? flow->attr->esw_attr->counter_dev : flow->priv->mdev;
-}
-
 struct mlx5_flow_attr *
 mlx5e_tc_get_encap_attr(struct mlx5e_tc_flow *flow)
 {
@@ -3754,7 +3773,6 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
 	struct mlx5e_post_act *post_act = get_post_action(flow->priv);
 	struct mlx5_flow_attr *attr, *next_attr = NULL;
 	struct mlx5e_post_act_handle *handle;
-	bool vf_tun;
 	int err;
 
 	/* This is going in reverse order as needed.
@@ -3776,26 +3794,14 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
 		if (list_is_last(&attr->list, &flow->attrs))
 			break;
 
-		err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun);
+		err = actions_prepare_mod_hdr_actions(flow->priv, flow, attr, extack);
 		if (err)
 			goto out_free;
 
-		err = actions_prepare_mod_hdr_actions(flow->priv, flow, attr, extack);
+		err = post_process_attr(flow, attr, true, extack);
 		if (err)
 			goto out_free;
 
-		if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
-			err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr);
-			if (err)
-				goto out_free;
-		}
-
-		if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
-			err = alloc_flow_attr_counter(get_flow_counter_dev(flow), attr);
-			if (err)
-				goto out_free;
-		}
-
 		handle = mlx5e_tc_post_act_add(post_act, attr);
 		if (IS_ERR(handle)) {
 			err = PTR_ERR(handle);
-- 
2.38.1



* [net-next 04/15] net/mlx5e: TC, add terminating actions
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (2 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 03/15] net/mlx5e: TC, reuse flow attribute post parser processing Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 05/15] net/mlx5e: TC, validate action list per attribute Saeed Mahameed
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Extend the act API to identify actions that terminate the action list.
This is a pre-step for terminating branching actions.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/accept.c | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c    | 2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h    | 3 +++
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/drop.c   | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/goto.c   | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c | 7 +++++++
 .../net/ethernet/mellanox/mlx5/core/en/tc/act/mirred_nic.c | 1 +
 7 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/accept.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/accept.c
index 21aab96357b5..a278f52d52b0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/accept.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/accept.c
@@ -28,4 +28,5 @@ tc_act_parse_accept(struct mlx5e_tc_act_parse_state *parse_state,
 struct mlx5e_tc_act mlx5e_tc_act_accept = {
 	.can_offload = tc_act_can_offload_accept,
 	.parse_action = tc_act_parse_accept,
+	.is_terminating_action = true,
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
index 3337241cfd84..eba0c8698926 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
@@ -11,7 +11,7 @@ static struct mlx5e_tc_act *tc_acts_fdb[NUM_FLOW_ACTIONS] = {
 	[FLOW_ACTION_DROP] = &mlx5e_tc_act_drop,
 	[FLOW_ACTION_TRAP] = &mlx5e_tc_act_trap,
 	[FLOW_ACTION_GOTO] = &mlx5e_tc_act_goto,
-	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_mirred,
+	[FLOW_ACTION_REDIRECT] = &mlx5e_tc_act_redirect,
 	[FLOW_ACTION_MIRRED] = &mlx5e_tc_act_mirred,
 	[FLOW_ACTION_REDIRECT_INGRESS] = &mlx5e_tc_act_redirect_ingress,
 	[FLOW_ACTION_VLAN_PUSH] = &mlx5e_tc_act_vlan,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
index e1570ff056ae..8ede44902284 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
@@ -60,6 +60,8 @@ struct mlx5e_tc_act {
 
 	int (*stats_action)(struct mlx5e_priv *priv,
 			    struct flow_offload_action *fl_act);
+
+	bool is_terminating_action;
 };
 
 struct mlx5e_tc_flow_action {
@@ -81,6 +83,7 @@ extern struct mlx5e_tc_act mlx5e_tc_act_vlan_mangle;
 extern struct mlx5e_tc_act mlx5e_tc_act_mpls_push;
 extern struct mlx5e_tc_act mlx5e_tc_act_mpls_pop;
 extern struct mlx5e_tc_act mlx5e_tc_act_mirred;
+extern struct mlx5e_tc_act mlx5e_tc_act_redirect;
 extern struct mlx5e_tc_act mlx5e_tc_act_mirred_nic;
 extern struct mlx5e_tc_act mlx5e_tc_act_ct;
 extern struct mlx5e_tc_act mlx5e_tc_act_sample;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/drop.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/drop.c
index dd025a95c439..7d16aeabb119 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/drop.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/drop.c
@@ -27,4 +27,5 @@ tc_act_parse_drop(struct mlx5e_tc_act_parse_state *parse_state,
 struct mlx5e_tc_act mlx5e_tc_act_drop = {
 	.can_offload = tc_act_can_offload_drop,
 	.parse_action = tc_act_parse_drop,
+	.is_terminating_action = true,
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/goto.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/goto.c
index 25174f68613e..0923e6db2d0a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/goto.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/goto.c
@@ -121,4 +121,5 @@ struct mlx5e_tc_act mlx5e_tc_act_goto = {
 	.can_offload = tc_act_can_offload_goto,
 	.parse_action = tc_act_parse_goto,
 	.post_parse = tc_act_post_parse_goto,
+	.is_terminating_action = true,
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
index 4ac7de3f6afa..78c427b38048 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred.c
@@ -334,4 +334,11 @@ tc_act_parse_mirred(struct mlx5e_tc_act_parse_state *parse_state,
 struct mlx5e_tc_act mlx5e_tc_act_mirred = {
 	.can_offload = tc_act_can_offload_mirred,
 	.parse_action = tc_act_parse_mirred,
+	.is_terminating_action = false,
+};
+
+struct mlx5e_tc_act mlx5e_tc_act_redirect = {
+	.can_offload = tc_act_can_offload_mirred,
+	.parse_action = tc_act_parse_mirred,
+	.is_terminating_action = true,
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred_nic.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred_nic.c
index 90b4c1b34776..7f409692b18f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred_nic.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/mirred_nic.c
@@ -48,4 +48,5 @@ tc_act_parse_mirred_nic(struct mlx5e_tc_act_parse_state *parse_state,
 struct mlx5e_tc_act mlx5e_tc_act_mirred_nic = {
 	.can_offload = tc_act_can_offload_mirred_nic,
 	.parse_action = tc_act_parse_mirred_nic,
+	.is_terminating_action = true,
 };
-- 
2.38.1



* [net-next 05/15] net/mlx5e: TC, validate action list per attribute
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (3 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 04/15] net/mlx5e: TC, add terminating actions Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 06/15] net/mlx5e: TC, set control params for branching actions Saeed Mahameed
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Currently the entire flow action list is validated for offload limitations.
For example, flows with both forward and drop actions are declared invalid
due to hardware restrictions.
However, a multi-table hardware model changes the limitations from a flow
scope to a single flow attribute scope.

Apply offload limitations to flow attributes instead of the entire flow.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 62 ++++++++++---------
 1 file changed, 32 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 46222541e435..7eaf6c73b091 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1724,6 +1724,30 @@ clean_encap_dests(struct mlx5e_priv *priv,
 	}
 }
 
+static int
+verify_attr_actions(u32 actions, struct netlink_ext_ack *extack)
+{
+	if (!(actions &
+	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
+		NL_SET_ERR_MSG_MOD(extack, "Rule must have at least one forward/drop action");
+		return -EOPNOTSUPP;
+	}
+
+	if (!(~actions &
+	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
+		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
+		return -EOPNOTSUPP;
+	}
+
+	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
+	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
+		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 static int
 post_process_attr(struct mlx5e_tc_flow *flow,
 		  struct mlx5_flow_attr *attr,
@@ -1734,6 +1758,10 @@ post_process_attr(struct mlx5e_tc_flow *flow,
 	bool vf_tun;
 	int err = 0;
 
+	err = verify_attr_actions(attr->action, extack);
+	if (err)
+		goto err_out;
+
 	err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun);
 	if (err)
 		goto err_out;
@@ -3532,36 +3560,6 @@ actions_match_supported(struct mlx5e_priv *priv,
 	ct_clear = flow->attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR;
 	ct_flow = flow_flag_test(flow, CT) && !ct_clear;
 
-	if (!(actions &
-	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
-		NL_SET_ERR_MSG_MOD(extack, "Rule must have at least one forward/drop action");
-		return false;
-	}
-
-	if (!(~actions &
-	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
-		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
-		return false;
-	}
-
-	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
-	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
-		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
-		return false;
-	}
-
-	if (!(~actions &
-	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
-		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
-		return false;
-	}
-
-	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
-	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
-		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
-		return false;
-	}
-
 	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
 	    !modify_header_match_supported(priv, &parse_attr->spec, flow_action,
 					   actions, ct_flow, ct_clear, extack))
@@ -3957,6 +3955,10 @@ parse_tc_nic_actions(struct mlx5e_priv *priv,
 	if (err)
 		return err;
 
+	err = verify_attr_actions(attr->action, extack);
+	if (err)
+		return err;
+
 	if (!actions_match_supported(priv, flow_action, parse_state->actions,
 				     parse_attr, flow, extack))
 		return -EOPNOTSUPP;
-- 
2.38.1



* [net-next 06/15] net/mlx5e: TC, set control params for branching actions
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (4 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 05/15] net/mlx5e: TC, validate action list per attribute Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 07/15] net/mlx5e: TC, initialize branch flow attributes Saeed Mahameed
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Extend the tc act API to set the branch control params, aligning with
the police conform/exceed use case.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../ethernet/mellanox/mlx5/core/en/tc/act/act.h    |  9 +++++++++
 .../ethernet/mellanox/mlx5/core/en/tc/act/police.c | 14 ++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
index 8ede44902284..8346557eeaf6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
@@ -32,6 +32,11 @@ struct mlx5e_tc_act_parse_state {
 	struct mlx5_tc_ct_priv *ct_priv;
 };
 
+struct mlx5e_tc_act_branch_ctrl {
+	enum flow_action_id act_id;
+	u32 extval;
+};
+
 struct mlx5e_tc_act {
 	bool (*can_offload)(struct mlx5e_tc_act_parse_state *parse_state,
 			    const struct flow_action_entry *act,
@@ -61,6 +66,10 @@ struct mlx5e_tc_act {
 	int (*stats_action)(struct mlx5e_priv *priv,
 			    struct flow_offload_action *fl_act);
 
+	bool (*get_branch_ctrl)(const struct flow_action_entry *act,
+				struct mlx5e_tc_act_branch_ctrl *cond_true,
+				struct mlx5e_tc_act_branch_ctrl *cond_false);
+
 	bool is_terminating_action;
 };
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
index c8e5ca65bb6e..81aa4185c64a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
@@ -147,6 +147,19 @@ tc_act_police_stats(struct mlx5e_priv *priv,
 	return 0;
 }
 
+static bool
+tc_act_police_get_branch_ctrl(const struct flow_action_entry *act,
+			      struct mlx5e_tc_act_branch_ctrl *cond_true,
+			      struct mlx5e_tc_act_branch_ctrl *cond_false)
+{
+	cond_true->act_id = act->police.notexceed.act_id;
+	cond_true->extval = act->police.notexceed.extval;
+
+	cond_false->act_id = act->police.exceed.act_id;
+	cond_false->extval = act->police.exceed.extval;
+	return true;
+}
+
 struct mlx5e_tc_act mlx5e_tc_act_police = {
 	.can_offload = tc_act_can_offload_police,
 	.parse_action = tc_act_parse_police,
@@ -154,4 +167,5 @@ struct mlx5e_tc_act mlx5e_tc_act_police = {
 	.offload_action = tc_act_police_offload,
 	.destroy_action = tc_act_police_destroy,
 	.stats_action = tc_act_police_stats,
+	.get_branch_ctrl = tc_act_police_get_branch_ctrl,
 };
-- 
2.38.1



* [net-next 07/15] net/mlx5e: TC, initialize branch flow attributes
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (5 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 06/15] net/mlx5e: TC, set control params for branching actions Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 08/15] net/mlx5e: TC, initialize branching action with target attr Saeed Mahameed
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Initialize flow attribute for drop, accept, pipe and jump branching actions.

Instantiate a flow attribute instance according to the specified branch
control action. Store the branching attributes on the branching action
flow attribute during the parsing phase. Then, during the offload phase,
allocate the relevant mod header objects to the branching actions.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 156 ++++++++++++++++--
 .../net/ethernet/mellanox/mlx5/core/en_tc.h   |   2 +
 2 files changed, 142 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 7eaf6c73b091..6bd71a85d56d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -160,6 +160,7 @@ static struct lock_class_key tc_ht_lock_key;
 
 static void mlx5e_put_flow_tunnel_id(struct mlx5e_tc_flow *flow);
 static void free_flow_post_acts(struct mlx5e_tc_flow *flow);
+static void mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr);
 
 void
 mlx5e_tc_match_to_reg_match(struct mlx5_flow_spec *spec,
@@ -1784,6 +1785,20 @@ post_process_attr(struct mlx5e_tc_flow *flow,
 		}
 	}
 
+	if (attr->branch_true &&
+	    attr->branch_true->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_true);
+		if (err)
+			goto err_out;
+	}
+
+	if (attr->branch_false &&
+	    attr->branch_false->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		err = mlx5e_tc_add_flow_mod_hdr(flow->priv, flow, attr->branch_false);
+		if (err)
+			goto err_out;
+	}
+
 	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
 		err = alloc_flow_attr_counter(get_flow_counter_dev(flow), attr);
 		if (err)
@@ -1932,6 +1947,16 @@ static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
 	return !!geneve_tlv_opt_0_data;
 }
 
+static void free_branch_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
+{
+	if (!attr)
+		return;
+
+	mlx5_free_flow_attr(flow, attr);
+	kvfree(attr->parse_attr);
+	kfree(attr);
+}
+
 static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
 				  struct mlx5e_tc_flow *flow)
 {
@@ -1987,6 +2012,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
 		mlx5e_detach_decap(priv, flow);
 
 	free_flow_post_acts(flow);
+	free_branch_attr(flow, attr->branch_true);
+	free_branch_attr(flow, attr->branch_false);
 
 	if (flow->attr->lag.count)
 		mlx5_lag_del_mpesw_rule(esw->dev);
@@ -3659,6 +3686,8 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
 		attr2->esw_attr->split_count = 0;
 	}
 
+	attr2->branch_true = NULL;
+	attr2->branch_false = NULL;
 	return attr2;
 }
 
@@ -3697,28 +3726,15 @@ mlx5e_tc_unoffload_flow_post_acts(struct mlx5e_tc_flow *flow)
 static void
 free_flow_post_acts(struct mlx5e_tc_flow *flow)
 {
-	struct mlx5_core_dev *counter_dev = get_flow_counter_dev(flow);
-	struct mlx5e_post_act *post_act = get_post_action(flow->priv);
 	struct mlx5_flow_attr *attr, *tmp;
-	bool vf_tun;
 
 	list_for_each_entry_safe(attr, tmp, &flow->attrs, list) {
 		if (list_is_last(&attr->list, &flow->attrs))
 			break;
 
-		if (attr->post_act_handle)
-			mlx5e_tc_post_act_del(post_act, attr->post_act_handle);
-
-		clean_encap_dests(flow->priv, flow, attr, &vf_tun);
-
-		if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
-			mlx5_fc_destroy(counter_dev, attr->counter);
-
-		if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
-			mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
-			if (attr->modify_hdr)
-				mlx5_modify_header_dealloc(flow->priv->mdev, attr->modify_hdr);
-		}
+		mlx5_free_flow_attr(flow, attr);
+		free_branch_attr(flow, attr->branch_true);
+		free_branch_attr(flow, attr->branch_false);
 
 		list_del(&attr->list);
 		kvfree(attr->parse_attr);
@@ -3825,6 +3841,86 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
 	return err;
 }
 
+static int
+alloc_branch_attr(struct mlx5e_tc_flow *flow,
+		  struct mlx5e_tc_act_branch_ctrl *cond,
+		  struct mlx5_flow_attr **cond_attr,
+		  u32 *jump_count,
+		  struct netlink_ext_ack *extack)
+{
+	struct mlx5_flow_attr *attr;
+	int err = 0;
+
+	*cond_attr = mlx5e_clone_flow_attr_for_post_act(flow->attr,
+							mlx5e_get_flow_namespace(flow));
+	if (!(*cond_attr))
+		return -ENOMEM;
+
+	attr = *cond_attr;
+
+	switch (cond->act_id) {
+	case FLOW_ACTION_DROP:
+		attr->action |= MLX5_FLOW_CONTEXT_ACTION_DROP;
+		break;
+	case FLOW_ACTION_ACCEPT:
+	case FLOW_ACTION_PIPE:
+		attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+		attr->dest_ft = mlx5e_tc_post_act_get_ft(get_post_action(flow->priv));
+		break;
+	case FLOW_ACTION_JUMP:
+		if (*jump_count) {
+			NL_SET_ERR_MSG_MOD(extack, "Cannot offload flows with nested jumps");
+			err = -EOPNOTSUPP;
+			goto out_err;
+		}
+		*jump_count = cond->extval;
+		attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+		attr->dest_ft = mlx5e_tc_post_act_get_ft(get_post_action(flow->priv));
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		goto out_err;
+	}
+
+	return err;
+out_err:
+	kfree(*cond_attr);
+	*cond_attr = NULL;
+	return err;
+}
+
+static int
+parse_branch_ctrl(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act,
+		  struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr,
+		  struct netlink_ext_ack *extack)
+{
+	struct mlx5e_tc_act_branch_ctrl cond_true, cond_false;
+	u32 jump_count;
+	int err;
+
+	if (!tc_act->get_branch_ctrl)
+		return 0;
+
+	tc_act->get_branch_ctrl(act, &cond_true, &cond_false);
+
+	err = alloc_branch_attr(flow, &cond_true,
+				&attr->branch_true, &jump_count, extack);
+	if (err)
+		goto out_err;
+
+	err = alloc_branch_attr(flow, &cond_false,
+				&attr->branch_false, &jump_count, extack);
+	if (err)
+		goto err_branch_false;
+
+	return 0;
+
+err_branch_false:
+	free_branch_attr(flow, attr->branch_true);
+out_err:
+	return err;
+}
+
 static int
 parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
 		 struct flow_action *flow_action)
@@ -3868,6 +3964,10 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
 		if (err)
 			goto out_free;
 
+		err = parse_branch_ctrl(act, tc_act, flow, attr, extack);
+		if (err)
+			goto out_free;
+
 		parse_state->actions |= attr->action;
 
 		/* Split attr for multi table act if not the last act. */
@@ -4196,6 +4296,30 @@ mlx5_alloc_flow_attr(enum mlx5_flow_namespace_type type)
 	return attr;
 }
 
+static void
+mlx5_free_flow_attr(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr)
+{
+	struct mlx5_core_dev *counter_dev = get_flow_counter_dev(flow);
+	bool vf_tun;
+
+	if (!attr)
+		return;
+
+	if (attr->post_act_handle)
+		mlx5e_tc_post_act_del(get_post_action(flow->priv), attr->post_act_handle);
+
+	clean_encap_dests(flow->priv, flow, attr, &vf_tun);
+
+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT)
+		mlx5_fc_destroy(counter_dev, attr->counter);
+
+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) {
+		mlx5e_mod_hdr_dealloc(&attr->parse_attr->mod_hdr_acts);
+		if (attr->modify_hdr)
+			mlx5_modify_header_dealloc(flow->priv->mdev, attr->modify_hdr);
+	}
+}
+
 static int
 mlx5e_alloc_flow(struct mlx5e_priv *priv, int attr_size,
 		 struct flow_cls_offload *f, unsigned long flow_flags,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index 0db41fa4a9a6..cee88f7dd50f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -95,6 +95,8 @@ struct mlx5_flow_attr {
 		 */
 		bool count;
 	} lag;
+	struct mlx5_flow_attr *branch_true;
+	struct mlx5_flow_attr *branch_false;
 	/* keep this union last */
 	union {
 		DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr);
-- 
2.38.1



* [net-next 08/15] net/mlx5e: TC, initialize branching action with target attr
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (6 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 07/15] net/mlx5e: TC, initialize branch flow attributes Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 09/15] net/mlx5e: TC, rename post_meter actions Saeed Mahameed
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Identify the jump target action while iterating the action list.
During the parsing phase, store the jumping attribute on the jump
target attribute. Then, during the offload phase, initialize the
jumping attribute's post action with the target.
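The jump bookkeeping added in this patch can be modeled standalone. The
struct and field names below mirror the patch's mlx5e_tc_jump_state, but
this is a simplified sketch under two assumptions from the commit: the count
ticks down once per distinct tc action (a single tc action may expand to
several offload actions, so (id, index) pairs are deduplicated), and the
attribute list is split when the count reaches one.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for mlx5e_tc_jump_state. */
struct jump_state {
	unsigned int jump_count;
	bool jump_target;	/* split the attr after this action */
	int last_id;
	unsigned int last_index;
};

static void dec_jump_count(struct jump_state *js, int id, unsigned int index)
{
	if (!js->jump_count)
		return;

	/* Same tc action as last time: already counted. */
	if (id == js->last_id && index == js->last_index)
		return;

	js->last_id = id;
	js->last_index = index;

	if (--js->jump_count > 1)
		return;			/* intermediate jumped-over action */

	if (js->jump_count == 1)
		js->jump_target = true;	/* last action in the jump list */
	/* jump_count == 0: first action after the jump list; the patch
	 * links it back to the branch via jumping_attr at this point.
	 */
}
```

When jump_target is set, parse_tc_actions() splits the attribute, which is
what lets the jumped-over actions live in their own post-act rule.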

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 86 +++++++++++++++++--
 .../net/ethernet/mellanox/mlx5/core/en_tc.h   |  2 +
 2 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 6bd71a85d56d..d52f8601fef4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -132,6 +132,15 @@ struct mlx5e_tc_attr_to_reg_mapping mlx5e_tc_attr_to_reg_mappings[] = {
 	[PACKET_COLOR_TO_REG] = packet_color_to_reg,
 };
 
+struct mlx5e_tc_jump_state {
+	u32 jump_count;
+	bool jump_target;
+	struct mlx5_flow_attr *jumping_attr;
+
+	enum flow_action_id last_id;
+	u32 last_index;
+};
+
 struct mlx5e_tc_table *mlx5e_tc_table_alloc(void)
 {
 	struct mlx5e_tc_table *tc;
@@ -3688,6 +3697,7 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
 
 	attr2->branch_true = NULL;
 	attr2->branch_false = NULL;
+	attr2->jumping_attr = NULL;
 	return attr2;
 }
 
@@ -3796,7 +3806,9 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
 		if (!next_attr) {
 			/* Set counter action on last post act rule. */
 			attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
-		} else {
+		}
+
+		if (next_attr && !(attr->flags & MLX5_ATTR_FLAG_TERMINATING)) {
 			err = mlx5e_tc_act_set_next_post_act(flow, attr, next_attr);
 			if (err)
 				goto out_free;
@@ -3823,6 +3835,13 @@ alloc_flow_post_acts(struct mlx5e_tc_flow *flow, struct netlink_ext_ack *extack)
 		}
 
 		attr->post_act_handle = handle;
+
+		if (attr->jumping_attr) {
+			err = mlx5e_tc_act_set_next_post_act(flow, attr->jumping_attr, attr);
+			if (err)
+				goto out_free;
+		}
+
 		next_attr = attr;
 	}
 
@@ -3889,13 +3908,58 @@ alloc_branch_attr(struct mlx5e_tc_flow *flow,
 	return err;
 }
 
+static void
+dec_jump_count(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act,
+	       struct mlx5_flow_attr *attr, struct mlx5e_priv *priv,
+	       struct mlx5e_tc_jump_state *jump_state)
+{
+	if (!jump_state->jump_count)
+		return;
+
+	/* Single tc action can instantiate multiple offload actions (e.g. pedit)
+	 * Jump only over a tc action
+	 */
+	if (act->id == jump_state->last_id && act->hw_index == jump_state->last_index)
+		return;
+
+	jump_state->last_id = act->id;
+	jump_state->last_index = act->hw_index;
+
+	/* nothing to do for intermediate actions */
+	if (--jump_state->jump_count > 1)
+		return;
+
+	if (jump_state->jump_count == 1) { /* last action in the jump action list */
+
+		/* create a new attribute after this action */
+		jump_state->jump_target = true;
+
+		if (tc_act->is_terminating_action) { /* the branch ends here */
+			attr->flags |= MLX5_ATTR_FLAG_TERMINATING;
+			attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+		} else { /* the branch continues executing the rest of the actions */
+			struct mlx5e_post_act *post_act;
+
+			attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+			post_act = get_post_action(priv);
+			attr->dest_ft = mlx5e_tc_post_act_get_ft(post_act);
+		}
+	} else if (jump_state->jump_count == 0) { /* first attr after the jump action list */
+		/* This is the post action for the jumping attribute (either red or green)
+		 * Use the stored jumping_attr to set the post act id on the jumping attribute
+		 */
+		attr->jumping_attr = jump_state->jumping_attr;
+	}
+}
+
 static int
 parse_branch_ctrl(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act,
 		  struct mlx5e_tc_flow *flow, struct mlx5_flow_attr *attr,
+		  struct mlx5e_tc_jump_state *jump_state,
 		  struct netlink_ext_ack *extack)
 {
 	struct mlx5e_tc_act_branch_ctrl cond_true, cond_false;
-	u32 jump_count;
+	u32 jump_count = jump_state->jump_count;
 	int err;
 
 	if (!tc_act->get_branch_ctrl)
@@ -3908,11 +3972,18 @@ parse_branch_ctrl(struct flow_action_entry *act, struct mlx5e_tc_act *tc_act,
 	if (err)
 		goto out_err;
 
+	if (jump_count)
+		jump_state->jumping_attr = attr->branch_true;
+
 	err = alloc_branch_attr(flow, &cond_false,
 				&attr->branch_false, &jump_count, extack);
 	if (err)
 		goto err_branch_false;
 
+	if (jump_count && !jump_state->jumping_attr)
+		jump_state->jumping_attr = attr->branch_false;
+
+	jump_state->jump_count = jump_count;
 	return 0;
 
 err_branch_false:
@@ -3928,6 +3999,7 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
 	struct netlink_ext_ack *extack = parse_state->extack;
 	struct mlx5e_tc_flow_action flow_action_reorder;
 	struct mlx5e_tc_flow *flow = parse_state->flow;
+	struct mlx5e_tc_jump_state jump_state = {};
 	struct mlx5_flow_attr *attr = flow->attr;
 	enum mlx5_flow_namespace_type ns_type;
 	struct mlx5e_priv *priv = flow->priv;
@@ -3947,6 +4019,7 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
 	list_add(&attr->list, &flow->attrs);
 
 	flow_action_for_each(i, _act, &flow_action_reorder) {
+		jump_state.jump_target = false;
 		act = *_act;
 		tc_act = mlx5e_tc_act_get(act->id, ns_type);
 		if (!tc_act) {
@@ -3964,16 +4037,19 @@ parse_tc_actions(struct mlx5e_tc_act_parse_state *parse_state,
 		if (err)
 			goto out_free;
 
-		err = parse_branch_ctrl(act, tc_act, flow, attr, extack);
+		dec_jump_count(act, tc_act, attr, priv, &jump_state);
+
+		err = parse_branch_ctrl(act, tc_act, flow, attr, &jump_state, extack);
 		if (err)
 			goto out_free;
 
 		parse_state->actions |= attr->action;
 
 		/* Split attr for multi table act if not the last act. */
-		if (tc_act->is_multi_table_act &&
+		if (jump_state.jump_target ||
+		    (tc_act->is_multi_table_act &&
 		    tc_act->is_multi_table_act(priv, act, attr) &&
-		    i < flow_action_reorder.num_entries - 1) {
+		    i < flow_action_reorder.num_entries - 1)) {
 			err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type);
 			if (err)
 				goto out_free;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index cee88f7dd50f..f2677d9ca0b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -97,6 +97,7 @@ struct mlx5_flow_attr {
 	} lag;
 	struct mlx5_flow_attr *branch_true;
 	struct mlx5_flow_attr *branch_false;
+	struct mlx5_flow_attr *jumping_attr;
 	/* keep this union last */
 	union {
 		DECLARE_FLEX_ARRAY(struct mlx5_esw_flow_attr, esw_attr);
@@ -112,6 +113,7 @@ enum {
 	MLX5_ATTR_FLAG_SAMPLE        = BIT(4),
 	MLX5_ATTR_FLAG_ACCEPT        = BIT(5),
 	MLX5_ATTR_FLAG_CT            = BIT(6),
+	MLX5_ATTR_FLAG_TERMINATING   = BIT(7),
 };
 
 /* Returns true if any of the flags that require skipping further TC/NF processing are set. */
-- 
2.38.1



* [net-next 09/15] net/mlx5e: TC, rename post_meter actions
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (7 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 08/15] net/mlx5e: TC, initialize branching action with target attr Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 10/15] net/mlx5e: TC, init post meter rules with branching attributes Saeed Mahameed
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Currently post meter supports only the pipe/drop conform-exceed policy.
This assumption is reflected in several variable names.
Rename the following variables as a pre-step for using the generalized
branching action platform.

Rename fwd_green_rule/drop_red_rule to green_rule/red_rule respectively.
Repurpose green_counter/red_counter as act_counter/drop_counter to allow
police conform-exceed configurations that do not drop.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../ethernet/mellanox/mlx5/core/en/tc/meter.c | 24 +++++++--------
 .../ethernet/mellanox/mlx5/core/en/tc/meter.h |  4 +--
 .../mellanox/mlx5/core/en/tc/post_meter.c     | 30 +++++++++----------
 .../mellanox/mlx5/core/en/tc/post_meter.h     |  4 +--
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   |  4 +--
 5 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
index be74e1403328..4e5f4aa44724 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.c
@@ -257,16 +257,16 @@ __mlx5e_flow_meter_alloc(struct mlx5e_flow_meters *flow_meters)
 	counter = mlx5_fc_create(mdev, true);
 	if (IS_ERR(counter)) {
 		err = PTR_ERR(counter);
-		goto err_red_counter;
+		goto err_drop_counter;
 	}
-	meter->red_counter = counter;
+	meter->drop_counter = counter;
 
 	counter = mlx5_fc_create(mdev, true);
 	if (IS_ERR(counter)) {
 		err = PTR_ERR(counter);
-		goto err_green_counter;
+		goto err_act_counter;
 	}
-	meter->green_counter = counter;
+	meter->act_counter = counter;
 
 	meters_obj = list_first_entry_or_null(&flow_meters->partial_list,
 					      struct mlx5e_flow_meter_aso_obj,
@@ -313,10 +313,10 @@ __mlx5e_flow_meter_alloc(struct mlx5e_flow_meters *flow_meters)
 err_mem:
 	mlx5e_flow_meter_destroy_aso_obj(mdev, id);
 err_create:
-	mlx5_fc_destroy(mdev, meter->green_counter);
-err_green_counter:
-	mlx5_fc_destroy(mdev, meter->red_counter);
-err_red_counter:
+	mlx5_fc_destroy(mdev, meter->act_counter);
+err_act_counter:
+	mlx5_fc_destroy(mdev, meter->drop_counter);
+err_drop_counter:
 	kfree(meter);
 	return ERR_PTR(err);
 }
@@ -329,8 +329,8 @@ __mlx5e_flow_meter_free(struct mlx5e_flow_meter_handle *meter)
 	struct mlx5e_flow_meter_aso_obj *meters_obj;
 	int n, pos;
 
-	mlx5_fc_destroy(mdev, meter->green_counter);
-	mlx5_fc_destroy(mdev, meter->red_counter);
+	mlx5_fc_destroy(mdev, meter->act_counter);
+	mlx5_fc_destroy(mdev, meter->drop_counter);
 
 	meters_obj = meter->meters_obj;
 	pos = (meter->obj_id - meters_obj->base_id) * 2 + meter->idx;
@@ -575,8 +575,8 @@ mlx5e_tc_meter_get_stats(struct mlx5e_flow_meter_handle *meter,
 	u64 bytes1, packets1, lastuse1;
 	u64 bytes2, packets2, lastuse2;
 
-	mlx5_fc_query_cached(meter->green_counter, &bytes1, &packets1, &lastuse1);
-	mlx5_fc_query_cached(meter->red_counter, &bytes2, &packets2, &lastuse2);
+	mlx5_fc_query_cached(meter->act_counter, &bytes1, &packets1, &lastuse1);
+	mlx5_fc_query_cached(meter->drop_counter, &bytes2, &packets2, &lastuse2);
 
 	*bytes = bytes1 + bytes2;
 	*packets = packets1 + packets2;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h
index 6de6e8a16327..f16abf33bb51 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/meter.h
@@ -32,8 +32,8 @@ struct mlx5e_flow_meter_handle {
 	struct hlist_node hlist;
 	struct mlx5e_flow_meter_params params;
 
-	struct mlx5_fc *green_counter;
-	struct mlx5_fc *red_counter;
+	struct mlx5_fc *act_counter;
+	struct mlx5_fc *drop_counter;
 };
 
 struct mlx5e_meter_attr {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
index 8b77e822810e..60209205f683 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
@@ -11,8 +11,8 @@
 struct mlx5e_post_meter_priv {
 	struct mlx5_flow_table *ft;
 	struct mlx5_flow_group *fg;
-	struct mlx5_flow_handle *fwd_green_rule;
-	struct mlx5_flow_handle *drop_red_rule;
+	struct mlx5_flow_handle *green_rule;
+	struct mlx5_flow_handle *red_rule;
 };
 
 struct mlx5_flow_table *
@@ -85,8 +85,8 @@ static int
 mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 			      struct mlx5e_post_meter_priv *post_meter,
 			      struct mlx5e_post_act *post_act,
-			      struct mlx5_fc *green_counter,
-			      struct mlx5_fc *red_counter)
+			      struct mlx5_fc *act_counter,
+			      struct mlx5_fc *drop_counter)
 {
 	struct mlx5_flow_destination dest[2] = {};
 	struct mlx5_flow_act flow_act = {};
@@ -104,7 +104,7 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 			  MLX5_FLOW_CONTEXT_ACTION_COUNT;
 	flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
 	dest[0].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
-	dest[0].counter_id = mlx5_fc_id(red_counter);
+	dest[0].counter_id = mlx5_fc_id(drop_counter);
 
 	rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, dest, 1);
 	if (IS_ERR(rule)) {
@@ -112,7 +112,7 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 		err = PTR_ERR(rule);
 		goto err_red;
 	}
-	post_meter->drop_red_rule = rule;
+	post_meter->red_rule = rule;
 
 	mlx5e_tc_match_to_reg_match(spec, PACKET_COLOR_TO_REG,
 				    MLX5_FLOW_METER_COLOR_GREEN, MLX5_PACKET_COLOR_MASK);
@@ -121,7 +121,7 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 	dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
 	dest[0].ft = mlx5e_tc_post_act_get_ft(post_act);
 	dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
-	dest[1].counter_id = mlx5_fc_id(green_counter);
+	dest[1].counter_id = mlx5_fc_id(act_counter);
 
 	rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, dest, 2);
 	if (IS_ERR(rule)) {
@@ -129,13 +129,13 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 		err = PTR_ERR(rule);
 		goto err_green;
 	}
-	post_meter->fwd_green_rule = rule;
+	post_meter->green_rule = rule;
 
 	kvfree(spec);
 	return 0;
 
 err_green:
-	mlx5_del_flow_rules(post_meter->drop_red_rule);
+	mlx5_del_flow_rules(post_meter->red_rule);
 err_red:
 	kvfree(spec);
 	return err;
@@ -144,8 +144,8 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 static void
 mlx5e_post_meter_rules_destroy(struct mlx5e_post_meter_priv *post_meter)
 {
-	mlx5_del_flow_rules(post_meter->drop_red_rule);
-	mlx5_del_flow_rules(post_meter->fwd_green_rule);
+	mlx5_del_flow_rules(post_meter->red_rule);
+	mlx5_del_flow_rules(post_meter->green_rule);
 }
 
 static void
@@ -164,8 +164,8 @@ struct mlx5e_post_meter_priv *
 mlx5e_post_meter_init(struct mlx5e_priv *priv,
 		      enum mlx5_flow_namespace_type ns_type,
 		      struct mlx5e_post_act *post_act,
-		      struct mlx5_fc *green_counter,
-		      struct mlx5_fc *red_counter)
+		      struct mlx5_fc *act_counter,
+		      struct mlx5_fc *drop_counter)
 {
 	struct mlx5e_post_meter_priv *post_meter;
 	int err;
@@ -182,8 +182,8 @@ mlx5e_post_meter_init(struct mlx5e_priv *priv,
 	if (err)
 		goto err_fg;
 
-	err = mlx5e_post_meter_rules_create(priv, post_meter, post_act, green_counter,
-					    red_counter);
+	err = mlx5e_post_meter_rules_create(priv, post_meter, post_act,
+					    act_counter, drop_counter);
 	if (err)
 		goto err_rules;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
index 34d0e4b9fc7a..37c74e7bfb6a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
@@ -21,8 +21,8 @@ struct mlx5e_post_meter_priv *
 mlx5e_post_meter_init(struct mlx5e_priv *priv,
 		      enum mlx5_flow_namespace_type ns_type,
 		      struct mlx5e_post_act *post_act,
-		      struct mlx5_fc *green_counter,
-		      struct mlx5_fc *red_counter);
+		      struct mlx5_fc *act_counter,
+		      struct mlx5_fc *drop_counter);
 void
 mlx5e_post_meter_cleanup(struct mlx5e_post_meter_priv *post_meter);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index d52f8601fef4..ce449fa4f36e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -422,8 +422,8 @@ mlx5e_tc_add_flow_meter(struct mlx5e_priv *priv,
 	}
 
 	ns_type = mlx5e_tc_meter_get_namespace(meter->flow_meters);
-	post_meter = mlx5e_post_meter_init(priv, ns_type, post_act, meter->green_counter,
-					   meter->red_counter);
+	post_meter = mlx5e_post_meter_init(priv, ns_type, post_act, meter->act_counter,
+					   meter->drop_counter);
 	if (IS_ERR(post_meter)) {
 		mlx5_core_err(priv->mdev, "Failed to init post meter\n");
 		goto err_meter_init;
-- 
2.38.1



* [net-next 10/15] net/mlx5e: TC, init post meter rules with branching attributes
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (8 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 09/15] net/mlx5e: TC, rename post_meter actions Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 11/15] net/mlx5e: TC, allow meter jump control action Saeed Mahameed
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Instantiate the post meter rules with the platform-initialized branching
action attributes.
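The counter selection this change relies on can be illustrated with a
minimal sketch: a branch rule charges the drop counter only when its action
actually drops, so a non-drop exceed action is counted as ordinary activity.
The flag and names below are illustrative stand-ins, not the kernel's
definitions.

```c
#include <assert.h>

#define ACTION_DROP (1u << 0)	/* stand-in for MLX5_FLOW_CONTEXT_ACTION_DROP */

enum counter { ACT_COUNTER, DROP_COUNTER };

/* Pick which meter counter a branch rule should be attached to,
 * mirroring the rule used when the red/green rules are created.
 */
static enum counter pick_counter(unsigned int action)
{
	return (action & ACTION_DROP) ? DROP_COUNTER : ACT_COUNTER;
}
```

With this rule, a conform-exceed policy whose exceed action is pipe or jump
still accumulates its traffic on the act counter, and only a true drop
shows up in the drop statistics.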

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/en/tc/post_meter.c     | 84 +++++++++++++------
 .../mellanox/mlx5/core/en/tc/post_meter.h     |  6 +-
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 11 +--
 3 files changed, 67 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
index 60209205f683..c38211097746 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.c
@@ -12,7 +12,9 @@ struct mlx5e_post_meter_priv {
 	struct mlx5_flow_table *ft;
 	struct mlx5_flow_group *fg;
 	struct mlx5_flow_handle *green_rule;
+	struct mlx5_flow_attr *green_attr;
 	struct mlx5_flow_handle *red_rule;
+	struct mlx5_flow_attr *red_attr;
 };
 
 struct mlx5_flow_table *
@@ -81,15 +83,48 @@ mlx5e_post_meter_fg_create(struct mlx5e_priv *priv,
 	return err;
 }
 
+static struct mlx5_flow_handle *
+mlx5e_post_meter_add_rule(struct mlx5e_priv *priv,
+			  struct mlx5e_post_meter_priv *post_meter,
+			  struct mlx5_flow_spec *spec,
+			  struct mlx5_flow_attr *attr,
+			  struct mlx5_fc *act_counter,
+			  struct mlx5_fc *drop_counter)
+{
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mlx5_flow_handle *ret;
+
+	attr->action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_DROP)
+		attr->counter = drop_counter;
+	else
+		attr->counter = act_counter;
+
+	attr->ft = post_meter->ft;
+	attr->flags |= MLX5_ATTR_FLAG_NO_IN_PORT;
+	attr->outer_match_level = MLX5_MATCH_NONE;
+	attr->chain = 0;
+	attr->prio = 0;
+
+	ret = mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
+
+	/* We did not create the counter, so we can't delete it.
+	 * Avoid freeing the counter when the attr is deleted in free_branching_attr
+	 */
+	attr->action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT;
+
+	return ret;
+}
+
 static int
 mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 			      struct mlx5e_post_meter_priv *post_meter,
 			      struct mlx5e_post_act *post_act,
 			      struct mlx5_fc *act_counter,
-			      struct mlx5_fc *drop_counter)
+			      struct mlx5_fc *drop_counter,
+			      struct mlx5_flow_attr *green_attr,
+			      struct mlx5_flow_attr *red_attr)
 {
-	struct mlx5_flow_destination dest[2] = {};
-	struct mlx5_flow_act flow_act = {};
 	struct mlx5_flow_handle *rule;
 	struct mlx5_flow_spec *spec;
 	int err;
@@ -100,36 +135,28 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 
 	mlx5e_tc_match_to_reg_match(spec, PACKET_COLOR_TO_REG,
 				    MLX5_FLOW_METER_COLOR_RED, MLX5_PACKET_COLOR_MASK);
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP |
-			  MLX5_FLOW_CONTEXT_ACTION_COUNT;
-	flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
-	dest[0].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
-	dest[0].counter_id = mlx5_fc_id(drop_counter);
 
-	rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, dest, 1);
+	rule = mlx5e_post_meter_add_rule(priv, post_meter, spec, red_attr,
+					 act_counter, drop_counter);
 	if (IS_ERR(rule)) {
-		mlx5_core_warn(priv->mdev, "Failed to create post_meter flow drop rule\n");
+		mlx5_core_warn(priv->mdev, "Failed to create post_meter exceed rule\n");
 		err = PTR_ERR(rule);
 		goto err_red;
 	}
 	post_meter->red_rule = rule;
+	post_meter->red_attr = red_attr;
 
 	mlx5e_tc_match_to_reg_match(spec, PACKET_COLOR_TO_REG,
 				    MLX5_FLOW_METER_COLOR_GREEN, MLX5_PACKET_COLOR_MASK);
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
-			  MLX5_FLOW_CONTEXT_ACTION_COUNT;
-	dest[0].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
-	dest[0].ft = mlx5e_tc_post_act_get_ft(post_act);
-	dest[1].type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
-	dest[1].counter_id = mlx5_fc_id(act_counter);
-
-	rule = mlx5_add_flow_rules(post_meter->ft, spec, &flow_act, dest, 2);
+	rule = mlx5e_post_meter_add_rule(priv, post_meter, spec, green_attr,
+					 act_counter, drop_counter);
 	if (IS_ERR(rule)) {
-		mlx5_core_warn(priv->mdev, "Failed to create post_meter flow fwd rule\n");
+		mlx5_core_warn(priv->mdev, "Failed to create post_meter notexceed rule\n");
 		err = PTR_ERR(rule);
 		goto err_green;
 	}
 	post_meter->green_rule = rule;
+	post_meter->green_attr = green_attr;
 
 	kvfree(spec);
 	return 0;
@@ -142,10 +169,11 @@ mlx5e_post_meter_rules_create(struct mlx5e_priv *priv,
 }
 
 static void
-mlx5e_post_meter_rules_destroy(struct mlx5e_post_meter_priv *post_meter)
+mlx5e_post_meter_rules_destroy(struct mlx5_eswitch *esw,
+			       struct mlx5e_post_meter_priv *post_meter)
 {
-	mlx5_del_flow_rules(post_meter->red_rule);
-	mlx5_del_flow_rules(post_meter->green_rule);
+	mlx5_eswitch_del_offloaded_rule(esw, post_meter->red_rule, post_meter->red_attr);
+	mlx5_eswitch_del_offloaded_rule(esw, post_meter->green_rule, post_meter->green_attr);
 }
 
 static void
@@ -165,7 +193,9 @@ mlx5e_post_meter_init(struct mlx5e_priv *priv,
 		      enum mlx5_flow_namespace_type ns_type,
 		      struct mlx5e_post_act *post_act,
 		      struct mlx5_fc *act_counter,
-		      struct mlx5_fc *drop_counter)
+		      struct mlx5_fc *drop_counter,
+		      struct mlx5_flow_attr *branch_true,
+		      struct mlx5_flow_attr *branch_false)
 {
 	struct mlx5e_post_meter_priv *post_meter;
 	int err;
@@ -182,8 +212,8 @@ mlx5e_post_meter_init(struct mlx5e_priv *priv,
 	if (err)
 		goto err_fg;
 
-	err = mlx5e_post_meter_rules_create(priv, post_meter, post_act,
-					    act_counter, drop_counter);
+	err = mlx5e_post_meter_rules_create(priv, post_meter, post_act, act_counter,
+					    drop_counter, branch_true, branch_false);
 	if (err)
 		goto err_rules;
 
@@ -199,9 +229,9 @@ mlx5e_post_meter_init(struct mlx5e_priv *priv,
 }
 
 void
-mlx5e_post_meter_cleanup(struct mlx5e_post_meter_priv *post_meter)
+mlx5e_post_meter_cleanup(struct mlx5_eswitch *esw, struct mlx5e_post_meter_priv *post_meter)
 {
-	mlx5e_post_meter_rules_destroy(post_meter);
+	mlx5e_post_meter_rules_destroy(esw, post_meter);
 	mlx5e_post_meter_fg_destroy(post_meter);
 	mlx5e_post_meter_table_destroy(post_meter);
 	kfree(post_meter);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
index 37c74e7bfb6a..a4075d33fde2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_meter.h
@@ -22,8 +22,10 @@ mlx5e_post_meter_init(struct mlx5e_priv *priv,
 		      enum mlx5_flow_namespace_type ns_type,
 		      struct mlx5e_post_act *post_act,
 		      struct mlx5_fc *act_counter,
-		      struct mlx5_fc *drop_counter);
+		      struct mlx5_fc *drop_counter,
+		      struct mlx5_flow_attr *branch_true,
+		      struct mlx5_flow_attr *branch_false);
 void
-mlx5e_post_meter_cleanup(struct mlx5e_post_meter_priv *post_meter);
+mlx5e_post_meter_cleanup(struct mlx5_eswitch *esw, struct mlx5e_post_meter_priv *post_meter);
 
 #endif /* __MLX5_EN_POST_METER_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index ce449fa4f36e..338e0b21d0b3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -422,8 +422,9 @@ mlx5e_tc_add_flow_meter(struct mlx5e_priv *priv,
 	}
 
 	ns_type = mlx5e_tc_meter_get_namespace(meter->flow_meters);
-	post_meter = mlx5e_post_meter_init(priv, ns_type, post_act, meter->act_counter,
-					   meter->drop_counter);
+	post_meter = mlx5e_post_meter_init(priv, ns_type, post_act,
+					   meter->act_counter, meter->drop_counter,
+					   attr->branch_true, attr->branch_false);
 	if (IS_ERR(post_meter)) {
 		mlx5_core_err(priv->mdev, "Failed to init post meter\n");
 		goto err_meter_init;
@@ -442,9 +443,9 @@ mlx5e_tc_add_flow_meter(struct mlx5e_priv *priv,
 }
 
 static void
-mlx5e_tc_del_flow_meter(struct mlx5_flow_attr *attr)
+mlx5e_tc_del_flow_meter(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr)
 {
-	mlx5e_post_meter_cleanup(attr->meter_attr.post_meter);
+	mlx5e_post_meter_cleanup(esw, attr->meter_attr.post_meter);
 	mlx5e_tc_meter_put(attr->meter_attr.meter);
 }
 
@@ -505,7 +506,7 @@ mlx5e_tc_rule_unoffload(struct mlx5e_priv *priv,
 	mlx5_eswitch_del_offloaded_rule(esw, rule, attr);
 
 	if (attr->meter_attr.meter)
-		mlx5e_tc_del_flow_meter(attr);
+		mlx5e_tc_del_flow_meter(esw, attr);
 }
 
 int
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [net-next 11/15] net/mlx5e: TC, allow meter jump control action
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (9 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 10/15] net/mlx5e: TC, init post meter rules with branching attributes Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 12/15] net/mlx5: SRIOV, Remove two unused ingress flow groups Saeed Mahameed

                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Oz Shlomo, Roi Dayan

From: Oz Shlomo <ozsh@nvidia.com>

Separate the matchall police action validation from flower validation.
Isolate the action validation logic in the police action parser.

Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/en/tc/act/police.c     | 52 +++++++++++++++----
 .../ethernet/mellanox/mlx5/core/en/tc_priv.h  |  4 --
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 21 ++++----
 3 files changed, 54 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
index 81aa4185c64a..898fe16a4384 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/police.c
@@ -4,20 +4,54 @@
 #include "act.h"
 #include "en/tc_priv.h"
 
+static bool police_act_validate_control(enum flow_action_id act_id,
+					struct netlink_ext_ack *extack)
+{
+	if (act_id != FLOW_ACTION_PIPE &&
+	    act_id != FLOW_ACTION_ACCEPT &&
+	    act_id != FLOW_ACTION_JUMP &&
+	    act_id != FLOW_ACTION_DROP) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Offload not supported when conform-exceed action is not pipe, ok, jump or drop");
+		return false;
+	}
+
+	return true;
+}
+
+static int police_act_validate(const struct flow_action_entry *act,
+			       struct netlink_ext_ack *extack)
+{
+	if (!police_act_validate_control(act->police.exceed.act_id, extack) ||
+	    !police_act_validate_control(act->police.notexceed.act_id, extack))
+		return -EOPNOTSUPP;
+
+	if (act->police.peakrate_bytes_ps ||
+	    act->police.avrate || act->police.overhead) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Offload not supported when peakrate/avrate/overhead is configured");
+		return -EOPNOTSUPP;
+	}
+
+	if (act->police.rate_pkt_ps) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "QoS offload not support packets per second");
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 static bool
 tc_act_can_offload_police(struct mlx5e_tc_act_parse_state *parse_state,
 			  const struct flow_action_entry *act,
 			  int act_index,
 			  struct mlx5_flow_attr *attr)
 {
-	if (act->police.notexceed.act_id != FLOW_ACTION_PIPE &&
-	    act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) {
-		NL_SET_ERR_MSG_MOD(parse_state->extack,
-				   "Offload not supported when conform action is not pipe or ok");
-		return false;
-	}
-	if (mlx5e_policer_validate(parse_state->flow_action, act,
-				   parse_state->extack))
+	int err;
+
+	err = police_act_validate(act, parse_state->extack);
+	if (err)
 		return false;
 
 	return !!mlx5e_get_flow_meters(parse_state->flow->priv->mdev);
@@ -79,7 +113,7 @@ tc_act_police_offload(struct mlx5e_priv *priv,
 	struct mlx5e_flow_meter_handle *meter;
 	int err = 0;
 
-	err = mlx5e_policer_validate(&fl_act->action, act, fl_act->extack);
+	err = police_act_validate(act, fl_act->extack);
 	if (err)
 		return err;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
index 2e42d7c5451e..2b7fd1c0e643 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_priv.h
@@ -211,8 +211,4 @@ struct mlx5e_flow_meters *mlx5e_get_flow_meters(struct mlx5_core_dev *dev);
 void *mlx5e_get_match_headers_value(u32 flags, struct mlx5_flow_spec *spec);
 void *mlx5e_get_match_headers_criteria(u32 flags, struct mlx5_flow_spec *spec);
 
-int mlx5e_policer_validate(const struct flow_action *action,
-			   const struct flow_action_entry *act,
-			   struct netlink_ext_ack *extack);
-
 #endif /* __MLX5_EN_TC_PRIV_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 338e0b21d0b3..227fa6ef9e41 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -4939,10 +4939,17 @@ static int apply_police_params(struct mlx5e_priv *priv, u64 rate,
 	return err;
 }
 
-int mlx5e_policer_validate(const struct flow_action *action,
-			   const struct flow_action_entry *act,
-			   struct netlink_ext_ack *extack)
+static int
+tc_matchall_police_validate(const struct flow_action *action,
+			    const struct flow_action_entry *act,
+			    struct netlink_ext_ack *extack)
 {
+	if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Offload not supported when conform action is not continue");
+		return -EOPNOTSUPP;
+	}
+
 	if (act->police.exceed.act_id != FLOW_ACTION_DROP) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Offload not supported when exceed action is not drop");
@@ -4993,13 +5000,7 @@ static int scan_tc_matchall_fdb_actions(struct mlx5e_priv *priv,
 	flow_action_for_each(i, act, flow_action) {
 		switch (act->id) {
 		case FLOW_ACTION_POLICE:
-			if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) {
-				NL_SET_ERR_MSG_MOD(extack,
-						   "Offload not supported when conform action is not continue");
-				return -EOPNOTSUPP;
-			}
-
-			err = mlx5e_policer_validate(flow_action, act, extack);
+			err = tc_matchall_police_validate(flow_action, act, extack);
 			if (err)
 				return err;
 
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [net-next 12/15] net/mlx5: SRIOV, Remove two unused ingress flow groups
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (10 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 11/15] net/mlx5e: TC, allow meter jump control action Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 13/15] net/mlx5: SRIOV, Recreate egress ACL table on config change Saeed Mahameed
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Moshe Shemesh

From: Moshe Shemesh <moshe@nvidia.com>

Since the SRIOV ingress ACL table uses only one rule for allowed
traffic and one drop rule, there is no need for four flow groups. As
the groups can be created dynamically on configuration changes, the
group layout can change dynamically as well, instead of statically
creating four different groups with four different layouts.

Set two flow groups according to the needed flow steering rules and
remove the other unused groups. To avoid resetting the flow counter
while recreating the flow table, handle the flow counter separately.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/esw/acl/ingress_lgcy.c | 173 +++++++-----------
 .../mellanox/mlx5/core/esw/acl/lgcy.h         |   4 +
 .../ethernet/mellanox/mlx5/core/esw/legacy.c  |   3 +
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |   4 +-
 4 files changed, 72 insertions(+), 112 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
index b1a5199260f6..0b37edb9490d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
@@ -33,9 +33,12 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 
 	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
 		 MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+	if (vport->info.vlan || vport->info.qos)
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
+	if (vport->info.spoofchk) {
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+	}
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
 
@@ -44,47 +47,14 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 		err = PTR_ERR(g);
 		esw_warn(dev, "vport[%d] ingress create untagged spoofchk flow group, err(%d)\n",
 			 vport->vport, err);
-		goto spoof_err;
+		goto allow_err;
 	}
-	vport->ingress.legacy.allow_untagged_spoofchk_grp = g;
+	vport->ingress.legacy.allow_grp = g;
 
 	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
-		 MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
 
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
-		esw_warn(dev, "vport[%d] ingress create untagged flow group, err(%d)\n",
-			 vport->vport, err);
-		goto untagged_err;
-	}
-	vport->ingress.legacy.allow_untagged_only_grp = g;
-
-	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
-		 MLX5_MATCH_OUTER_HEADERS);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
-	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
-
-	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
-	if (IS_ERR(g)) {
-		err = PTR_ERR(g);
-		esw_warn(dev, "vport[%d] ingress create spoofchk flow group, err(%d)\n",
-			 vport->vport, err);
-		goto allow_spoof_err;
-	}
-	vport->ingress.legacy.allow_spoofchk_only_grp = g;
-
-	memset(flow_group_in, 0, inlen);
-	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
-	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
-
 	g = mlx5_create_flow_group(vport->ingress.acl, flow_group_in);
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
@@ -97,38 +67,20 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 	return 0;
 
 drop_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_spoofchk_only_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
-		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
+	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_grp)) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_grp);
+		vport->ingress.legacy.allow_grp = NULL;
 	}
-allow_spoof_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_only_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
-		vport->ingress.legacy.allow_untagged_only_grp = NULL;
-	}
-untagged_err:
-	if (!IS_ERR_OR_NULL(vport->ingress.legacy.allow_untagged_spoofchk_grp)) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
-		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
-	}
-spoof_err:
+allow_err:
 	kvfree(flow_group_in);
 	return err;
 }
 
 static void esw_acl_ingress_lgcy_groups_destroy(struct mlx5_vport *vport)
 {
-	if (vport->ingress.legacy.allow_spoofchk_only_grp) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_spoofchk_only_grp);
-		vport->ingress.legacy.allow_spoofchk_only_grp = NULL;
-	}
-	if (vport->ingress.legacy.allow_untagged_only_grp) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_only_grp);
-		vport->ingress.legacy.allow_untagged_only_grp = NULL;
-	}
-	if (vport->ingress.legacy.allow_untagged_spoofchk_grp) {
-		mlx5_destroy_flow_group(vport->ingress.legacy.allow_untagged_spoofchk_grp);
-		vport->ingress.legacy.allow_untagged_spoofchk_grp = NULL;
+	if (vport->ingress.legacy.allow_grp) {
+		mlx5_destroy_flow_group(vport->ingress.legacy.allow_grp);
+		vport->ingress.legacy.allow_grp = NULL;
 	}
 	if (vport->ingress.legacy.drop_grp) {
 		mlx5_destroy_flow_group(vport->ingress.legacy.drop_grp);
@@ -143,56 +95,33 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 	struct mlx5_flow_destination *dst = NULL;
 	struct mlx5_flow_act flow_act = {};
 	struct mlx5_flow_spec *spec = NULL;
-	struct mlx5_fc *counter = NULL;
-	/* The ingress acl table contains 4 groups
-	 * (2 active rules at the same time -
-	 *      1 allow rule from one of the first 3 groups.
-	 *      1 drop rule from the last group):
-	 * 1)Allow untagged traffic with smac=original mac.
-	 * 2)Allow untagged traffic.
-	 * 3)Allow traffic with smac=original mac.
-	 * 4)Drop all other traffic.
+	struct mlx5_fc *counter;
+	/* The ingress acl table contains 2 groups
+	 * 1)Allowed traffic according to tagging and spoofcheck settings
+	 * 2)Drop all other traffic.
 	 */
-	int table_size = 4;
+	int table_size = 2;
 	int dest_num = 0;
 	int err = 0;
 	u8 *smac_v;
 
-	esw_acl_ingress_lgcy_rules_destroy(vport);
-
-	if (vport->ingress.legacy.drop_counter) {
-		counter = vport->ingress.legacy.drop_counter;
-	} else if (MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter)) {
-		counter = mlx5_fc_create(esw->dev, false);
-		if (IS_ERR(counter)) {
-			esw_warn(esw->dev,
-				 "vport[%d] configure ingress drop rule counter failed\n",
-				 vport->vport);
-			counter = NULL;
-		}
-		vport->ingress.legacy.drop_counter = counter;
-	}
-
-	if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk) {
-		esw_acl_ingress_lgcy_cleanup(esw, vport);
+	esw_acl_ingress_lgcy_cleanup(esw, vport);
+	if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk)
 		return 0;
-	}
 
-	if (!vport->ingress.acl) {
-		vport->ingress.acl = esw_acl_table_create(esw, vport,
-							  MLX5_FLOW_NAMESPACE_ESW_INGRESS,
-							  table_size);
-		if (IS_ERR(vport->ingress.acl)) {
-			err = PTR_ERR(vport->ingress.acl);
-			vport->ingress.acl = NULL;
-			return err;
-		}
-
-		err = esw_acl_ingress_lgcy_groups_create(esw, vport);
-		if (err)
-			goto out;
+	vport->ingress.acl = esw_acl_table_create(esw, vport,
+						  MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+						  table_size);
+	if (IS_ERR(vport->ingress.acl)) {
+		err = PTR_ERR(vport->ingress.acl);
+		vport->ingress.acl = NULL;
+		return err;
 	}
 
+	err = esw_acl_ingress_lgcy_groups_create(esw, vport);
+	if (err)
+		goto out;
+
 	esw_debug(esw->dev,
 		  "vport[%d] configure ingress rules, vlan(%d) qos(%d)\n",
 		  vport->vport, vport->info.vlan, vport->info.qos);
@@ -235,6 +164,7 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 	memset(&flow_act, 0, sizeof(flow_act));
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 	/* Attach drop flow counter */
+	counter = vport->ingress.legacy.drop_counter;
 	if (counter) {
 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
 		drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
@@ -266,17 +196,42 @@ void esw_acl_ingress_lgcy_cleanup(struct mlx5_eswitch *esw,
 				  struct mlx5_vport *vport)
 {
 	if (IS_ERR_OR_NULL(vport->ingress.acl))
-		goto clean_drop_counter;
+		return;
 
 	esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
 
 	esw_acl_ingress_lgcy_rules_destroy(vport);
 	esw_acl_ingress_lgcy_groups_destroy(vport);
 	esw_acl_ingress_table_destroy(vport);
+}
+
+void esw_acl_ingress_lgcy_create_counter(struct mlx5_eswitch *esw,
+					 struct mlx5_vport *vport)
+{
+	struct mlx5_fc *counter;
 
-clean_drop_counter:
-	if (vport->ingress.legacy.drop_counter) {
-		mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
-		vport->ingress.legacy.drop_counter = NULL;
+	vport->ingress.legacy.drop_counter = NULL;
+
+	if (!MLX5_CAP_ESW_INGRESS_ACL(esw->dev, flow_counter))
+		return;
+
+	counter = mlx5_fc_create(esw->dev, false);
+	if (IS_ERR(counter)) {
+		esw_warn(esw->dev,
+			 "vport[%d] configure ingress drop rule counter failed\n",
+			 vport->vport);
+		return;
 	}
+
+	vport->ingress.legacy.drop_counter = counter;
+}
+
+void esw_acl_ingress_lgcy_destroy_counter(struct mlx5_eswitch *esw,
+					  struct mlx5_vport *vport)
+{
+	if (!vport->ingress.legacy.drop_counter)
+		return;
+
+	mlx5_fc_destroy(esw->dev, vport->ingress.legacy.drop_counter);
+	vport->ingress.legacy.drop_counter = NULL;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
index 44c152da3d83..c4a624ffca43 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
@@ -13,5 +13,9 @@ void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vp
 /* Eswitch acl ingress external APIs */
 int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
 void esw_acl_ingress_lgcy_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+void esw_acl_ingress_lgcy_create_counter(struct mlx5_eswitch *esw,
+					 struct mlx5_vport *vport);
+void esw_acl_ingress_lgcy_destroy_counter(struct mlx5_eswitch *esw,
+					  struct mlx5_vport *vport);
 
 #endif /* __MLX5_ESWITCH_ACL_LGCY_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index fabe49a35a5c..97a104668723 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -356,6 +356,7 @@ int esw_legacy_vport_acl_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vpor
 	if (mlx5_esw_is_manager_vport(esw, vport->vport))
 		return 0;
 
+	esw_acl_ingress_lgcy_create_counter(esw, vport);
 	ret = esw_acl_ingress_lgcy_setup(esw, vport);
 	if (ret)
 		goto ingress_err;
@@ -369,6 +370,7 @@ int esw_legacy_vport_acl_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vpor
 egress_err:
 	esw_acl_ingress_lgcy_cleanup(esw, vport);
 ingress_err:
+	esw_acl_ingress_lgcy_destroy_counter(esw, vport);
 	return ret;
 }
 
@@ -379,6 +381,7 @@ void esw_legacy_vport_acl_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *v
 
 	esw_acl_egress_lgcy_cleanup(esw, vport);
 	esw_acl_ingress_lgcy_cleanup(esw, vport);
+	esw_acl_ingress_lgcy_destroy_counter(esw, vport);
 }
 
 int mlx5_esw_query_vport_drop_stats(struct mlx5_core_dev *dev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 42d9df417e20..b7779826e725 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -97,9 +97,7 @@ struct vport_ingress {
 	struct mlx5_flow_table *acl;
 	struct mlx5_flow_handle *allow_rule;
 	struct {
-		struct mlx5_flow_group *allow_spoofchk_only_grp;
-		struct mlx5_flow_group *allow_untagged_spoofchk_grp;
-		struct mlx5_flow_group *allow_untagged_only_grp;
+		struct mlx5_flow_group *allow_grp;
 		struct mlx5_flow_group *drop_grp;
 		struct mlx5_flow_handle *drop_rule;
 		struct mlx5_fc *drop_counter;
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [net-next 13/15] net/mlx5: SRIOV, Recreate egress ACL table on config change
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (11 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 12/15] net/mlx5: SRIOV, Remove two unused ingress flow groups Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-03 22:13 ` [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support Saeed Mahameed
  2022-12-03 22:13 ` [net-next 15/15] net/mlx5: SRIOV, Allow ingress tagged packets on VST Saeed Mahameed
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

From: Moshe Shemesh <moshe@nvidia.com>

The egress ACL table has a flow group for cvlan and a flow group for
drop. A downstream patch will also add an svlan flow steering rule and
so will need a flow group for svlan. Recreating the egress ACL table
enables us to keep one group for allowed traffic, either cvlan or
svlan, and one group for drop traffic, similar to the ingress ACL
table.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/esw/acl/egress_lgcy.c  | 82 ++++++++++---------
 .../mellanox/mlx5/core/esw/acl/lgcy.h         |  4 +
 .../ethernet/mellanox/mlx5/core/esw/legacy.c  |  3 +
 3 files changed, 52 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
index 60a73990017c..83ecad5ea80b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
@@ -69,8 +69,8 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 {
 	struct mlx5_flow_destination drop_ctr_dst = {};
 	struct mlx5_flow_destination *dst = NULL;
-	struct mlx5_fc *drop_counter = NULL;
 	struct mlx5_flow_act flow_act = {};
+	struct mlx5_fc *drop_counter;
 	/* The egress acl table contains 2 rules:
 	 * 1)Allow traffic with vlan_tag=vst_vlan_id
 	 * 2)Drop all other traffic.
@@ -79,41 +79,23 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 	int dest_num = 0;
 	int err = 0;
 
-	if (vport->egress.legacy.drop_counter) {
-		drop_counter = vport->egress.legacy.drop_counter;
-	} else if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter)) {
-		drop_counter = mlx5_fc_create(esw->dev, false);
-		if (IS_ERR(drop_counter)) {
-			esw_warn(esw->dev,
-				 "vport[%d] configure egress drop rule counter err(%ld)\n",
-				 vport->vport, PTR_ERR(drop_counter));
-			drop_counter = NULL;
-		}
-		vport->egress.legacy.drop_counter = drop_counter;
-	}
-
-	esw_acl_egress_lgcy_rules_destroy(vport);
-
-	if (!vport->info.vlan && !vport->info.qos) {
-		esw_acl_egress_lgcy_cleanup(esw, vport);
+	esw_acl_egress_lgcy_cleanup(esw, vport);
+	if (!vport->info.vlan && !vport->info.qos)
 		return 0;
-	}
 
-	if (!vport->egress.acl) {
-		vport->egress.acl = esw_acl_table_create(esw, vport,
-							 MLX5_FLOW_NAMESPACE_ESW_EGRESS,
-							 table_size);
-		if (IS_ERR(vport->egress.acl)) {
-			err = PTR_ERR(vport->egress.acl);
-			vport->egress.acl = NULL;
-			goto out;
-		}
-
-		err = esw_acl_egress_lgcy_groups_create(esw, vport);
-		if (err)
-			goto out;
+	vport->egress.acl = esw_acl_table_create(esw, vport,
+						 MLX5_FLOW_NAMESPACE_ESW_EGRESS,
+						 table_size);
+	if (IS_ERR(vport->egress.acl)) {
+		err = PTR_ERR(vport->egress.acl);
+		vport->egress.acl = NULL;
+		goto out;
 	}
 
+	err = esw_acl_egress_lgcy_groups_create(esw, vport);
+	if (err)
+		goto out;
+
 	esw_debug(esw->dev,
 		  "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
 		  vport->vport, vport->info.vlan, vport->info.qos);
@@ -127,6 +109,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 
 	/* Attach egress drop flow counter */
+	drop_counter = vport->egress.legacy.drop_counter;
 	if (drop_counter) {
 		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
 		drop_ctr_dst.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
@@ -157,17 +140,42 @@ void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw,
 				 struct mlx5_vport *vport)
 {
 	if (IS_ERR_OR_NULL(vport->egress.acl))
-		goto clean_drop_counter;
+		return;
 
 	esw_debug(esw->dev, "Destroy vport[%d] E-Switch egress ACL\n", vport->vport);
 
 	esw_acl_egress_lgcy_rules_destroy(vport);
 	esw_acl_egress_lgcy_groups_destroy(vport);
 	esw_acl_egress_table_destroy(vport);
+}
 
-clean_drop_counter:
-	if (vport->egress.legacy.drop_counter) {
-		mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
-		vport->egress.legacy.drop_counter = NULL;
+void esw_acl_egress_lgcy_create_counter(struct mlx5_eswitch *esw,
+					struct mlx5_vport *vport)
+{
+	struct mlx5_fc *counter;
+
+	vport->egress.legacy.drop_counter = NULL;
+
+	if (!MLX5_CAP_ESW_EGRESS_ACL(esw->dev, flow_counter))
+		return;
+
+	counter = mlx5_fc_create(esw->dev, false);
+	if (IS_ERR(counter)) {
+		esw_warn(esw->dev,
+			 "vport[%d] configure egress drop rule counter failed\n",
+			 vport->vport);
+		return;
 	}
+
+	vport->egress.legacy.drop_counter = counter;
+}
+
+void esw_acl_egress_lgcy_destroy_counter(struct mlx5_eswitch *esw,
+					 struct mlx5_vport *vport)
+{
+	if (!vport->egress.legacy.drop_counter)
+		return;
+
+	mlx5_fc_destroy(esw->dev, vport->egress.legacy.drop_counter);
+	vport->egress.legacy.drop_counter = NULL;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
index c4a624ffca43..aa5523eb0abd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/lgcy.h
@@ -9,6 +9,10 @@
 /* Eswitch acl egress external APIs */
 int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
 void esw_acl_egress_lgcy_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+void esw_acl_egress_lgcy_create_counter(struct mlx5_eswitch *esw,
+					struct mlx5_vport *vport);
+void esw_acl_egress_lgcy_destroy_counter(struct mlx5_eswitch *esw,
+					 struct mlx5_vport *vport);
 
 /* Eswitch acl ingress external APIs */
 int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index 97a104668723..bddf1086d771 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -361,6 +361,7 @@ int esw_legacy_vport_acl_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vpor
 	if (ret)
 		goto ingress_err;
 
+	esw_acl_egress_lgcy_create_counter(esw, vport);
 	ret = esw_acl_egress_lgcy_setup(esw, vport);
 	if (ret)
 		goto egress_err;
@@ -368,6 +369,7 @@ int esw_legacy_vport_acl_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vpor
 	return 0;
 
 egress_err:
+	esw_acl_egress_lgcy_destroy_counter(esw, vport);
 	esw_acl_ingress_lgcy_cleanup(esw, vport);
 ingress_err:
 	esw_acl_ingress_lgcy_destroy_counter(esw, vport);
@@ -380,6 +382,7 @@ void esw_legacy_vport_acl_cleanup(struct mlx5_eswitch *esw, struct mlx5_vport *v
 		return;
 
 	esw_acl_egress_lgcy_cleanup(esw, vport);
+	esw_acl_egress_lgcy_destroy_counter(esw, vport);
 	esw_acl_ingress_lgcy_cleanup(esw, vport);
 	esw_acl_ingress_lgcy_destroy_counter(esw, vport);
 }
-- 
2.38.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (12 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 13/15] net/mlx5: SRIOV, Recreate egress ACL table on config change Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  2022-12-07  4:34   ` Jakub Kicinski
  2022-12-03 22:13 ` [net-next 15/15] net/mlx5: SRIOV, Allow ingress tagged packets on VST Saeed Mahameed
  14 siblings, 1 reply; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

From: Moshe Shemesh <moshe@nvidia.com>

Implement 802.1ad VST when the device supports the push vlan and pop
vlan steering actions on vport ACLs. If the device doesn't support
these steering actions, fall back to setting the eswitch vport
context, which supports only 802.1q.

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  5 +-
 .../mellanox/mlx5/core/esw/acl/egress_lgcy.c  | 11 +++--
 .../mellanox/mlx5/core/esw/acl/egress_ofld.c  |  5 +-
 .../mellanox/mlx5/core/esw/acl/helper.c       | 20 ++++++--
 .../mellanox/mlx5/core/esw/acl/helper.h       |  5 +-
 .../mellanox/mlx5/core/esw/acl/ingress_lgcy.c | 47 ++++++++++++++-----
 .../ethernet/mellanox/mlx5/core/esw/legacy.c  |  6 +--
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 28 ++++++++---
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 21 +++++++--
 .../mellanox/mlx5/core/eswitch_offloads.c     | 10 ++--
 10 files changed, 113 insertions(+), 45 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 8d36e2de53a9..7911edefc622 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4440,11 +4440,8 @@ static int mlx5e_set_vf_vlan(struct net_device *dev, int vf, u16 vlan, u8 qos,
 	struct mlx5e_priv *priv = netdev_priv(dev);
 	struct mlx5_core_dev *mdev = priv->mdev;
 
-	if (vlan_proto != htons(ETH_P_8021Q))
-		return -EPROTONOSUPPORT;
-
 	return mlx5_eswitch_set_vport_vlan(mdev->priv.eswitch, vf + 1,
-					   vlan, qos);
+					   vlan, qos, ntohs(vlan_proto));
 }
 
 static int mlx5e_set_vf_spoofchk(struct net_device *dev, int vf, bool setting)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
index 83ecad5ea80b..e9f8b2619b46 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c
@@ -24,7 +24,7 @@ static int esw_acl_egress_lgcy_groups_create(struct mlx5_eswitch *esw,
 	u32 *flow_group_in;
 	int err = 0;
 
-	err = esw_acl_egress_vlan_grp_create(esw, vport);
+	err = esw_acl_egress_vlan_grp_create(esw, vport, vport->info.vlan_proto);
 	if (err)
 		return err;
 
@@ -67,6 +67,7 @@ static void esw_acl_egress_lgcy_groups_destroy(struct mlx5_vport *vport)
 int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 			      struct mlx5_vport *vport)
 {
+	enum esw_vst_mode vst_mode = esw_get_vst_mode(esw);
 	struct mlx5_flow_destination drop_ctr_dst = {};
 	struct mlx5_flow_destination *dst = NULL;
 	struct mlx5_flow_act flow_act = {};
@@ -77,6 +78,7 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 	 */
 	int table_size = 2;
 	int dest_num = 0;
+	int actions_flag;
 	int err = 0;
 
 	esw_acl_egress_lgcy_cleanup(esw, vport);
@@ -101,8 +103,11 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw,
 		  vport->vport, vport->info.vlan, vport->info.qos);
 
 	/* Allowed vlan rule */
-	err = esw_egress_acl_vlan_create(esw, vport, NULL, vport->info.vlan,
-					 MLX5_FLOW_CONTEXT_ACTION_ALLOW);
+	actions_flag = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
+	if (vst_mode == ESW_VST_MODE_STEERING)
+		actions_flag |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP;
+	err = esw_egress_acl_vlan_create(esw, vport, NULL, vport->info.vlan_proto,
+					 vport->info.vlan, actions_flag);
 	if (err)
 		goto out;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
index 2e504c7461c6..848edd9b9af8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c
@@ -73,7 +73,8 @@ static int esw_acl_egress_ofld_rules_create(struct mlx5_eswitch *esw,
 			  MLX5_FLOW_CONTEXT_ACTION_ALLOW;
 
 		/* prio tag vlan rule - pop it so vport receives untagged packets */
-		err = esw_egress_acl_vlan_create(esw, vport, fwd_dest, 0, action);
+		err = esw_egress_acl_vlan_create(esw, vport, fwd_dest,
+						 ETH_P_8021Q, 0, action);
 		if (err)
 			goto prio_err;
 	}
@@ -109,7 +110,7 @@ static int esw_acl_egress_ofld_groups_create(struct mlx5_eswitch *esw,
 	int ret = 0;
 
 	if (MLX5_CAP_GEN(esw->dev, prio_tag_required)) {
-		ret = esw_acl_egress_vlan_grp_create(esw, vport);
+		ret = esw_acl_egress_vlan_grp_create(esw, vport, ETH_P_8021Q);
 		if (ret)
 			return ret;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.c
index 45b839116212..69ee780a7a1b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.c
@@ -48,7 +48,7 @@ esw_acl_table_create(struct mlx5_eswitch *esw, struct mlx5_vport *vport, int ns,
 int esw_egress_acl_vlan_create(struct mlx5_eswitch *esw,
 			       struct mlx5_vport *vport,
 			       struct mlx5_flow_destination *fwd_dest,
-			       u16 vlan_id, u32 flow_action)
+			       u16 vlan_proto, u16 vlan_id, u32 flow_action)
 {
 	struct mlx5_flow_act flow_act = {};
 	struct mlx5_flow_spec *spec;
@@ -61,8 +61,14 @@ int esw_egress_acl_vlan_create(struct mlx5_eswitch *esw,
 	if (!spec)
 		return -ENOMEM;
 
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
-	MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+	if (vlan_proto == ETH_P_8021AD) {
+		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.svlan_tag);
+		MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.svlan_tag);
+	} else {
+		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
+		MLX5_SET_TO_ONES(fte_match_param, spec->match_value, outer_headers.cvlan_tag);
+	}
+	/* Both outer_headers.cvlan_tag and outer_headers.svlan_tag match only first vlan */
 	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.first_vid);
 	MLX5_SET(fte_match_param, spec->match_value, outer_headers.first_vid, vlan_id);
 
@@ -91,7 +97,8 @@ void esw_acl_egress_vlan_destroy(struct mlx5_vport *vport)
 	}
 }
 
-int esw_acl_egress_vlan_grp_create(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
+int esw_acl_egress_vlan_grp_create(struct mlx5_eswitch *esw,
+				   struct mlx5_vport *vport, u16 vlan_proto)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
 	struct mlx5_flow_group *vlan_grp;
@@ -107,7 +114,10 @@ int esw_acl_egress_vlan_grp_create(struct mlx5_eswitch *esw, struct mlx5_vport *
 		 match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in,
 				      flow_group_in, match_criteria);
-	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
+	if (vlan_proto == ETH_P_8021AD)
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.svlan_tag);
+	else
+		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
 	MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
 	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.h
index a47063fab57e..f41af663251e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/helper.h
@@ -14,9 +14,10 @@ esw_acl_table_create(struct mlx5_eswitch *esw, struct mlx5_vport *vport, int ns,
 void esw_acl_egress_table_destroy(struct mlx5_vport *vport);
 int esw_egress_acl_vlan_create(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
 			       struct mlx5_flow_destination *fwd_dest,
-			       u16 vlan_id, u32 flow_action);
+			       u16 vlan_proto, u16 vlan_id, u32 flow_action);
 void esw_acl_egress_vlan_destroy(struct mlx5_vport *vport);
-int esw_acl_egress_vlan_grp_create(struct mlx5_eswitch *esw, struct mlx5_vport *vport);
+int esw_acl_egress_vlan_grp_create(struct mlx5_eswitch *esw,
+				   struct mlx5_vport *vport, u16 vlan_proto);
 void esw_acl_egress_vlan_grp_destroy(struct mlx5_vport *vport);
 
 /* Ingress acl helper functions */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
index 0b37edb9490d..eb68d6e6d5aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
@@ -18,10 +18,12 @@ static void esw_acl_ingress_lgcy_rules_destroy(struct mlx5_vport *vport)
 static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 					      struct mlx5_vport *vport)
 {
+	enum esw_vst_mode vst_mode = esw_get_vst_mode(esw);
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
 	struct mlx5_core_dev *dev = esw->dev;
 	struct mlx5_flow_group *g;
 	void *match_criteria;
+	bool push_on_any_pkt;
 	u32 *flow_group_in;
 	int err;
 
@@ -31,9 +33,11 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
 
-	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
-		 MLX5_MATCH_OUTER_HEADERS);
-	if (vport->info.vlan || vport->info.qos)
+	push_on_any_pkt = (vst_mode == ESW_VST_MODE_STEERING && !vport->info.spoofchk);
+	if (!push_on_any_pkt)
+		MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+			 MLX5_MATCH_OUTER_HEADERS);
+	if (vst_mode == ESW_VST_MODE_BASIC && (vport->info.vlan || vport->info.qos))
 		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.cvlan_tag);
 	if (vport->info.spoofchk) {
 		MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
@@ -50,6 +54,8 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 		goto allow_err;
 	}
 	vport->ingress.legacy.allow_grp = g;
+	if (push_on_any_pkt)
+		goto out;
 
 	memset(flow_group_in, 0, inlen);
 	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
@@ -63,6 +69,7 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 		goto drop_err;
 	}
 	vport->ingress.legacy.drop_grp = g;
+out:
 	kvfree(flow_group_in);
 	return 0;
 
@@ -91,6 +98,7 @@ static void esw_acl_ingress_lgcy_groups_destroy(struct mlx5_vport *vport)
 int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 			       struct mlx5_vport *vport)
 {
+	enum esw_vst_mode vst_mode = esw_get_vst_mode(esw);
 	struct mlx5_flow_destination drop_ctr_dst = {};
 	struct mlx5_flow_destination *dst = NULL;
 	struct mlx5_flow_act flow_act = {};
@@ -100,6 +108,7 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 	 * 1)Allowed traffic according to tagging and spoofcheck settings
 	 * 2)Drop all other traffic.
 	 */
+	bool push_on_any_pkt;
 	int table_size = 2;
 	int dest_num = 0;
 	int err = 0;
@@ -123,8 +132,8 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 		goto out;
 
 	esw_debug(esw->dev,
-		  "vport[%d] configure ingress rules, vlan(%d) qos(%d)\n",
-		  vport->vport, vport->info.vlan, vport->info.qos);
+		  "vport[%d] configure ingress rules, vlan(%d) qos(%d) vst_mode (%d)\n",
+		  vport->vport, vport->info.vlan, vport->info.qos, vst_mode);
 
 	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
 	if (!spec) {
@@ -132,9 +141,21 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 		goto out;
 	}
 
-	if (vport->info.vlan || vport->info.qos)
-		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
-				 outer_headers.cvlan_tag);
+	push_on_any_pkt = (vst_mode == ESW_VST_MODE_STEERING && !vport->info.spoofchk);
+	if (!push_on_any_pkt)
+		spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+
+	/* Create ingress allow rule */
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
+	if (vst_mode == ESW_VST_MODE_STEERING && (vport->info.vlan || vport->info.qos)) {
+		flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
+		flow_act.vlan[0].prio = vport->info.qos;
+		flow_act.vlan[0].vid = vport->info.vlan;
+		flow_act.vlan[0].ethtype = vport->info.vlan_proto;
+	}
+
+	if (vst_mode == ESW_VST_MODE_BASIC && (vport->info.vlan || vport->info.qos))
+		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.cvlan_tag);
 
 	if (vport->info.spoofchk) {
 		MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
@@ -147,9 +168,6 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 		ether_addr_copy(smac_v, vport->info.mac);
 	}
 
-	/* Create ingress allow rule */
-	spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_ALLOW;
 	vport->ingress.allow_rule = mlx5_add_flow_rules(vport->ingress.acl, spec,
 							&flow_act, NULL, 0);
 	if (IS_ERR(vport->ingress.allow_rule)) {
@@ -161,6 +179,10 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 		goto out;
 	}
 
+	if (push_on_any_pkt)
+		goto out;
+
+	memset(spec, 0, sizeof(*spec));
 	memset(&flow_act, 0, sizeof(flow_act));
 	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
 	/* Attach drop flow counter */
@@ -187,7 +209,8 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 	return 0;
 
 out:
-	esw_acl_ingress_lgcy_cleanup(esw, vport);
+	if (err)
+		esw_acl_ingress_lgcy_cleanup(esw, vport);
 	kvfree(spec);
 	return err;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
index bddf1086d771..6650ac72c801 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
@@ -431,8 +431,8 @@ int mlx5_esw_query_vport_drop_stats(struct mlx5_core_dev *dev,
 	return err;
 }
 
-int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
-				u16 vport, u16 vlan, u8 qos)
+int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan,
+				u8 qos, u16 vlan_proto)
 {
 	u8 set_flags = 0;
 	int err = 0;
@@ -452,7 +452,7 @@ int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
 		goto unlock;
 	}
 
-	err = __mlx5_eswitch_set_vport_vlan(esw, vport, vlan, qos, set_flags);
+	err = __mlx5_eswitch_set_vport_vlan(esw, vport, vlan, qos, vlan_proto, set_flags);
 
 unlock:
 	mutex_unlock(&esw->state_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 374e3fbdc2cf..9d4599a1b8a6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -774,6 +774,7 @@ static void esw_vport_cleanup_acl(struct mlx5_eswitch *esw,
 
 static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
 {
+	enum esw_vst_mode vst_mode = esw_get_vst_mode(esw);
 	u16 vport_num = vport->vport;
 	int flags;
 	int err;
@@ -800,8 +801,9 @@ static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
 
 	flags = (vport->info.vlan || vport->info.qos) ?
 		SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
-	modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan,
-			       vport->info.qos, flags);
+	if (vst_mode == ESW_VST_MODE_BASIC)
+		modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan,
+				       vport->info.qos, flags);
 
 	return 0;
 }
@@ -990,6 +992,7 @@ static void mlx5_eswitch_clear_vf_vports_info(struct mlx5_eswitch *esw)
 		memset(&vport->qos, 0, sizeof(vport->qos));
 		memset(&vport->info, 0, sizeof(vport->info));
 		vport->info.link_state = MLX5_VPORT_ADMIN_STATE_AUTO;
+		vport->info.vlan_proto = ETH_P_8021Q;
 	}
 }
 
@@ -1790,6 +1793,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 	ivi->linkstate = evport->info.link_state;
 	ivi->vlan = evport->info.vlan;
 	ivi->qos = evport->info.qos;
+	ivi->vlan_proto = htons(evport->info.vlan_proto);
 	ivi->spoofchk = evport->info.spoofchk;
 	ivi->trusted = evport->info.trusted;
 	if (evport->qos.enabled) {
@@ -1801,23 +1805,33 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 	return 0;
 }
 
-int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
-				  u16 vport, u16 vlan, u8 qos, u8 set_flags)
+int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan,
+				  u8 qos, u16 proto, u8 set_flags)
 {
 	struct mlx5_vport *evport = mlx5_eswitch_get_vport(esw, vport);
+	enum esw_vst_mode vst_mode;
 	int err = 0;
 
 	if (IS_ERR(evport))
 		return PTR_ERR(evport);
 	if (vlan > 4095 || qos > 7)
 		return -EINVAL;
+	if (proto != ETH_P_8021Q && proto != ETH_P_8021AD)
+		return -EINVAL;
 
-	err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags);
-	if (err)
-		return err;
+	vst_mode = esw_get_vst_mode(esw);
+	if (proto == ETH_P_8021AD && vst_mode != ESW_VST_MODE_STEERING)
+		return -EPROTONOSUPPORT;
+
+	if (esw->mode == MLX5_ESWITCH_OFFLOADS || vst_mode == ESW_VST_MODE_BASIC) {
+		err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags);
+		if (err)
+			return err;
+	}
 
 	evport->info.vlan = vlan;
 	evport->info.qos = qos;
+	evport->info.vlan_proto = proto;
 	if (evport->enabled && esw->mode == MLX5_ESWITCH_LEGACY) {
 		err = esw_acl_ingress_lgcy_setup(esw, evport);
 		if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index b7779826e725..6f368c0442bf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -149,6 +149,7 @@ struct mlx5_vport_info {
 	u64                     node_guid;
 	int                     link_state;
 	u8                      qos;
+	u16			vlan_proto;
 	u8                      spoofchk: 1;
 	u8                      trusted: 1;
 };
@@ -371,7 +372,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
 int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
 				 u16 vport, int link_state);
 int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
-				u16 vport, u16 vlan, u8 qos);
+				u16 vport, u16 vlan, u8 qos, u16 vlan_proto);
 int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
 				    u16 vport, bool spoofchk);
 int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
@@ -513,8 +514,22 @@ int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw,
 				 struct mlx5_flow_attr *attr);
 int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
 				 struct mlx5_flow_attr *attr);
-int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
-				  u16 vport, u16 vlan, u8 qos, u8 set_flags);
+int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan,
+				  u8 qos, u16 proto, u8 set_flags);
+
+enum esw_vst_mode {
+	ESW_VST_MODE_BASIC,
+	ESW_VST_MODE_STEERING,
+};
+
+static inline enum esw_vst_mode esw_get_vst_mode(struct mlx5_eswitch *esw)
+{
+	if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, pop_vlan) &&
+	    MLX5_CAP_ESW_INGRESS_ACL(esw->dev, push_vlan))
+		return ESW_VST_MODE_STEERING;
+
+	return ESW_VST_MODE_BASIC;
+}
 
 static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev,
 						       u8 vlan_depth)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index db1e282900aa..ab8465e75c39 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -824,7 +824,8 @@ static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val)
 		if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
 			continue;
 
-		err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0, val);
+		err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0,
+						    ETH_P_8021Q, val);
 		if (err)
 			goto out;
 	}
@@ -939,7 +940,8 @@ int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw,
 			goto skip_set_push;
 
 		err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport, esw_attr->vlan_vid[0],
-						    0, SET_VLAN_INSERT | SET_VLAN_STRIP);
+						    0, ETH_P_8021Q,
+						    SET_VLAN_INSERT | SET_VLAN_STRIP);
 		if (err)
 			goto out;
 		vport->vlan = esw_attr->vlan_vid[0];
@@ -992,8 +994,8 @@ int mlx5_eswitch_del_vlan_action(struct mlx5_eswitch *esw,
 			goto skip_unset_push;
 
 		vport->vlan = 0;
-		err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport,
-						    0, 0, SET_VLAN_STRIP);
+		err = __mlx5_eswitch_set_vport_vlan(esw, vport->vport, 0, 0,
+						    ETH_P_8021Q, SET_VLAN_STRIP);
 		if (err)
 			goto out;
 	}
-- 
2.38.1



* [net-next 15/15] net/mlx5: SRIOV, Allow ingress tagged packets on VST
  2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
                   ` (13 preceding siblings ...)
  2022-12-03 22:13 ` [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support Saeed Mahameed
@ 2022-12-03 22:13 ` Saeed Mahameed
  14 siblings, 0 replies; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-03 22:13 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
  Cc: Saeed Mahameed, netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

From: Moshe Shemesh <moshe@nvidia.com>

Depending on the eswitch capability, VST mode behavior is changed to insert
the cvlan tag whether or not the packet already carries a vlan tag. The
previous VST mode behavior was to insert the cvlan tag only if the packet
carried no vlan tag.

This change enables sending packets with two cvlan tags, where the outer
one is added by the eswitch.
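
A hypothetical standalone model of this (plain C; the insert-mode values
match the device.h hunk below, while the helper name is illustrative) shows
when the eswitch cvlan tag is inserted for a given packet:

```c
#include <assert.h>
#include <stdbool.h>

/* vport_cvlan_insert values, as in the device.h hunk below */
enum {
	MLX5_VPORT_CVLAN_NO_INSERT             = 0x0,
	MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN  = 0x1,	/* old behavior */
	MLX5_VPORT_CVLAN_INSERT_OR_OVERWRITE   = 0x2,
	MLX5_VPORT_CVLAN_INSERT_ALWAYS         = 0x3,	/* new behavior */
};

/* Does the eswitch insert its cvlan tag for this packet? */
static bool vst_tag_inserted(int insert_mode, bool pkt_already_tagged)
{
	switch (insert_mode) {
	case MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN:
		return !pkt_already_tagged;	/* tagged packets are skipped */
	case MLX5_VPORT_CVLAN_INSERT_OR_OVERWRITE:	/* replaces an existing tag */
	case MLX5_VPORT_CVLAN_INSERT_ALWAYS:	/* adds an outer tag: QinQ */
		return true;
	default:
		return false;
	}
}
```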

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 .../mellanox/mlx5/core/esw/acl/ingress_lgcy.c |  4 +--
 .../net/ethernet/mellanox/mlx5/core/eswitch.c | 26 ++++++++++++-------
 .../net/ethernet/mellanox/mlx5/core/eswitch.h | 12 +++++++--
 include/linux/mlx5/device.h                   |  7 +++++
 include/linux/mlx5/mlx5_ifc.h                 |  3 ++-
 5 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
index eb68d6e6d5aa..5c10e487497a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
@@ -33,7 +33,7 @@ static int esw_acl_ingress_lgcy_groups_create(struct mlx5_eswitch *esw,
 
 	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
 
-	push_on_any_pkt = (vst_mode == ESW_VST_MODE_STEERING && !vport->info.spoofchk);
+	push_on_any_pkt = (vst_mode != ESW_VST_MODE_BASIC && !vport->info.spoofchk);
 	if (!push_on_any_pkt)
 		MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
 			 MLX5_MATCH_OUTER_HEADERS);
@@ -141,7 +141,7 @@ int esw_acl_ingress_lgcy_setup(struct mlx5_eswitch *esw,
 		goto out;
 	}
 
-	push_on_any_pkt = (vst_mode == ESW_VST_MODE_STEERING && !vport->info.spoofchk);
+	push_on_any_pkt = (vst_mode != ESW_VST_MODE_BASIC && !vport->info.spoofchk);
 	if (!push_on_any_pkt)
 		spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 9d4599a1b8a6..caa03e13a28b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -145,7 +145,8 @@ int mlx5_eswitch_modify_esw_vport_context(struct mlx5_core_dev *dev, u16 vport,
 }
 
 static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
-				  u16 vlan, u8 qos, u8 set_flags)
+				  u16 vlan, u8 qos, u8 set_flags,
+				  enum esw_vst_mode vst_mode)
 {
 	u32 in[MLX5_ST_SZ_DW(modify_esw_vport_context_in)] = {};
 
@@ -161,10 +162,17 @@ static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
 			 esw_vport_context.vport_cvlan_strip, 1);
 
 	if (set_flags & SET_VLAN_INSERT) {
-		/* insert only if no vlan in packet */
-		MLX5_SET(modify_esw_vport_context_in, in,
-			 esw_vport_context.vport_cvlan_insert, 1);
-
+		if (vst_mode == ESW_VST_MODE_INSERT_ALWAYS) {
+			/* insert either if vlan exist in packet or not */
+			MLX5_SET(modify_esw_vport_context_in, in,
+				 esw_vport_context.vport_cvlan_insert,
+				 MLX5_VPORT_CVLAN_INSERT_ALWAYS);
+		} else {
+			/* insert only if no vlan in packet */
+			MLX5_SET(modify_esw_vport_context_in, in,
+				 esw_vport_context.vport_cvlan_insert,
+				 MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN);
+		}
 		MLX5_SET(modify_esw_vport_context_in, in,
 			 esw_vport_context.cvlan_pcp, qos);
 		MLX5_SET(modify_esw_vport_context_in, in,
@@ -801,9 +809,9 @@ static int esw_vport_setup(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
 
 	flags = (vport->info.vlan || vport->info.qos) ?
 		SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
-	if (vst_mode == ESW_VST_MODE_BASIC)
+	if (vst_mode != ESW_VST_MODE_STEERING)
 		modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan,
-				       vport->info.qos, flags);
+				       vport->info.qos, flags, vst_mode);
 
 	return 0;
 }
@@ -1823,8 +1831,8 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan,
 	if (proto == ETH_P_8021AD && vst_mode != ESW_VST_MODE_STEERING)
 		return -EPROTONOSUPPORT;
 
-	if (esw->mode == MLX5_ESWITCH_OFFLOADS || vst_mode == ESW_VST_MODE_BASIC) {
-		err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags);
+	if (esw->mode == MLX5_ESWITCH_OFFLOADS || vst_mode != ESW_VST_MODE_STEERING) {
+		err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set_flags, vst_mode);
 		if (err)
 			return err;
 	}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 6f368c0442bf..15adb6f20c0d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -520,15 +520,23 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, u16 vport, u16 vlan,
 enum esw_vst_mode {
 	ESW_VST_MODE_BASIC,
 	ESW_VST_MODE_STEERING,
+	ESW_VST_MODE_INSERT_ALWAYS,
 };
 
 static inline enum esw_vst_mode esw_get_vst_mode(struct mlx5_eswitch *esw)
 {
+	/*  vst mode precedence:
+	 *  if vst steering mode is supported use it
+	 *  if not, look for vst vport insert always support
+	 *  if both not supported, we use basic vst, can't support QinQ
+	 */
 	if (MLX5_CAP_ESW_EGRESS_ACL(esw->dev, pop_vlan) &&
 	    MLX5_CAP_ESW_INGRESS_ACL(esw->dev, push_vlan))
 		return ESW_VST_MODE_STEERING;
-
-	return ESW_VST_MODE_BASIC;
+	else if (MLX5_CAP_ESW(esw->dev, vport_cvlan_insert_always))
+		return ESW_VST_MODE_INSERT_ALWAYS;
+	else
+		return ESW_VST_MODE_BASIC;
 }
 
 static inline bool mlx5_eswitch_vlan_actions_supported(struct mlx5_core_dev *dev,
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index 5fe5d198b57a..84ec364e0751 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -1090,6 +1090,13 @@ enum {
 	MLX5_VPORT_ADMIN_STATE_AUTO  = 0x2,
 };
 
+enum {
+	MLX5_VPORT_CVLAN_NO_INSERT             = 0x0,
+	MLX5_VPORT_CVLAN_INSERT_WHEN_NO_CVLAN  = 0x1,
+	MLX5_VPORT_CVLAN_INSERT_OR_OVERWRITE   = 0x2,
+	MLX5_VPORT_CVLAN_INSERT_ALWAYS         = 0x3,
+};
+
 enum {
 	MLX5_L3_PROT_TYPE_IPV4		= 0,
 	MLX5_L3_PROT_TYPE_IPV6		= 1,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 5a4e914e2a6f..e45bdec73baf 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -907,7 +907,8 @@ struct mlx5_ifc_e_switch_cap_bits {
 	u8         vport_svlan_insert[0x1];
 	u8         vport_cvlan_insert_if_not_exist[0x1];
 	u8         vport_cvlan_insert_overwrite[0x1];
-	u8         reserved_at_5[0x2];
+	u8         reserved_at_5[0x1];
+	u8         vport_cvlan_insert_always[0x1];
 	u8         esw_shared_ingress_acl[0x1];
 	u8         esw_uplink_ingress_acl[0x1];
 	u8         root_ft_on_other_esw[0x1];
-- 
2.38.1



* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-03 22:13 ` [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support Saeed Mahameed
@ 2022-12-07  4:34   ` Jakub Kicinski
  2022-12-07  5:20     ` Saeed Mahameed
  0 siblings, 1 reply; 25+ messages in thread
From: Jakub Kicinski @ 2022-12-07  4:34 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: David S. Miller, Paolo Abeni, Eric Dumazet, Saeed Mahameed,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On Sat,  3 Dec 2022 14:13:36 -0800 Saeed Mahameed wrote:
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index 8d36e2de53a9..7911edefc622 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -4440,11 +4440,8 @@ static int mlx5e_set_vf_vlan(struct net_device *dev, int vf, u16 vlan, u8 qos,
>  	struct mlx5e_priv *priv = netdev_priv(dev);
>  	struct mlx5_core_dev *mdev = priv->mdev;
>  
> -	if (vlan_proto != htons(ETH_P_8021Q))
> -		return -EPROTONOSUPPORT;

I can't take this with clear conscience :( I started nacking any new use
of the legacy VF NDOs. You already have bridging offload implemented,
why can't bridging be used?

>  	return mlx5_eswitch_set_vport_vlan(mdev->priv.eswitch, vf + 1,
> -					   vlan, qos);
> +					   vlan, qos, ntohs(vlan_proto));
>  }
>  


* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-07  4:34   ` Jakub Kicinski
@ 2022-12-07  5:20     ` Saeed Mahameed
  2022-12-07 17:25       ` Jakub Kicinski
  0 siblings, 1 reply; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-07  5:20 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On 06 Dec 20:34, Jakub Kicinski wrote:
>On Sat,  3 Dec 2022 14:13:36 -0800 Saeed Mahameed wrote:
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> index 8d36e2de53a9..7911edefc622 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> @@ -4440,11 +4440,8 @@ static int mlx5e_set_vf_vlan(struct net_device *dev, int vf, u16 vlan, u8 qos,
>>  	struct mlx5e_priv *priv = netdev_priv(dev);
>>  	struct mlx5_core_dev *mdev = priv->mdev;
>>
>> -	if (vlan_proto != htons(ETH_P_8021Q))
>> -		return -EPROTONOSUPPORT;
>
>I can't take this with clear conscience :( I started nacking any new use
>of the legacy VF NDOs. You already have bridging offload implemented,
>why can bridging be used?
>

I really tried, many customers aren't ready to make this leap yet.

I understand your point, my goal is to move as many customers to use
upstream and step away from out of tree drivers, if it makes it any
easier you can look at this as filling a small gap in mlx5 which will
help me bring more users to the upstream driver, after all the feature
is already implemented in mlx5, this is just a small gap we previously
missed to upstream.



* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-07  5:20     ` Saeed Mahameed
@ 2022-12-07 17:25       ` Jakub Kicinski
  2022-12-08  8:28         ` Saeed Mahameed
  0 siblings, 1 reply; 25+ messages in thread
From: Jakub Kicinski @ 2022-12-07 17:25 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On Tue, 6 Dec 2022 21:20:54 -0800 Saeed Mahameed wrote:
>> I can't take this with clear conscience :( I started nacking any new use
>> of the legacy VF NDOs. You already have bridging offload implemented,
>> why can't bridging be used?
> 
> I really tried, many customers aren't ready to make this leap yet.
> 
> I understand your point, my goal is to move as many customers to use
> upstream and step away from out of tree drivers, if it makes it any
> easier you can look at this as filling a small gap in mlx5 which will
> help me bring more users to the upstream driver, after all the feature
> is already implemented in mlx5, this is just a small gap we previously
> missed to upstream.

I recently nacked a new driver from AMD which was using legacy NDO, and
they can be somewhat rightfully miffed that those who got there earlier
can keep their support. If we let the older drivers also extend the
support the fairness will suffer even more.

We need to draw the line somewhere, so what do you propose as our
policy? Assuming we want people to move to new APIs and be fair 
to new vs old drivers. 


* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-07 17:25       ` Jakub Kicinski
@ 2022-12-08  8:28         ` Saeed Mahameed
  2022-12-09  1:04           ` Jakub Kicinski
  0 siblings, 1 reply; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-08  8:28 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On 07 Dec 09:25, Jakub Kicinski wrote:
>On Tue, 6 Dec 2022 21:20:54 -0800 Saeed Mahameed wrote:
>>> I can't take this with clear conscience :( I started nacking any new use
>>> of the legacy VF NDOs. You already have bridging offload implemented,
>>> why can bridging be used?
>>
>> I really tried, many customers aren't ready to make this leap yet.
>>
>> I understand your point, my goal is to move as many customers to use
>> upstream and step away from out of tree drivers, if it makes it any
>> easier you can look at this as filling a small gap in mlx5 which will
>> help me bring more users to the upstream driver, after all the feature
>> is already implemented in mlx5, this is just a small gap we previously
>> missed to upstream.
>
>I recently nacked a new driver from AMD which was using legacy NDO, and
>they can be somewhat rightfully miffed that those who got there earlier
>can keep their support. If we let the older drivers also extend the
>support the fairness will suffer even more.
>
>We need to draw the line somewhere, so what do you propose as our
>policy? Assuming we want people to move to new APIs and be fair
>to new vs old drivers.

You put me in a tough spot!
You know where we stand at NVIDIA on all of this: we are advocating only
for switchdev mode, and we are 100% behind your decision not to extend
legacy support.

So if the part of this series that adds support for 802.1ad falls under
that policy, then I must agree with you. I will drop it.

But another part of this series fixes a critical bug where we drop VF
tagged packets when VST is ON. We will strip that part out and send it
as a bug fix; it is really critical: mlx5 VST basically doesn't work
upstream for tagged traffic.




* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-08  8:28         ` Saeed Mahameed
@ 2022-12-09  1:04           ` Jakub Kicinski
  2022-12-09  1:57             ` Saeed Mahameed
  0 siblings, 1 reply; 25+ messages in thread
From: Jakub Kicinski @ 2022-12-09  1:04 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On Thu, 8 Dec 2022 00:28:38 -0800 Saeed Mahameed wrote:
> So if the part in this series of adding support for 802.1ad, falls under that
> policy, then i must agree with you. I will drop it.

Part of me was hoping there's a silver bullet or a compromise,
and I was just not seeing it... :)

> But another part in this series is fixing a critical bug were we drop VF tagged
> packets when vst in ON, we will strip that part out and send it as a
> bug fix, it is really critical, mlx5 vst basically doesn't work upstream for
> tagged traffic.

What's the setup in that case?  My immediate thought is why would 
VST be on if it's only needed for .1ad and that's not used?


* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-09  1:04           ` Jakub Kicinski
@ 2022-12-09  1:57             ` Saeed Mahameed
  2022-12-09  2:04               ` Jakub Kicinski
  0 siblings, 1 reply; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-09  1:57 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On 08 Dec 17:04, Jakub Kicinski wrote:
>On Thu, 8 Dec 2022 00:28:38 -0800 Saeed Mahameed wrote:
>> So if the part in this series of adding support for 802.1ad, falls under that
>> policy, then i must agree with you. I will drop it.
>
>Part of me was hoping there's a silver bullet or a compromise,
>and I was not seeing it.. :)
>
>> But another part in this series is fixing a critical bug were we drop VF tagged
>> packets when vst in ON, we will strip that part out and send it as a
>> bug fix, it is really critical, mlx5 vst basically doesn't work upstream for
>> tagged traffic.
>
>What's the setup in that case?  My immediate thought is why would
>VST be on if it's only needed for .1ad and that's not used?

So the whole thing started from finding these gaps in our out-of-tree
driver. There's the bug fix I will explain below, and the addition of
.1ad; both were found missing upstream when we convinced a customer to
switch to the upstream/inbox driver.

VST .1q and VST .1ad are totally separate scenarios and use cases for
the customers.

Currently upstream mlx5 only supports VST for VLAN proto .1q,
but it's buggy when traffic from the guest comes with a VLAN tag:
depending on the HW/FW version, either the packets get dropped or
the guest VLANs get overridden with the VST host VLAN. This is due to
a wrong interpretation of the HW steering rules in the initial driver
implementation. In both cases it's a bug, and the latter is even worse.
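To make the two failure modes concrete, here is a toy model in plain
Python. All names are illustrative, and a frame is reduced to its list of
VLAN tags; the real behavior lives in hardware steering rules, not driver
code.

```python
# Toy model of the buggy VST ingress behaviors described above.
# A frame is modeled as a list of VLAN IDs, outermost tag first.

VST_VLAN = 100  # hypothetical VLAN configured by the host admin for this VF

def vst_push_drop(tags):
    """Some HW/FW combinations: an already-tagged guest frame is dropped."""
    if tags:
        return None  # packet dropped
    return [VST_VLAN]

def vst_push_override(tags):
    """Other HW/FW combinations: the guest's outer VLAN is overwritten."""
    if tags:
        return [VST_VLAN] + tags[1:]  # guest's outer VLAN is silently lost
    return [VST_VLAN]

guest_frame = [5]  # guest sent traffic tagged with VLAN 5
print(vst_push_drop(guest_frame))      # frame dropped
print(vst_push_override(guest_frame))  # VLAN 5 replaced by the VST VLAN
```

In both modeled cases the guest's own tag never survives, which matches
the report that mlx5 VST does not work upstream for tagged traffic.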




* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-09  1:57             ` Saeed Mahameed
@ 2022-12-09  2:04               ` Jakub Kicinski
  2022-12-09  5:05                 ` Saeed Mahameed
  0 siblings, 1 reply; 25+ messages in thread
From: Jakub Kicinski @ 2022-12-09  2:04 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On Thu, 8 Dec 2022 17:57:57 -0800 Saeed Mahameed wrote:
> So the whole thing started from finding these gaps in our out of tree 
> driver. there's the bug fix i will explain below, and the addition of .1ad
> both were found missing upstream when we convinced a customer to switch
> to upstream/inbox driver.
> 
> vst .1q and vst .1ad are both totally separate scenarios and use cases for
> the customers.
> 
> Currently upstream mlx5 only support VST for vlan proto .1q, 
> but it's buggy when traffic from the guest comes with a vlan tag, 
> depending on the HW/FW version, either the packets get dropped or
> the guest vlans get overridden with the VST host vlan, this is due to
> wrong interpretation of the hw steering rules in the initial driver
> implementation. in both cases it's a bug and the latter is even worse.

I see, but what's the fix? Uniformly drop?
Start stacking with just .1q?
Auto-select the ethertype for .1ad if there's already a tag?


* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-09  2:04               ` Jakub Kicinski
@ 2022-12-09  5:05                 ` Saeed Mahameed
  2022-12-09 16:34                   ` Jakub Kicinski
  0 siblings, 1 reply; 25+ messages in thread
From: Saeed Mahameed @ 2022-12-09  5:05 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On 08 Dec 18:04, Jakub Kicinski wrote:
>On Thu, 8 Dec 2022 17:57:57 -0800 Saeed Mahameed wrote:
>> So the whole thing started from finding these gaps in our out of tree
>> driver. there's the bug fix i will explain below, and the addition of .1ad
>> both were found missing upstream when we convinced a customer to switch
>> to upstream/inbox driver.
>>
>> vst .1q and vst .1ad are both totally separate scenarios and use cases for
>> the customers.
>>
>> Currently upstream mlx5 only support VST for vlan proto .1q,
>> but it's buggy when traffic from the guest comes with a vlan tag,
>> depending on the HW/FW version, either the packets get dropped or
>> the guest vlans get overridden with the VST host vlan, this is due to
>> wrong interpretation of the hw steering rules in the initial driver
>> implementation. in both cases it's a bug and the latter is even worse.
>
>I see, but that's the fix? Uniformly drop?
>Start stacking with just .1q?
>Auto-select the ethtype for .1ad if there's already a tag?

Push the VST .1q tag, keep the original tags intact.
Per policy we won't have .1ad support :( ...



* Re: [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support
  2022-12-09  5:05                 ` Saeed Mahameed
@ 2022-12-09 16:34                   ` Jakub Kicinski
  0 siblings, 0 replies; 25+ messages in thread
From: Jakub Kicinski @ 2022-12-09 16:34 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Saeed Mahameed, David S. Miller, Paolo Abeni, Eric Dumazet,
	netdev, Tariq Toukan, Moshe Shemesh, Mark Bloch

On Thu, 8 Dec 2022 21:05:23 -0800 Saeed Mahameed wrote:
> >I see, but that's the fix? Uniformly drop?
> >Start stacking with just .1q?
> >Auto-select the ethtype for .1ad if there's already a tag?  
> 
> push the vst .1q tag. keep original tags intact.
> per policy we won't have .1ad support :( .. 

Yup, stacking another .1q sounds okay to me.
Not exactly standard-compliant, but I bet there's already a device out
there which does exactly this, so meh...
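As a rough sketch of the agreed behavior (plain Python with hypothetical
names; the actual mechanism is a hardware VLAN push, not driver code),
the fix always stacks the host's VST tag outermost and leaves any guest
tags alone:

```python
# Sketch of the agreed VST ingress fix: push the host's VST .1q tag as the
# new outermost tag and keep any guest tags intact (QinQ-style stacking).
# A frame is modeled as a list of VLAN IDs, outermost tag first.

VST_VLAN = 100  # hypothetical VLAN configured by the host admin for this VF

def vst_push(tags):
    """Push the VST tag outermost; guest tags, if any, are preserved."""
    return [VST_VLAN] + list(tags)

print(vst_push([]))   # untagged guest traffic just gains the VST tag
print(vst_push([5]))  # guest VLAN 5 is kept as the inner tag: [100, 5]
```

Untagged and tagged guest traffic are handled uniformly, which is what
makes this preferable to the drop or override behaviors discussed above.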



Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-03 22:13 [pull request][net-next 00/15] mlx5 updates 2022-12-03 Saeed Mahameed
2022-12-03 22:13 ` [net-next 01/15] net/mlx5e: E-Switch, handle flow attribute with no destinations Saeed Mahameed
2022-12-03 22:13 ` [net-next 02/15] net/mlx5: fs, assert null dest pointer when dest_num is 0 Saeed Mahameed
2022-12-03 22:13 ` [net-next 03/15] net/mlx5e: TC, reuse flow attribute post parser processing Saeed Mahameed
2022-12-03 22:13 ` [net-next 04/15] net/mlx5e: TC, add terminating actions Saeed Mahameed
2022-12-03 22:13 ` [net-next 05/15] net/mlx5e: TC, validate action list per attribute Saeed Mahameed
2022-12-03 22:13 ` [net-next 06/15] net/mlx5e: TC, set control params for branching actions Saeed Mahameed
2022-12-03 22:13 ` [net-next 07/15] net/mlx5e: TC, initialize branch flow attributes Saeed Mahameed
2022-12-03 22:13 ` [net-next 08/15] net/mlx5e: TC, initialize branching action with target attr Saeed Mahameed
2022-12-03 22:13 ` [net-next 09/15] net/mlx5e: TC, rename post_meter actions Saeed Mahameed
2022-12-03 22:13 ` [net-next 10/15] net/mlx5e: TC, init post meter rules with branching attributes Saeed Mahameed
2022-12-03 22:13 ` [net-next 11/15] net/mlx5e: TC, allow meter jump control action Saeed Mahameed
2022-12-03 22:13 ` [net-next 12/15] net/mlx5: SRIOV, Remove two unused ingress flow group Saeed Mahameed
2022-12-03 22:13 ` [net-next 13/15] net/mlx5: SRIOV, Recreate egress ACL table on config change Saeed Mahameed
2022-12-03 22:13 ` [net-next 14/15] net/mlx5: SRIOV, Add 802.1ad VST support Saeed Mahameed
2022-12-07  4:34   ` Jakub Kicinski
2022-12-07  5:20     ` Saeed Mahameed
2022-12-07 17:25       ` Jakub Kicinski
2022-12-08  8:28         ` Saeed Mahameed
2022-12-09  1:04           ` Jakub Kicinski
2022-12-09  1:57             ` Saeed Mahameed
2022-12-09  2:04               ` Jakub Kicinski
2022-12-09  5:05                 ` Saeed Mahameed
2022-12-09 16:34                   ` Jakub Kicinski
2022-12-03 22:13 ` [net-next 15/15] net/mlx5: SRIOV, Allow ingress tagged packets on VST Saeed Mahameed
