* [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03
@ 2022-07-03 20:54 Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 1/5] net/mlx5: Expose the ability to point to any UID from shared UID Saeed Mahameed
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed; +Cc: Jason Gunthorpe, netdev, linux-rdma

From: Saeed Mahameed <saeedm@nvidia.com>

Mark Bloch Says:
================
Expose steering anchor

Expose a steering anchor per priority to allow users to re-inject
packets back into the default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
a user can use to re-inject packets at a specific priority.

An FTE (flow table entry) can then be created with that flow table ID
as its destination.

When a packet is taken into an RDMA-controlled steering domain (such
as software steering) there may be a need to insert the packet back
into the default NIC pipeline. This method exposes a flow table ID to
the user that can be used as a destination in a flow table entry.

With this new method, priorities that are exposed to users via
MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.

Because user-created flow tables (via RDMA DEVX) are created with a
non-zero UID, it is impossible to point to a NIC core flow table (core
driver flow tables are created with a UID value of zero) from
userspace. Therefore, create the flow tables that are exposed to users
with the shared UID; this allows users to point to the default NIC
flow tables.

Steering loops are prevented at the FW level, as FW enforces that no
flow table at level X can point to a table at a level lower than X.

================
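
For illustration, below is a minimal userspace sketch of the intended
flow. It assumes an rdma-core mlx5dv wrapper around
MLX5_IB_METHOD_STEERING_ANCHOR_CREATE (mlx5dv_create_steering_anchor(),
struct mlx5dv_steering_anchor_attr and MLX5DV_FLOW_TABLE_TYPE_NIC_RX);
those userspace names are assumptions and are not part of this series,
which only adds the kernel side.

/* Sketch: create a steering anchor on NIC RX priority 3 and read back
 * the flow table ID, which can then be used as the destination of a
 * flow table entry created via software steering.  The mlx5dv_* names
 * below are assumed rdma-core wrappers, not added by this series.
 */
#include <stdio.h>
#include <infiniband/mlx5dv.h>

static int create_anchor(struct ibv_context *ctx, uint32_t *ft_id)
{
	struct mlx5dv_steering_anchor_attr attr = {
		.ft_type  = MLX5DV_FLOW_TABLE_TYPE_NIC_RX,
		.priority = 3,
	};
	struct mlx5dv_steering_anchor *anchor;

	anchor = mlx5dv_create_steering_anchor(ctx, &attr);
	if (!anchor)
		return -1;

	*ft_id = anchor->id;	/* usable as an FTE destination */
	printf("steering anchor flow table id: 0x%x\n", *ft_id);
	return 0;
}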

Mark Bloch (5):
  net/mlx5: Expose the ability to point to any UID from shared UID
  net/mlx5: fs, expose flow table ID to users
  net/mlx5: fs, allow flow table creation with a UID
  RDMA/mlx5: Refactor get flow table function
  RDMA/mlx5: Expose steering anchor to userspace

 drivers/infiniband/hw/mlx5/fs.c               | 159 ++++++++++++++++--
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   6 +
 .../net/ethernet/mellanox/mlx5/core/fs_cmd.c  |  16 +-
 .../net/ethernet/mellanox/mlx5/core/fs_cmd.h  |   2 +-
 .../net/ethernet/mellanox/mlx5/core/fs_core.c |   8 +-
 .../mellanox/mlx5/core/steering/dr_cmd.c      |   1 +
 .../mellanox/mlx5/core/steering/dr_table.c    |   8 +-
 .../mellanox/mlx5/core/steering/dr_types.h    |   1 +
 .../mellanox/mlx5/core/steering/fs_dr.c       |   7 +-
 .../mellanox/mlx5/core/steering/mlx5dr.h      |   3 +-
 include/linux/mlx5/fs.h                       |   2 +
 include/linux/mlx5/mlx5_ifc.h                 |   6 +-
 include/uapi/rdma/mlx5_user_ioctl_cmds.h      |  17 ++
 13 files changed, 204 insertions(+), 32 deletions(-)

-- 
2.36.1



* [PATCH mlx5-next 1/5] net/mlx5: Expose the ability to point to any UID from shared UID
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
@ 2022-07-03 20:54 ` Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 2/5] net/mlx5: fs, expose flow table ID to users Saeed Mahameed
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed
  Cc: Jason Gunthorpe, netdev, linux-rdma, Mark Bloch, Maor Gottlieb

From: Mark Bloch <mbloch@nvidia.com>

Expose the shared_object_to_user_object_allowed capability. When set,
an object created with the shared UID can point to objects of any UID.
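
For context, a later patch in this series (patch 5/5) gates the new
steering anchor uAPI on this capability with a one-line MLX5_CAP_GEN()
check; the helper from that patch is reproduced here for reference:

static bool mlx5_ib_shared_ft_allowed(struct ib_device *device)
{
	struct mlx5_ib_dev *dev = to_mdev(device);

	return MLX5_CAP_GEN(dev->mdev, shared_object_to_user_object_allowed);
}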

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 include/linux/mlx5/mlx5_ifc.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 8e87eb47f9dc..9321d774e2d8 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1371,7 +1371,9 @@ enum {
 };
 
 struct mlx5_ifc_cmd_hca_cap_bits {
-	u8         reserved_at_0[0x1f];
+	u8         reserved_at_0[0x10];
+	u8         shared_object_to_user_object_allowed[0x1];
+	u8         reserved_at_13[0xe];
 	u8         vhca_resource_manager[0x1];
 
 	u8         hca_cap_2[0x1];
@@ -8507,7 +8509,7 @@ struct mlx5_ifc_create_flow_table_out_bits {
 
 struct mlx5_ifc_create_flow_table_in_bits {
 	u8         opcode[0x10];
-	u8         reserved_at_10[0x10];
+	u8         uid[0x10];
 
 	u8         reserved_at_20[0x10];
 	u8         op_mod[0x10];
-- 
2.36.1



* [PATCH mlx5-next 2/5] net/mlx5: fs, expose flow table ID to users
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 1/5] net/mlx5: Expose the ability to point to any UID from shared UID Saeed Mahameed
@ 2022-07-03 20:54 ` Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 3/5] net/mlx5: fs, allow flow table creation with a UID Saeed Mahameed
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed
  Cc: Jason Gunthorpe, netdev, linux-rdma, Mark Bloch, Maor Gottlieb

From: Mark Bloch <mbloch@nvidia.com>

Expose the flow table ID to users. This will be used by downstream
patches to allow creating steering rules that point to a flow table ID.
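
A minimal sketch of the intended use (hypothetical caller; 'ns' and
'ft_attr' are assumed to be an already set up flow namespace and
attribute struct):

	struct mlx5_flow_table *ft;
	u32 ft_id;

	ft = mlx5_create_flow_table(ns, &ft_attr);
	if (IS_ERR(ft))
		return PTR_ERR(ft);

	/* HW table ID, later usable as a flow rule destination */
	ft_id = mlx5_flow_table_id(ft);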

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 6 ++++++
 include/linux/mlx5/fs.h                           | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 14187e50e2f9..1da3dc7c95fa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -1195,6 +1195,12 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
 }
 EXPORT_SYMBOL(mlx5_create_flow_table);
 
+u32 mlx5_flow_table_id(struct mlx5_flow_table *ft)
+{
+	return ft->id;
+}
+EXPORT_SYMBOL(mlx5_flow_table_id);
+
 struct mlx5_flow_table *
 mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
 			     struct mlx5_flow_table_attr *ft_attr, u16 vport)
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index ece3e35622d7..eee07d416b56 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -315,4 +315,5 @@ struct mlx5_pkt_reformat *mlx5_packet_reformat_alloc(struct mlx5_core_dev *dev,
 void mlx5_packet_reformat_dealloc(struct mlx5_core_dev *dev,
 				  struct mlx5_pkt_reformat *reformat);
 
+u32 mlx5_flow_table_id(struct mlx5_flow_table *ft);
 #endif
-- 
2.36.1



* [PATCH mlx5-next 3/5] net/mlx5: fs, allow flow table creation with a UID
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 1/5] net/mlx5: Expose the ability to point to any UID from shared UID Saeed Mahameed
  2022-07-03 20:54 ` [PATCH mlx5-next 2/5] net/mlx5: fs, expose flow table ID to users Saeed Mahameed
@ 2022-07-03 20:54 ` Saeed Mahameed
  2022-07-03 20:54 ` [PATCH rdma-next 4/5] RDMA/mlx5: Refactor get flow table function Saeed Mahameed
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed
  Cc: Jason Gunthorpe, netdev, linux-rdma, Mark Bloch, Alex Vesker

From: Mark Bloch <mbloch@nvidia.com>

Add a UID field to the flow table attributes to allow creating flow
tables with a UID other than the default (zero).
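
A minimal sketch of a caller using the new field (hypothetical; 'ns' is
an existing flow namespace, and the shared UID shown is the value used
later in this series):

	struct mlx5_flow_table_attr ft_attr = {};
	struct mlx5_flow_table *ft;

	ft_attr.prio    = 0;
	ft_attr.max_fte = 1;
	/* non-default owner; 0 would mean a core-driver-owned table */
	ft_attr.uid     = MLX5_SHARED_RESOURCE_UID;

	ft = mlx5_create_flow_table(ns, &ft_attr);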

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c | 16 ++++++++++------
 drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h |  2 +-
 .../net/ethernet/mellanox/mlx5/core/fs_core.c    |  2 +-
 .../mellanox/mlx5/core/steering/dr_cmd.c         |  1 +
 .../mellanox/mlx5/core/steering/dr_table.c       |  8 +++++---
 .../mellanox/mlx5/core/steering/dr_types.h       |  1 +
 .../ethernet/mellanox/mlx5/core/steering/fs_dr.c |  7 ++++---
 .../mellanox/mlx5/core/steering/mlx5dr.h         |  3 ++-
 include/linux/mlx5/fs.h                          |  1 +
 9 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
index 735dc805dad7..e735e19461ba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -50,10 +50,12 @@ static int mlx5_cmd_stub_update_root_ft(struct mlx5_flow_root_namespace *ns,
 
 static int mlx5_cmd_stub_create_flow_table(struct mlx5_flow_root_namespace *ns,
 					   struct mlx5_flow_table *ft,
-					   unsigned int size,
+					   struct mlx5_flow_table_attr *ft_attr,
 					   struct mlx5_flow_table *next_ft)
 {
-	ft->max_fte = size ? roundup_pow_of_two(size) : 1;
+	int max_fte = ft_attr->max_fte;
+
+	ft->max_fte = max_fte ? roundup_pow_of_two(max_fte) : 1;
 
 	return 0;
 }
@@ -258,7 +260,7 @@ static int mlx5_cmd_update_root_ft(struct mlx5_flow_root_namespace *ns,
 
 static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
 				      struct mlx5_flow_table *ft,
-				      unsigned int size,
+				      struct mlx5_flow_table_attr *ft_attr,
 				      struct mlx5_flow_table *next_ft)
 {
 	int en_encap = !!(ft->flags & MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT);
@@ -267,17 +269,19 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
 	u32 out[MLX5_ST_SZ_DW(create_flow_table_out)] = {};
 	u32 in[MLX5_ST_SZ_DW(create_flow_table_in)] = {};
 	struct mlx5_core_dev *dev = ns->dev;
+	unsigned int size;
 	int err;
 
-	if (size != POOL_NEXT_SIZE)
-		size = roundup_pow_of_two(size);
-	size = mlx5_ft_pool_get_avail_sz(dev, ft->type, size);
+	if (ft_attr->max_fte != POOL_NEXT_SIZE)
+		size = roundup_pow_of_two(ft_attr->max_fte);
+	size = mlx5_ft_pool_get_avail_sz(dev, ft->type, ft_attr->max_fte);
 	if (!size)
 		return -ENOSPC;
 
 	MLX5_SET(create_flow_table_in, in, opcode,
 		 MLX5_CMD_OP_CREATE_FLOW_TABLE);
 
+	MLX5_SET(create_flow_table_in, in, uid, ft_attr->uid);
 	MLX5_SET(create_flow_table_in, in, table_type, ft->type);
 	MLX5_SET(create_flow_table_in, in, flow_table_context.level, ft->level);
 	MLX5_SET(create_flow_table_in, in, flow_table_context.log_size, size ? ilog2(size) : 0);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
index 274004e80f03..8ef4254b9ea1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
@@ -38,7 +38,7 @@
 struct mlx5_flow_cmds {
 	int (*create_flow_table)(struct mlx5_flow_root_namespace *ns,
 				 struct mlx5_flow_table *ft,
-				 unsigned int size,
+				 struct mlx5_flow_table_attr *ft_attr,
 				 struct mlx5_flow_table *next_ft);
 	int (*destroy_flow_table)(struct mlx5_flow_root_namespace *ns,
 				  struct mlx5_flow_table *ft);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 1da3dc7c95fa..35d89edb1bcd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -1155,7 +1155,7 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
 			      find_next_chained_ft(fs_prio);
 	ft->def_miss_action = ns->def_miss_action;
 	ft->ns = ns;
-	err = root->cmds->create_flow_table(root, ft, ft_attr->max_fte, next_ft);
+	err = root->cmds->create_flow_table(root, ft, ft_attr, next_ft);
 	if (err)
 		goto free_ft;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
index 223c8741b7ae..16d65fe4f654 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c
@@ -439,6 +439,7 @@ int mlx5dr_cmd_create_flow_table(struct mlx5_core_dev *mdev,
 
 	MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE);
 	MLX5_SET(create_flow_table_in, in, table_type, attr->table_type);
+	MLX5_SET(create_flow_table_in, in, uid, attr->uid);
 
 	ft_mdev = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context);
 	MLX5_SET(flow_table_context, ft_mdev, termination_table, attr->term_tbl);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
index e5f6412baea9..31d443dd8386 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c
@@ -214,7 +214,7 @@ static int dr_table_destroy_sw_owned_tbl(struct mlx5dr_table *tbl)
 					     tbl->table_type);
 }
 
-static int dr_table_create_sw_owned_tbl(struct mlx5dr_table *tbl)
+static int dr_table_create_sw_owned_tbl(struct mlx5dr_table *tbl, u16 uid)
 {
 	bool en_encap = !!(tbl->flags & MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT);
 	bool en_decap = !!(tbl->flags & MLX5_FLOW_TABLE_TUNNEL_EN_DECAP);
@@ -236,6 +236,7 @@ static int dr_table_create_sw_owned_tbl(struct mlx5dr_table *tbl)
 	ft_attr.sw_owner = true;
 	ft_attr.decap_en = en_decap;
 	ft_attr.reformat_en = en_encap;
+	ft_attr.uid = uid;
 
 	ret = mlx5dr_cmd_create_flow_table(tbl->dmn->mdev, &ft_attr,
 					   NULL, &tbl->table_id);
@@ -243,7 +244,8 @@ static int dr_table_create_sw_owned_tbl(struct mlx5dr_table *tbl)
 	return ret;
 }
 
-struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_domain *dmn, u32 level, u32 flags)
+struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_domain *dmn, u32 level,
+					 u32 flags, u16 uid)
 {
 	struct mlx5dr_table *tbl;
 	int ret;
@@ -263,7 +265,7 @@ struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_domain *dmn, u32 level, u
 	if (ret)
 		goto free_tbl;
 
-	ret = dr_table_create_sw_owned_tbl(tbl);
+	ret = dr_table_create_sw_owned_tbl(tbl, uid);
 	if (ret)
 		goto uninit_tbl;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
index 98320e3945ad..50b0dd4fb4a9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
@@ -1200,6 +1200,7 @@ struct mlx5dr_cmd_query_flow_table_details {
 
 struct mlx5dr_cmd_create_flow_table_attr {
 	u32 table_type;
+	u16 uid;
 	u64 icm_addr_rx;
 	u64 icm_addr_tx;
 	u8 level;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
index 6a9abba92df6..c30ed8e18458 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
@@ -62,7 +62,7 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns,
 
 static int mlx5_cmd_dr_create_flow_table(struct mlx5_flow_root_namespace *ns,
 					 struct mlx5_flow_table *ft,
-					 unsigned int size,
+					 struct mlx5_flow_table_attr *ft_attr,
 					 struct mlx5_flow_table *next_ft)
 {
 	struct mlx5dr_table *tbl;
@@ -71,7 +71,7 @@ static int mlx5_cmd_dr_create_flow_table(struct mlx5_flow_root_namespace *ns,
 
 	if (mlx5_dr_is_fw_table(ft->flags))
 		return mlx5_fs_cmd_get_fw_cmds()->create_flow_table(ns, ft,
-								    size,
+								    ft_attr,
 								    next_ft);
 	flags = ft->flags;
 	/* turn off encap/decap if not supported for sw-str by fw */
@@ -79,7 +79,8 @@ static int mlx5_cmd_dr_create_flow_table(struct mlx5_flow_root_namespace *ns,
 		flags = ft->flags & ~(MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT |
 				      MLX5_FLOW_TABLE_TUNNEL_EN_DECAP);
 
-	tbl = mlx5dr_table_create(ns->fs_dr_domain.dr_domain, ft->level, flags);
+	tbl = mlx5dr_table_create(ns->fs_dr_domain.dr_domain, ft->level, flags,
+				  ft_attr->uid);
 	if (!tbl) {
 		mlx5_core_err(ns->dev, "Failed creating dr flow_table\n");
 		return -EINVAL;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
index 7626c85643b1..3bb14860b36d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
@@ -51,7 +51,8 @@ void mlx5dr_domain_set_peer(struct mlx5dr_domain *dmn,
 			    struct mlx5dr_domain *peer_dmn);
 
 struct mlx5dr_table *
-mlx5dr_table_create(struct mlx5dr_domain *domain, u32 level, u32 flags);
+mlx5dr_table_create(struct mlx5dr_domain *domain, u32 level, u32 flags,
+		    u16 uid);
 
 struct mlx5dr_table *
 mlx5dr_table_get_from_fs_ft(struct mlx5_flow_table *ft);
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index eee07d416b56..8e73c377da2c 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -178,6 +178,7 @@ struct mlx5_flow_table_attr {
 	int max_fte;
 	u32 level;
 	u32 flags;
+	u16 uid;
 	struct mlx5_flow_table *next_ft;
 
 	struct {
-- 
2.36.1



* [PATCH rdma-next 4/5] RDMA/mlx5: Refactor get flow table function
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
                   ` (2 preceding siblings ...)
  2022-07-03 20:54 ` [PATCH mlx5-next 3/5] net/mlx5: fs, allow flow table creation with a UID Saeed Mahameed
@ 2022-07-03 20:54 ` Saeed Mahameed
  2022-07-03 20:54 ` [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace Saeed Mahameed
  2022-07-19 17:54 ` [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Leon Romanovsky
  5 siblings, 0 replies; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed
  Cc: Jason Gunthorpe, netdev, linux-rdma, Mark Bloch, Maor Gottlieb

From: Mark Bloch <mbloch@nvidia.com>

_get_flow_table() requires the entire matcher to be passed, while all
it needs is the priority and namespace type. Pass the priority and
namespace type directly instead.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/infiniband/hw/mlx5/fs.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
index 39ffb363ba0c..b1402aea29c1 100644
--- a/drivers/infiniband/hw/mlx5/fs.c
+++ b/drivers/infiniband/hw/mlx5/fs.c
@@ -1407,8 +1407,8 @@ static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
 }
 
 static struct mlx5_ib_flow_prio *
-_get_flow_table(struct mlx5_ib_dev *dev,
-		struct mlx5_ib_flow_matcher *fs_matcher,
+_get_flow_table(struct mlx5_ib_dev *dev, u16 user_priority,
+		enum mlx5_flow_namespace_type ns_type,
 		bool mcast)
 {
 	struct mlx5_flow_namespace *ns = NULL;
@@ -1421,11 +1421,11 @@ _get_flow_table(struct mlx5_ib_dev *dev,
 	if (mcast)
 		priority = MLX5_IB_FLOW_MCAST_PRIO;
 	else
-		priority = ib_prio_to_core_prio(fs_matcher->priority, false);
+		priority = ib_prio_to_core_prio(user_priority, false);
 
 	esw_encap = mlx5_eswitch_get_encap_mode(dev->mdev) !=
 		DEVLINK_ESWITCH_ENCAP_MODE_NONE;
-	switch (fs_matcher->ns_type) {
+	switch (ns_type) {
 	case MLX5_FLOW_NAMESPACE_BYPASS:
 		max_table_size = BIT(
 			MLX5_CAP_FLOWTABLE_NIC_RX(dev->mdev, log_max_ft_size));
@@ -1452,17 +1452,17 @@ _get_flow_table(struct mlx5_ib_dev *dev,
 					       reformat_l3_tunnel_to_l2) &&
 		    esw_encap)
 			flags |= MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
-		priority = fs_matcher->priority;
+		priority = user_priority;
 		break;
 	case MLX5_FLOW_NAMESPACE_RDMA_RX:
 		max_table_size = BIT(
 			MLX5_CAP_FLOWTABLE_RDMA_RX(dev->mdev, log_max_ft_size));
-		priority = fs_matcher->priority;
+		priority = user_priority;
 		break;
 	case MLX5_FLOW_NAMESPACE_RDMA_TX:
 		max_table_size = BIT(
 			MLX5_CAP_FLOWTABLE_RDMA_TX(dev->mdev, log_max_ft_size));
-		priority = fs_matcher->priority;
+		priority = user_priority;
 		break;
 	default:
 		break;
@@ -1470,11 +1470,11 @@ _get_flow_table(struct mlx5_ib_dev *dev,
 
 	max_table_size = min_t(int, max_table_size, MLX5_FS_MAX_ENTRIES);
 
-	ns = mlx5_get_flow_namespace(dev->mdev, fs_matcher->ns_type);
+	ns = mlx5_get_flow_namespace(dev->mdev, ns_type);
 	if (!ns)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	switch (fs_matcher->ns_type) {
+	switch (ns_type) {
 	case MLX5_FLOW_NAMESPACE_BYPASS:
 		prio = &dev->flow_db->prios[priority];
 		break;
@@ -1618,7 +1618,8 @@ static struct mlx5_ib_flow_handler *raw_fs_rule_add(
 	mcast = raw_fs_is_multicast(fs_matcher, cmd_in);
 	mutex_lock(&dev->flow_db->lock);
 
-	ft_prio = _get_flow_table(dev, fs_matcher, mcast);
+	ft_prio = _get_flow_table(dev, fs_matcher->priority,
+				  fs_matcher->ns_type, mcast);
 	if (IS_ERR(ft_prio)) {
 		err = PTR_ERR(ft_prio);
 		goto unlock;
-- 
2.36.1



* [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
                   ` (3 preceding siblings ...)
  2022-07-03 20:54 ` [PATCH rdma-next 4/5] RDMA/mlx5: Refactor get flow table function Saeed Mahameed
@ 2022-07-03 20:54 ` Saeed Mahameed
  2022-07-13 22:31   ` Saeed Mahameed
  2022-07-19 17:54 ` [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Leon Romanovsky
  5 siblings, 1 reply; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-03 20:54 UTC (permalink / raw)
  To: Leon Romanovsky, Saeed Mahameed
  Cc: Jason Gunthorpe, netdev, linux-rdma, Mark Bloch, Yishai Hadas

From: Mark Bloch <mbloch@nvidia.com>

Expose a steering anchor per priority to allow users to re-inject
packets back into the default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
a user can use to re-inject packets at a specific priority.

An FTE (flow table entry) can then be created with that flow table ID
as its destination.

When a packet is taken into an RDMA-controlled steering domain (such
as software steering) there may be a need to insert the packet back
into the default NIC pipeline. This method exposes a flow table ID to
the user that can be used as a destination in a flow table entry.

With this new method, priorities that are exposed to users via
MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.

Because user-created flow tables (via RDMA DEVX) are created with a
non-zero UID, it is impossible to point to a NIC core flow table (core
driver flow tables are created with a UID value of zero) from
userspace. Therefore, create the flow tables that are exposed to users
with the shared UID; this allows users to point to the default NIC
flow tables.

Steering loops are prevented at the FW level, as FW enforces that no
flow table at level X can point to a table at a level lower than X.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/infiniband/hw/mlx5/fs.c          | 138 ++++++++++++++++++++++-
 drivers/infiniband/hw/mlx5/mlx5_ib.h     |   6 +
 include/uapi/rdma/mlx5_user_ioctl_cmds.h |  17 +++
 3 files changed, 156 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
index b1402aea29c1..691d00c89f33 100644
--- a/drivers/infiniband/hw/mlx5/fs.c
+++ b/drivers/infiniband/hw/mlx5/fs.c
@@ -679,7 +679,15 @@ enum flow_table_type {
 #define MLX5_FS_MAX_TYPES	 6
 #define MLX5_FS_MAX_ENTRIES	 BIT(16)
 
-static struct mlx5_ib_flow_prio *_get_prio(struct mlx5_flow_namespace *ns,
+static bool mlx5_ib_shared_ft_allowed(struct ib_device *device)
+{
+	struct mlx5_ib_dev *dev = to_mdev(device);
+
+	return MLX5_CAP_GEN(dev->mdev, shared_object_to_user_object_allowed);
+}
+
+static struct mlx5_ib_flow_prio *_get_prio(struct mlx5_ib_dev *dev,
+					   struct mlx5_flow_namespace *ns,
 					   struct mlx5_ib_flow_prio *prio,
 					   int priority,
 					   int num_entries, int num_groups,
@@ -688,6 +696,8 @@ static struct mlx5_ib_flow_prio *_get_prio(struct mlx5_flow_namespace *ns,
 	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_flow_table *ft;
 
+	if (mlx5_ib_shared_ft_allowed(&dev->ib_dev))
+		ft_attr.uid = MLX5_SHARED_RESOURCE_UID;
 	ft_attr.prio = priority;
 	ft_attr.max_fte = num_entries;
 	ft_attr.flags = flags;
@@ -784,8 +794,8 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
 
 	ft = prio->flow_table;
 	if (!ft)
-		return _get_prio(ns, prio, priority, max_table_size, num_groups,
-				 flags);
+		return _get_prio(dev, ns, prio, priority, max_table_size,
+				 num_groups, flags);
 
 	return prio;
 }
@@ -927,7 +937,7 @@ int mlx5_ib_fs_add_op_fc(struct mlx5_ib_dev *dev, u32 port_num,
 
 	prio = &dev->flow_db->opfcs[type];
 	if (!prio->flow_table) {
-		prio = _get_prio(ns, prio, priority,
+		prio = _get_prio(dev, ns, prio, priority,
 				 dev->num_ports * MAX_OPFC_RULES, 1, 0);
 		if (IS_ERR(prio)) {
 			err = PTR_ERR(prio);
@@ -1499,7 +1509,7 @@ _get_flow_table(struct mlx5_ib_dev *dev, u16 user_priority,
 	if (prio->flow_table)
 		return prio;
 
-	return _get_prio(ns, prio, priority, max_table_size,
+	return _get_prio(dev, ns, prio, priority, max_table_size,
 			 MLX5_FS_MAX_TYPES, flags);
 }
 
@@ -2016,6 +2026,23 @@ static int flow_matcher_cleanup(struct ib_uobject *uobject,
 	return 0;
 }
 
+static int steering_anchor_cleanup(struct ib_uobject *uobject,
+				   enum rdma_remove_reason why,
+				   struct uverbs_attr_bundle *attrs)
+{
+	struct mlx5_ib_steering_anchor *obj = uobject->object;
+
+	if (atomic_read(&obj->usecnt))
+		return -EBUSY;
+
+	mutex_lock(&obj->dev->flow_db->lock);
+	put_flow_table(obj->dev, obj->ft_prio, true);
+	mutex_unlock(&obj->dev->flow_db->lock);
+
+	kfree(obj);
+	return 0;
+}
+
 static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs,
 			      struct mlx5_ib_flow_matcher *obj)
 {
@@ -2122,6 +2149,75 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_FLOW_MATCHER_CREATE)(
 	return err;
 }
 
+static int UVERBS_HANDLER(MLX5_IB_METHOD_STEERING_ANCHOR_CREATE)(
+	struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj = uverbs_attr_get_uobject(
+		attrs, MLX5_IB_ATTR_STEERING_ANCHOR_CREATE_HANDLE);
+	struct mlx5_ib_dev *dev = mlx5_udata_to_mdev(&attrs->driver_udata);
+	enum mlx5_ib_uapi_flow_table_type ib_uapi_ft_type;
+	enum mlx5_flow_namespace_type ns_type;
+	struct mlx5_ib_steering_anchor *obj;
+	struct mlx5_ib_flow_prio *ft_prio;
+	u16 priority;
+	u32 ft_id;
+	int err;
+
+	if (!capable(CAP_NET_RAW))
+		return -EPERM;
+
+	err = uverbs_get_const(&ib_uapi_ft_type, attrs,
+			       MLX5_IB_ATTR_STEERING_ANCHOR_FT_TYPE);
+	if (err)
+		return err;
+
+	err = mlx5_ib_ft_type_to_namespace(ib_uapi_ft_type, &ns_type);
+	if (err)
+		return err;
+
+	err = uverbs_copy_from(&priority, attrs,
+			       MLX5_IB_ATTR_STEERING_ANCHOR_PRIORITY);
+	if (err)
+		return err;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return -ENOMEM;
+
+	mutex_lock(&dev->flow_db->lock);
+	ft_prio = _get_flow_table(dev, priority, ns_type, 0);
+	if (IS_ERR(ft_prio)) {
+		mutex_unlock(&dev->flow_db->lock);
+		err = PTR_ERR(ft_prio);
+		goto free_obj;
+	}
+
+	ft_prio->refcount++;
+	ft_id = mlx5_flow_table_id(ft_prio->flow_table);
+	mutex_unlock(&dev->flow_db->lock);
+
+	err = uverbs_copy_to(attrs, MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID,
+			     &ft_id, sizeof(ft_id));
+	if (err)
+		goto put_flow_table;
+
+	uobj->object = obj;
+	obj->dev = dev;
+	obj->ft_prio = ft_prio;
+	atomic_set(&obj->usecnt, 0);
+
+	return 0;
+
+put_flow_table:
+	mutex_lock(&dev->flow_db->lock);
+	put_flow_table(dev, ft_prio, true);
+	mutex_unlock(&dev->flow_db->lock);
+free_obj:
+	kfree(obj);
+
+	return err;
+}
+
 static struct ib_flow_action *
 mlx5_ib_create_modify_header(struct mlx5_ib_dev *dev,
 			     enum mlx5_ib_uapi_flow_table_type ft_type,
@@ -2478,6 +2574,35 @@ DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_FLOW_MATCHER,
 			    &UVERBS_METHOD(MLX5_IB_METHOD_FLOW_MATCHER_CREATE),
 			    &UVERBS_METHOD(MLX5_IB_METHOD_FLOW_MATCHER_DESTROY));
 
+DECLARE_UVERBS_NAMED_METHOD(
+	MLX5_IB_METHOD_STEERING_ANCHOR_CREATE,
+	UVERBS_ATTR_IDR(MLX5_IB_ATTR_STEERING_ANCHOR_CREATE_HANDLE,
+			MLX5_IB_OBJECT_STEERING_ANCHOR,
+			UVERBS_ACCESS_NEW,
+			UA_MANDATORY),
+	UVERBS_ATTR_CONST_IN(MLX5_IB_ATTR_STEERING_ANCHOR_FT_TYPE,
+			     enum mlx5_ib_uapi_flow_table_type,
+			     UA_MANDATORY),
+	UVERBS_ATTR_PTR_IN(MLX5_IB_ATTR_STEERING_ANCHOR_PRIORITY,
+			   UVERBS_ATTR_TYPE(u16),
+			   UA_MANDATORY),
+	UVERBS_ATTR_PTR_IN(MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID,
+			   UVERBS_ATTR_TYPE(u32),
+			   UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(
+	MLX5_IB_METHOD_STEERING_ANCHOR_DESTROY,
+	UVERBS_ATTR_IDR(MLX5_IB_ATTR_STEERING_ANCHOR_DESTROY_HANDLE,
+			MLX5_IB_OBJECT_STEERING_ANCHOR,
+			UVERBS_ACCESS_DESTROY,
+			UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(
+	MLX5_IB_OBJECT_STEERING_ANCHOR,
+	UVERBS_TYPE_ALLOC_IDR(steering_anchor_cleanup),
+	&UVERBS_METHOD(MLX5_IB_METHOD_STEERING_ANCHOR_CREATE),
+	&UVERBS_METHOD(MLX5_IB_METHOD_STEERING_ANCHOR_DESTROY));
+
 const struct uapi_definition mlx5_ib_flow_defs[] = {
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(
 		MLX5_IB_OBJECT_FLOW_MATCHER),
@@ -2486,6 +2611,9 @@ const struct uapi_definition mlx5_ib_flow_defs[] = {
 		&mlx5_ib_fs),
 	UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION,
 				&mlx5_ib_flow_actions),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(
+		MLX5_IB_OBJECT_STEERING_ANCHOR,
+		UAPI_DEF_IS_OBJ_SUPPORTED(mlx5_ib_shared_ft_allowed)),
 	{},
 };
 
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 998b67509a53..c067db25fadd 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -259,6 +259,12 @@ struct mlx5_ib_flow_matcher {
 	u8			match_criteria_enable;
 };
 
+struct mlx5_ib_steering_anchor {
+	struct mlx5_ib_flow_prio *ft_prio;
+	struct mlx5_ib_dev *dev;
+	atomic_t usecnt;
+};
+
 struct mlx5_ib_pp {
 	u16 index;
 	struct mlx5_core_dev *mdev;
diff --git a/include/uapi/rdma/mlx5_user_ioctl_cmds.h b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
index e539c84d63f1..3bee490eb585 100644
--- a/include/uapi/rdma/mlx5_user_ioctl_cmds.h
+++ b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
@@ -228,6 +228,7 @@ enum mlx5_ib_objects {
 	MLX5_IB_OBJECT_VAR,
 	MLX5_IB_OBJECT_PP,
 	MLX5_IB_OBJECT_UAR,
+	MLX5_IB_OBJECT_STEERING_ANCHOR,
 };
 
 enum mlx5_ib_flow_matcher_create_attrs {
@@ -248,6 +249,22 @@ enum mlx5_ib_flow_matcher_methods {
 	MLX5_IB_METHOD_FLOW_MATCHER_DESTROY,
 };
 
+enum mlx5_ib_flow_steering_anchor_create_attrs {
+	MLX5_IB_ATTR_STEERING_ANCHOR_CREATE_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	MLX5_IB_ATTR_STEERING_ANCHOR_FT_TYPE,
+	MLX5_IB_ATTR_STEERING_ANCHOR_PRIORITY,
+	MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID,
+};
+
+enum mlx5_ib_flow_steering_anchor_destroy_attrs {
+	MLX5_IB_ATTR_STEERING_ANCHOR_DESTROY_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum mlx5_ib_steering_anchor_methods {
+	MLX5_IB_METHOD_STEERING_ANCHOR_CREATE = (1U << UVERBS_ID_NS_SHIFT),
+	MLX5_IB_METHOD_STEERING_ANCHOR_DESTROY,
+};
+
 enum mlx5_ib_device_query_context_attrs {
 	MLX5_IB_ATTR_QUERY_CONTEXT_RESP_UCTX = (1U << UVERBS_ID_NS_SHIFT),
 };
-- 
2.36.1



* Re: [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace
  2022-07-03 20:54 ` [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace Saeed Mahameed
@ 2022-07-13 22:31   ` Saeed Mahameed
  2022-07-15  8:08     ` Jason Gunthorpe
  0 siblings, 1 reply; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-13 22:31 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Leon Romanovsky, Jason Gunthorpe, netdev, linux-rdma, Mark Bloch,
	Yishai Hadas

On 03 Jul 13:54, Saeed Mahameed wrote:
>From: Mark Bloch <mbloch@nvidia.com>
>
>Expose a steering anchor per priority to allow users to re-inject
>packets back into default NIC pipeline for additional processing.
>
>MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
>a user can use to re-inject packets at a specific priority.
>
>A FTE (flow table entry) can be created and the flow table ID
>used as a destination.
>
>When a packet is taken into a RDMA-controlled steering domain (like
>software steering) there may be a need to insert the packet back into
>the default NIC pipeline. This exposes a flow table ID to the user that can
>be used as a destination in a flow table entry.
>
>With this new method priorities that are exposed to users via
>MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.
>
>As user-created flow tables (via RDMA DEVX) are created with a non-zero UID
>thus it's impossible to point to a NIC core flow table (core driver flow tables
>are created with UID value of zero) from userspace.
>Create flow tables that are exposed to users with the shared UID, this
>allows users to point to default NIC flow tables.
>
>Steering loops are prevented at FW level as FW enforces that no flow
>table at level X can point to a table at level lower than X.
>
>Signed-off-by: Mark Bloch <mbloch@nvidia.com>
>Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
>Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
>---
> drivers/infiniband/hw/mlx5/fs.c          | 138 ++++++++++++++++++++++-
> drivers/infiniband/hw/mlx5/mlx5_ib.h     |   6 +
> include/uapi/rdma/mlx5_user_ioctl_cmds.h |  17 +++

Jason, can you ack/nack? This has uAPI changes, and I need to move
forward with this submission.
Thanks





* Re: [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace
  2022-07-13 22:31   ` Saeed Mahameed
@ 2022-07-15  8:08     ` Jason Gunthorpe
  2022-07-17 19:52       ` Saeed Mahameed
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2022-07-15  8:08 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Saeed Mahameed, Leon Romanovsky, netdev, linux-rdma, Mark Bloch,
	Yishai Hadas

On Wed, Jul 13, 2022 at 03:31:33PM -0700, Saeed Mahameed wrote:
> On 03 Jul 13:54, Saeed Mahameed wrote:
> > From: Mark Bloch <mbloch@nvidia.com>
> > 
> > Expose a steering anchor per priority to allow users to re-inject
> > packets back into default NIC pipeline for additional processing.
> > 
> > MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
> > a user can use to re-inject packets at a specific priority.
> > 
> > A FTE (flow table entry) can be created and the flow table ID
> > used as a destination.
> > 
> > When a packet is taken into a RDMA-controlled steering domain (like
> > software steering) there may be a need to insert the packet back into
> > the default NIC pipeline. This exposes a flow table ID to the user that can
> > be used as a destination in a flow table entry.
> > 
> > With this new method priorities that are exposed to users via
> > MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.
> > 
> > As user-created flow tables (via RDMA DEVX) are created with a non-zero UID
> > thus it's impossible to point to a NIC core flow table (core driver flow tables
> > are created with UID value of zero) from userspace.
> > Create flow tables that are exposed to users with the shared UID, this
> > allows users to point to default NIC flow tables.
> > 
> > Steering loops are prevented at FW level as FW enforces that no flow
> > table at level X can point to a table at level lower than X.
> > 
> > Signed-off-by: Mark Bloch <mbloch@nvidia.com>
> > Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
> > Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
> > ---
> > drivers/infiniband/hw/mlx5/fs.c          | 138 ++++++++++++++++++++++-
> > drivers/infiniband/hw/mlx5/mlx5_ib.h     |   6 +
> > include/uapi/rdma/mlx5_user_ioctl_cmds.h |  17 +++
> 
> Jason, Can you ack/nack ? This has uapi.. I need to move forward with this
> submission.

Yes, it looks fine, can you update the shared branch?

Jason


* Re: [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace
  2022-07-15  8:08     ` Jason Gunthorpe
@ 2022-07-17 19:52       ` Saeed Mahameed
  2022-07-18 12:03         ` Leon Romanovsky
  0 siblings, 1 reply; 11+ messages in thread
From: Saeed Mahameed @ 2022-07-17 19:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Saeed Mahameed, Leon Romanovsky, netdev, linux-rdma, Mark Bloch,
	Yishai Hadas

On 15 Jul 05:08, Jason Gunthorpe wrote:
>On Wed, Jul 13, 2022 at 03:31:33PM -0700, Saeed Mahameed wrote:
>> On 03 Jul 13:54, Saeed Mahameed wrote:
>> > From: Mark Bloch <mbloch@nvidia.com>
>> >
>> > Expose a steering anchor per priority to allow users to re-inject
>> > packets back into default NIC pipeline for additional processing.
>> >
>> > MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
>> > a user can use to re-inject packets at a specific priority.
>> >
>> > A FTE (flow table entry) can be created and the flow table ID
>> > used as a destination.
>> >
>> > When a packet is taken into a RDMA-controlled steering domain (like
>> > software steering) there may be a need to insert the packet back into
>> > the default NIC pipeline. This exposes a flow table ID to the user that can
>> > be used as a destination in a flow table entry.
>> >
>> > With this new method priorities that are exposed to users via
>> > MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.
>> >
>> > As user-created flow tables (via RDMA DEVX) are created with a non-zero UID
>> > thus it's impossible to point to a NIC core flow table (core driver flow tables
>> > are created with UID value of zero) from userspace.
>> > Create flow tables that are exposed to users with the shared UID, this
>> > allows users to point to default NIC flow tables.
>> >
>> > Steering loops are prevented at FW level as FW enforces that no flow
>> > table at level X can point to a table at level lower than X.
>> >
>> > Signed-off-by: Mark Bloch <mbloch@nvidia.com>
>> > Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
>> > Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
>> > ---
>> > drivers/infiniband/hw/mlx5/fs.c          | 138 ++++++++++++++++++++++-
>> > drivers/infiniband/hw/mlx5/mlx5_ib.h     |   6 +
>> > include/uapi/rdma/mlx5_user_ioctl_cmds.h |  17 +++
>>
>> Jason, Can you ack/nack ? This has uapi.. I need to move forward with this
>> submission.
>
>Yes, it looks fine, can you update the shared branch?
>

Applied to mlx5-next, you may pull.

>Jason


* Re: [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace
  2022-07-17 19:52       ` Saeed Mahameed
@ 2022-07-18 12:03         ` Leon Romanovsky
  0 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2022-07-18 12:03 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Jason Gunthorpe, Saeed Mahameed, netdev, linux-rdma, Mark Bloch,
	Yishai Hadas

On Sun, Jul 17, 2022 at 12:52:29PM -0700, Saeed Mahameed wrote:
> On 15 Jul 05:08, Jason Gunthorpe wrote:
> > On Wed, Jul 13, 2022 at 03:31:33PM -0700, Saeed Mahameed wrote:
> > > On 03 Jul 13:54, Saeed Mahameed wrote:
> > > > From: Mark Bloch <mbloch@nvidia.com>
> > > >
> > > > Expose a steering anchor per priority to allow users to re-inject
> > > > packets back into default NIC pipeline for additional processing.
> > > >
> > > > MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
> > > > a user can use to re-inject packets at a specific priority.
> > > >
> > > > A FTE (flow table entry) can be created and the flow table ID
> > > > used as a destination.
> > > >
> > > > When a packet is taken into a RDMA-controlled steering domain (like
> > > > software steering) there may be a need to insert the packet back into
> > > > the default NIC pipeline. This exposes a flow table ID to the user that can
> > > > be used as a destination in a flow table entry.
> > > >
> > > > With this new method priorities that are exposed to users via
> > > > MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.
> > > >
> > > > As user-created flow tables (via RDMA DEVX) are created with a non-zero UID
> > > > thus it's impossible to point to a NIC core flow table (core driver flow tables
> > > > are created with UID value of zero) from userspace.
> > > > Create flow tables that are exposed to users with the shared UID, this
> > > > allows users to point to default NIC flow tables.
> > > >
> > > > Steering loops are prevented at FW level as FW enforces that no flow
> > > > table at level X can point to a table at level lower than X.
> > > >
> > > > Signed-off-by: Mark Bloch <mbloch@nvidia.com>
> > > > Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
> > > > Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
> > > > ---
> > > > drivers/infiniband/hw/mlx5/fs.c          | 138 ++++++++++++++++++++++-
> > > > drivers/infiniband/hw/mlx5/mlx5_ib.h     |   6 +
> > > > include/uapi/rdma/mlx5_user_ioctl_cmds.h |  17 +++
> > > 
> > > Jason, Can you ack/nack ? This has uapi.. I need to move forward with this
> > > submission.
> > 
> > Yes, it looks fine, can you update the shared branch?
> > 
> 
> Applied to mlx5-next, you may pull.

The last two patches ("RDMA/ ...") are not supposed to be in the
mlx5-next branch, especially the last one, which has uAPI changes.

If netdev hasn't pulled that branch yet, can you please delete them?

Thanks

> 
> > Jason


* Re: [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03
  2022-07-03 20:54 [PATCH mlx5-next 0/5] mlx5-next updates 2022-07-03 Saeed Mahameed
                   ` (4 preceding siblings ...)
  2022-07-03 20:54 ` [PATCH rdma-next 5/5] RDMA/mlx5: Expose steering anchor to userspace Saeed Mahameed
@ 2022-07-19 17:54 ` Leon Romanovsky
  5 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2022-07-19 17:54 UTC (permalink / raw)
  To: Saeed Mahameed; +Cc: Saeed Mahameed, Jason Gunthorpe, netdev, linux-rdma

On Sun, Jul 03, 2022 at 01:54:02PM -0700, Saeed Mahameed wrote:
> From: Saeed Mahameed <saeedm@nvidia.com>
> 
> Mark Bloch Says:
> ================
> Expose steering anchor
> 
> Expose a steering anchor per priority to allow users to re-inject
> packets back into default NIC pipeline for additional processing.
> 
> MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which
> a user can use to re-inject packets at a specific priority.
> 
> A FTE (flow table entry) can be created and the flow table ID
> used as a destination.
> 
> When a packet is taken into a RDMA-controlled steering domain (like
> software steering) there may be a need to insert the packet back into
> the default NIC pipeline. This exposes a flow table ID to the user that can
> be used as a destination in a flow table entry.
> 
> With this new method priorities that are exposed to users via
> MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID.
> 
> As user-created flow tables (via RDMA DEVX) are created with a non-zero UID
> thus it's impossible to point to a NIC core flow table (core driver flow tables
> are created with UID value of zero) from userspace.
> Create flow tables that are exposed to users with the shared UID, this
> allows users to point to default NIC flow tables.
> 
> Steering loops are prevented at FW level as FW enforces that no flow
> table at level X can point to a table at level lower than X. 
> 
> ===============
> 
> Mark Bloch (5):
>   net/mlx5: Expose the ability to point to any UID from shared UID
>   net/mlx5: fs, expose flow table ID to users
>   net/mlx5: fs, allow flow table creation with a UID
>   RDMA/mlx5: Refactor get flow table function
>   RDMA/mlx5: Expose steering anchor to userspace
> 

Thanks, applied.
