* [PATCH rdma-next 0/4] mlx5 vport loopback
@ 2018-09-17 10:30 Leon Romanovsky
  2018-09-17 10:30 ` [PATCH mlx5-next 1/4] net/mlx5: Rename incorrect naming in IFC file Leon Romanovsky
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-17 10:30 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

From: Leon Romanovsky <leonro@mellanox.com>

Hi,

This is a short series from Mark which extends the handling of loopback
traffic. Originally, mlx5 IB dynamically enabled/disabled both unicast
and multicast loopback based on the number of users. However, RAW
ethernet QPs need more granular control.

Thanks

Mark Bloch (4):
  net/mlx5: Rename incorrect naming in IFC file
  RDMA/mlx5: Refactor transport domain bookkeeping logic
  RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
  RDMA/mlx5: Enable vport loopback when user context or QP mandate

 drivers/infiniband/hw/mlx5/main.c                  | 61 ++++++++++----
 drivers/infiniband/hw/mlx5/mlx5_ib.h               | 16 +++-
 drivers/infiniband/hw/mlx5/qp.c                    | 96 +++++++++++++++++-----
 .../net/ethernet/mellanox/mlx5/core/en_common.c    |  2 +-
 include/linux/mlx5/mlx5_ifc.h                      |  4 +-
 include/uapi/rdma/mlx5-abi.h                       |  2 +
 6 files changed, 138 insertions(+), 43 deletions(-)

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH mlx5-next 1/4] net/mlx5: Rename incorrect naming in IFC file
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
@ 2018-09-17 10:30 ` Leon Romanovsky
  2018-09-17 10:30 ` [PATCH rdma-next 2/4] RDMA/mlx5: Refactor transport domain bookkeeping logic Leon Romanovsky
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-17 10:30 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

From: Mark Bloch <markb@mellanox.com>

Remove a trailing underscore from the multicast/unicast names.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c                     | 4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/en_common.c | 2 +-
 include/linux/mlx5/mlx5_ifc.h                       | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 1f35ecbefffe..8bada4b94444 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1279,7 +1279,7 @@ static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 
 	if (dev->rep)
 		MLX5_SET(tirc, tirc, self_lb_block,
-			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST_);
+			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST);
 
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &rq->tirn);
 
@@ -1582,7 +1582,7 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 create_tir:
 	if (dev->rep)
 		MLX5_SET(tirc, tirc, self_lb_block,
-			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST_);
+			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST);
 
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &qp->rss_qp.tirn);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
index db3278cc052b..3078491cc0d0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
@@ -153,7 +153,7 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
 
 	if (enable_uc_lb)
 		MLX5_SET(modify_tir_in, in, ctx.self_lb_block,
-			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST_);
+			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST);
 
 	MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
 
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 3a4a2e0567e9..4c7a1d25d73b 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -2559,8 +2559,8 @@ enum {
 };
 
 enum {
-	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST_    = 0x1,
-	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST_  = 0x2,
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST    = 0x1,
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST  = 0x2,
 };
 
 struct mlx5_ifc_tirc_bits {
-- 
2.14.4


* [PATCH rdma-next 2/4] RDMA/mlx5: Refactor transport domain bookkeeping logic
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
  2018-09-17 10:30 ` [PATCH mlx5-next 1/4] net/mlx5: Rename incorrect naming in IFC file Leon Romanovsky
@ 2018-09-17 10:30 ` Leon Romanovsky
  2018-09-17 10:30 ` [PATCH rdma-next 3/4] RDMA/mlx5: Allow creating RAW ethernet QP with loopback support Leon Romanovsky
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-17 10:30 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

From: Mark Bloch <markb@mellanox.com>

In preparation for enabling loopback on a single user context, move the
logic that enables/disables loopback into separate functions and group
the related variables under a single struct.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/main.c    | 45 +++++++++++++++++++++++-------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 10 +++++---
 2 files changed, 36 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 659af370a961..b64861ba2c42 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1571,6 +1571,32 @@ static void deallocate_uars(struct mlx5_ib_dev *dev,
 			mlx5_cmd_free_uar(dev->mdev, bfregi->sys_pages[i]);
 }
 
+static int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev)
+{
+	int err = 0;
+
+	mutex_lock(&dev->lb.mutex);
+	dev->lb.user_td++;
+
+	if (dev->lb.user_td == 2)
+		err = mlx5_nic_vport_update_local_lb(dev->mdev, true);
+
+	mutex_unlock(&dev->lb.mutex);
+
+	return err;
+}
+
+static void mlx5_ib_disable_lb(struct mlx5_ib_dev *dev)
+{
+	mutex_lock(&dev->lb.mutex);
+	dev->lb.user_td--;
+
+	if (dev->lb.user_td < 2)
+		mlx5_nic_vport_update_local_lb(dev->mdev, false);
+
+	mutex_unlock(&dev->lb.mutex);
+}
+
 static int mlx5_ib_alloc_transport_domain(struct mlx5_ib_dev *dev, u32 *tdn)
 {
 	int err;
@@ -1587,14 +1613,7 @@ static int mlx5_ib_alloc_transport_domain(struct mlx5_ib_dev *dev, u32 *tdn)
 	     !MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
 		return err;
 
-	mutex_lock(&dev->lb_mutex);
-	dev->user_td++;
-
-	if (dev->user_td == 2)
-		err = mlx5_nic_vport_update_local_lb(dev->mdev, true);
-
-	mutex_unlock(&dev->lb_mutex);
-	return err;
+	return mlx5_ib_enable_lb(dev);
 }
 
 static void mlx5_ib_dealloc_transport_domain(struct mlx5_ib_dev *dev, u32 tdn)
@@ -1609,13 +1628,7 @@ static void mlx5_ib_dealloc_transport_domain(struct mlx5_ib_dev *dev, u32 tdn)
 	     !MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
 		return;
 
-	mutex_lock(&dev->lb_mutex);
-	dev->user_td--;
-
-	if (dev->user_td < 2)
-		mlx5_nic_vport_update_local_lb(dev->mdev, false);
-
-	mutex_unlock(&dev->lb_mutex);
+	mlx5_ib_disable_lb(dev);
 }
 
 static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
@@ -5970,7 +5983,7 @@ int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
 	if ((MLX5_CAP_GEN(dev->mdev, port_type) == MLX5_CAP_PORT_TYPE_ETH) &&
 	    (MLX5_CAP_GEN(dev->mdev, disable_local_lb_uc) ||
 	     MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
-		mutex_init(&dev->lb_mutex);
+		mutex_init(&dev->lb.mutex);
 
 	return 0;
 }
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 6c57872fdc4e..7b2af7e719c4 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -878,6 +878,12 @@ to_mcounters(struct ib_counters *ibcntrs)
 int parse_flow_flow_action(struct mlx5_ib_flow_action *maction,
 			   bool is_egress,
 			   struct mlx5_flow_act *action);
+struct mlx5_ib_lb_state {
+	/* protect the user_td */
+	struct mutex		mutex;
+	u32			user_td;
+};
+
 struct mlx5_ib_dev {
 	struct ib_device		ib_dev;
 	const struct uverbs_object_tree_def *driver_trees[7];
@@ -919,9 +925,7 @@ struct mlx5_ib_dev {
 	const struct mlx5_ib_profile	*profile;
 	struct mlx5_eswitch_rep		*rep;
 
-	/* protect the user_td */
-	struct mutex		lb_mutex;
-	u32			user_td;
+	struct mlx5_ib_lb_state		lb;
 	u8			umr_fence;
 	struct list_head	ib_dev_list;
 	u64			sys_image_guid;
-- 
2.14.4


* [PATCH rdma-next 3/4] RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
  2018-09-17 10:30 ` [PATCH mlx5-next 1/4] net/mlx5: Rename incorrect naming in IFC file Leon Romanovsky
  2018-09-17 10:30 ` [PATCH rdma-next 2/4] RDMA/mlx5: Refactor transport domain bookkeeping logic Leon Romanovsky
@ 2018-09-17 10:30 ` Leon Romanovsky
  2018-09-17 10:30 ` [PATCH rdma-next 4/4] RDMA/mlx5: Enable vport loopback when user context or QP mandate Leon Romanovsky
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-17 10:30 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

From: Mark Bloch <markb@mellanox.com>

Expose two new flags:
MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC
MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC

These flags can be used at creation time to allow a QP to receive
loopback traffic (unicast and multicast). We store the state in the QP
so that the destroy path knows which flags the QP was created with.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 62 ++++++++++++++++++++++++++++--------
 include/uapi/rdma/mlx5-abi.h         |  2 ++
 3 files changed, 52 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 7b2af7e719c4..b258adb93097 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -440,7 +440,7 @@ struct mlx5_ib_qp {
 	struct list_head	cq_send_list;
 	struct mlx5_rate_limit	rl;
 	u32                     underlay_qpn;
-	bool			tunnel_offload_en;
+	u32			flags_en;
 	/* storage for qp sub type when core qp type is IB_QPT_DRIVER */
 	enum ib_qp_type		qp_sub_type;
 };
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8bada4b94444..428e417e01da 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1258,8 +1258,9 @@ static bool tunnel_offload_supported(struct mlx5_core_dev *dev)
 
 static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 				    struct mlx5_ib_rq *rq, u32 tdn,
-				    bool tunnel_offload_en)
+				    u32 *qp_flags_en)
 {
+	u8 lb_flag = 0;
 	u32 *in;
 	void *tirc;
 	int inlen;
@@ -1274,12 +1275,21 @@ static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 	MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_DIRECT);
 	MLX5_SET(tirc, tirc, inline_rqn, rq->base.mqp.qpn);
 	MLX5_SET(tirc, tirc, transport_domain, tdn);
-	if (tunnel_offload_en)
+	if (*qp_flags_en & MLX5_QP_FLAG_TUNNEL_OFFLOADS)
 		MLX5_SET(tirc, tirc, tunneled_offload_en, 1);
 
-	if (dev->rep)
-		MLX5_SET(tirc, tirc, self_lb_block,
-			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST);
+	if (*qp_flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC)
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+
+	if (*qp_flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST;
+
+	if (dev->rep) {
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		*qp_flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
+	}
+
+	MLX5_SET(tirc, tirc, self_lb_block, lb_flag);
 
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &rq->tirn);
 
@@ -1332,8 +1342,7 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 			goto err_destroy_sq;
 
 
-		err = create_raw_packet_qp_tir(dev, rq, tdn,
-					       qp->tunnel_offload_en);
+		err = create_raw_packet_qp_tir(dev, rq, tdn, &qp->flags_en);
 		if (err)
 			goto err_destroy_rq;
 	}
@@ -1410,6 +1419,7 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	u32 tdn = mucontext->tdn;
 	struct mlx5_ib_create_qp_rss ucmd = {};
 	size_t required_cmd_sz;
+	u8 lb_flag = 0;
 
 	if (init_attr->qp_type != IB_QPT_RAW_PACKET)
 		return -EOPNOTSUPP;
@@ -1444,7 +1454,9 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.flags & ~MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
+	if (ucmd.flags & ~(MLX5_QP_FLAG_TUNNEL_OFFLOADS |
+			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
+			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)) {
 		mlx5_ib_dbg(dev, "invalid flags\n");
 		return -EOPNOTSUPP;
 	}
@@ -1461,6 +1473,16 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		return -EOPNOTSUPP;
 	}
 
+	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC || dev->rep) {
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
+	}
+
+	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST;
+		qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC;
+	}
+
 	err = ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)));
 	if (err) {
 		mlx5_ib_dbg(dev, "copy failed\n");
@@ -1484,6 +1506,8 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)
 		MLX5_SET(tirc, tirc, tunneled_offload_en, 1);
 
+	MLX5_SET(tirc, tirc, self_lb_block, lb_flag);
+
 	if (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_INNER)
 		hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_inner);
 	else
@@ -1580,10 +1604,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	MLX5_SET(rx_hash_field_select, hfso, selected_fields, selected_fields);
 
 create_tir:
-	if (dev->rep)
-		MLX5_SET(tirc, tirc, self_lb_block,
-			 MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST);
-
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &qp->rss_qp.tirn);
 
 	if (err)
@@ -1710,7 +1730,23 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 				mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
 				return -EOPNOTSUPP;
 			}
-			qp->tunnel_offload_en = true;
+			qp->flags_en |= MLX5_QP_FLAG_TUNNEL_OFFLOADS;
+		}
+
+		if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC) {
+			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
+				mlx5_ib_dbg(dev, "Self-LB UC isn't supported\n");
+				return -EOPNOTSUPP;
+			}
+			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
+		}
+
+		if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
+			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
+				mlx5_ib_dbg(dev, "Self-LB MC isn't supported\n");
+				return -EOPNOTSUPP;
+			}
+			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC;
 		}
 
 		if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index addbb9c4529e..e584ba40208e 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -45,6 +45,8 @@ enum {
 	MLX5_QP_FLAG_BFREG_INDEX	= 1 << 3,
 	MLX5_QP_FLAG_TYPE_DCT		= 1 << 4,
 	MLX5_QP_FLAG_TYPE_DCI		= 1 << 5,
+	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC = 1 << 6,
+	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC = 1 << 7,
 };
 
 enum {
-- 
2.14.4


* [PATCH rdma-next 4/4] RDMA/mlx5: Enable vport loopback when user context or QP mandate
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
                   ` (2 preceding siblings ...)
  2018-09-17 10:30 ` [PATCH rdma-next 3/4] RDMA/mlx5: Allow creating RAW ethernet QP with loopback support Leon Romanovsky
@ 2018-09-17 10:30 ` Leon Romanovsky
  2018-09-21 19:14 ` [PATCH rdma-next 0/4] mlx5 vport loopback Doug Ledford
  2018-09-22  1:38 ` Doug Ledford
  5 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-17 10:30 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

From: Mark Bloch <markb@mellanox.com>

A user can create a QP which can accept loopback traffic, but that's not
enough. We need to enable loopback on the vport as well. Currently,
vport loopback is enabled only when more than one user is using the IB
device; update the logic to also consider whether a QP which supports
loopback was created, and if so, enable vport loopback even if there is
only a single user.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/main.c    | 40 +++++++++++++++++++++++++-----------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  4 ++++
 drivers/infiniband/hw/mlx5/qp.c      | 34 +++++++++++++++++++++++-------
 3 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b64861ba2c42..7d5fcf76466f 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1571,28 +1571,44 @@ static void deallocate_uars(struct mlx5_ib_dev *dev,
 			mlx5_cmd_free_uar(dev->mdev, bfregi->sys_pages[i]);
 }
 
-static int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev)
+int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp)
 {
 	int err = 0;
 
 	mutex_lock(&dev->lb.mutex);
-	dev->lb.user_td++;
-
-	if (dev->lb.user_td == 2)
-		err = mlx5_nic_vport_update_local_lb(dev->mdev, true);
+	if (td)
+		dev->lb.user_td++;
+	if (qp)
+		dev->lb.qps++;
+
+	if (dev->lb.user_td == 2 ||
+	    dev->lb.qps == 1) {
+		if (!dev->lb.enabled) {
+			err = mlx5_nic_vport_update_local_lb(dev->mdev, true);
+			dev->lb.enabled = true;
+		}
+	}
 
 	mutex_unlock(&dev->lb.mutex);
 
 	return err;
 }
 
-static void mlx5_ib_disable_lb(struct mlx5_ib_dev *dev)
+void mlx5_ib_disable_lb(struct mlx5_ib_dev *dev, bool td, bool qp)
 {
 	mutex_lock(&dev->lb.mutex);
-	dev->lb.user_td--;
-
-	if (dev->lb.user_td < 2)
-		mlx5_nic_vport_update_local_lb(dev->mdev, false);
+	if (td)
+		dev->lb.user_td--;
+	if (qp)
+		dev->lb.qps--;
+
+	if (dev->lb.user_td == 1 &&
+	    dev->lb.qps == 0) {
+		if (dev->lb.enabled) {
+			mlx5_nic_vport_update_local_lb(dev->mdev, false);
+			dev->lb.enabled = false;
+		}
+	}
 
 	mutex_unlock(&dev->lb.mutex);
 }
@@ -1613,7 +1629,7 @@ static int mlx5_ib_alloc_transport_domain(struct mlx5_ib_dev *dev, u32 *tdn)
 	     !MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
 		return err;
 
-	return mlx5_ib_enable_lb(dev);
+	return mlx5_ib_enable_lb(dev, true, false);
 }
 
 static void mlx5_ib_dealloc_transport_domain(struct mlx5_ib_dev *dev, u32 tdn)
@@ -1628,7 +1644,7 @@ static void mlx5_ib_dealloc_transport_domain(struct mlx5_ib_dev *dev, u32 tdn)
 	     !MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
 		return;
 
-	mlx5_ib_disable_lb(dev);
+	mlx5_ib_disable_lb(dev, true, false);
 }
 
 static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b258adb93097..99c853c56d31 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -882,6 +882,8 @@ struct mlx5_ib_lb_state {
 	/* protect the user_td */
 	struct mutex		mutex;
 	u32			user_td;
+	int			qps;
+	bool			enabled;
 };
 
 struct mlx5_ib_dev {
@@ -1040,6 +1042,8 @@ int mlx5_ib_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *srq_attr);
 int mlx5_ib_destroy_srq(struct ib_srq *srq);
 int mlx5_ib_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
 			  const struct ib_recv_wr **bad_wr);
+int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp);
+void mlx5_ib_disable_lb(struct mlx5_ib_dev *dev, bool td, bool qp);
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 428e417e01da..1f318a47040c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1256,6 +1256,16 @@ static bool tunnel_offload_supported(struct mlx5_core_dev *dev)
 		 MLX5_CAP_ETH(dev, tunnel_stateless_geneve_rx));
 }
 
+static void destroy_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
+				      struct mlx5_ib_rq *rq,
+				      u32 qp_flags_en)
+{
+	if (qp_flags_en & (MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
+			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC))
+		mlx5_ib_disable_lb(dev, false, true);
+	mlx5_core_destroy_tir(dev->mdev, rq->tirn);
+}
+
 static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 				    struct mlx5_ib_rq *rq, u32 tdn,
 				    u32 *qp_flags_en)
@@ -1293,17 +1303,17 @@ static int create_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &rq->tirn);
 
+	if (!err && MLX5_GET(tirc, tirc, self_lb_block)) {
+		err = mlx5_ib_enable_lb(dev, false, true);
+
+		if (err)
+			destroy_raw_packet_qp_tir(dev, rq, 0);
+	}
 	kvfree(in);
 
 	return err;
 }
 
-static void destroy_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
-				      struct mlx5_ib_rq *rq)
-{
-	mlx5_core_destroy_tir(dev->mdev, rq->tirn);
-}
-
 static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				u32 *in, size_t inlen,
 				struct ib_pd *pd)
@@ -1372,7 +1382,7 @@ static void destroy_raw_packet_qp(struct mlx5_ib_dev *dev,
 	struct mlx5_ib_rq *rq = &raw_packet_qp->rq;
 
 	if (qp->rq.wqe_cnt) {
-		destroy_raw_packet_qp_tir(dev, rq);
+		destroy_raw_packet_qp_tir(dev, rq, qp->flags_en);
 		destroy_raw_packet_qp_rq(dev, rq);
 	}
 
@@ -1396,6 +1406,9 @@ static void raw_packet_qp_copy_info(struct mlx5_ib_qp *qp,
 
 static void destroy_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
 {
+	if (qp->flags_en & (MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
+			    MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC))
+		mlx5_ib_disable_lb(dev, false, true);
 	mlx5_core_destroy_tir(dev->mdev, qp->rss_qp.tirn);
 }
 
@@ -1606,6 +1619,13 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 create_tir:
 	err = mlx5_core_create_tir(dev->mdev, in, inlen, &qp->rss_qp.tirn);
 
+	if (!err && MLX5_GET(tirc, tirc, self_lb_block)) {
+		err = mlx5_ib_enable_lb(dev, false, true);
+
+		if (err)
+			mlx5_core_destroy_tir(dev->mdev, qp->rss_qp.tirn);
+	}
+
 	if (err)
 		goto err;
 
-- 
2.14.4


* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
                   ` (3 preceding siblings ...)
  2018-09-17 10:30 ` [PATCH rdma-next 4/4] RDMA/mlx5: Enable vport loopback when user context or QP mandate Leon Romanovsky
@ 2018-09-21 19:14 ` Doug Ledford
  2018-09-21 19:33   ` Leon Romanovsky
  2018-09-21 19:40   ` Jason Gunthorpe
  2018-09-22  1:38 ` Doug Ledford
  5 siblings, 2 replies; 12+ messages in thread
From: Doug Ledford @ 2018-09-21 19:14 UTC (permalink / raw)
  To: Leon Romanovsky, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev


On Mon, 2018-09-17 at 13:30 +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
> 
> Hi,
> 
> This is short series from Mark which extends handling of loopback
> traffic. Originally mlx5 IB dynamically enabled/disabled both unicast
> and multicast based on number of users. However RAW ethernet QPs need
> more granular access.
> 
> Thanks
> 
> Mark Bloch (4):
>   net/mlx5: Rename incorrect naming in IFC file
>   RDMA/mlx5: Refactor transport domain bookkeeping logic
>   RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
>   RDMA/mlx5: Enable vport loopback when user context or QP mandate

I've reviewed this series and I'm OK with it, but the first patch is for
net/mlx5.  How are you expecting the series to be applied?  Are you
wanting me or Jason to take the entire series, or does the first patch
need to go through the mlx5 tree and get picked up by Dave and us, and
then we take the rest?  This is unclear to me...

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD



* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-21 19:14 ` [PATCH rdma-next 0/4] mlx5 vport loopback Doug Ledford
@ 2018-09-21 19:33   ` Leon Romanovsky
  2018-09-21 20:05     ` Doug Ledford
  2018-09-21 19:40   ` Jason Gunthorpe
  1 sibling, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-21 19:33 UTC (permalink / raw)
  To: Doug Ledford
  Cc: Jason Gunthorpe, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev


On Fri, Sep 21, 2018 at 03:14:36PM -0400, Doug Ledford wrote:
> On Mon, 2018-09-17 at 13:30 +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@mellanox.com>
> >
> > Hi,
> >
> > This is short series from Mark which extends handling of loopback
> > traffic. Originally mlx5 IB dynamically enabled/disabled both unicast
> > and multicast based on number of users. However RAW ethernet QPs need
> > more granular access.
> >
> > Thanks
> >
> > Mark Bloch (4):
> >   net/mlx5: Rename incorrect naming in IFC file
> >   RDMA/mlx5: Refactor transport domain bookkeeping logic
> >   RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
> >   RDMA/mlx5: Enable vport loopback when user context or QP mandate
>
> I've reviewed this series and I'm OK with it, but the first patch is for
> net/mlx5.  How are you expecting the series to be applied?  Are you
> wanting me or Jason to take the entire series, or does the first patch
> need to go through the mlx5 tree and get picked up by Dave and us, and
> then we take the rest?  This is unclear to me...

Thanks Doug,

The preferred flow for such patches is that I or Saeed will apply the
net/mlx5 patch (mlx5-next) to our shared branch after you, Jason, or
Dave ack the whole series.

The shared branch is located in:
https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/log/?h=mlx5-next
As you can see, it is a clean branch of our shared commits, and it is
based on -rc1 to make sure neither subsystem gets polluted.

It ensures that netdev won't get RDMA patches and vice versa.

For RDMA, once we apply such an mlx5-next patch, Jason usually merges
this branch into his rdma-next and applies the extra non-mlx5-next
patches on top of it.
https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=for-next&id=af68ccbc1131ddd8dcda65b015cd9919b352485a

For netdev, it is a little bit different, because Saeed works with pull
requests and creates a merge commit before sending his pull request.

This model allows us to ensure that changes are pulled only when really
needed; there is a chance that Dave won't ever pull this mlx5-next
branch this cycle, because Saeed won't have any patches that depend
on it.

Hope it makes it clear now.

Are you ok with me/Saeed taking the first patch into our branch so you
will be able to take the rest?

Thanks

>
> --
> Doug Ledford <dledford@redhat.com>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD





* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-21 19:14 ` [PATCH rdma-next 0/4] mlx5 vport loopback Doug Ledford
  2018-09-21 19:33   ` Leon Romanovsky
@ 2018-09-21 19:40   ` Jason Gunthorpe
  1 sibling, 0 replies; 12+ messages in thread
From: Jason Gunthorpe @ 2018-09-21 19:40 UTC (permalink / raw)
  To: Doug Ledford
  Cc: Leon Romanovsky, Leon Romanovsky, RDMA mailing list, Mark Bloch,
	Yishai Hadas, Saeed Mahameed, linux-netdev

On Fri, Sep 21, 2018 at 03:14:36PM -0400, Doug Ledford wrote:
> On Mon, 2018-09-17 at 13:30 +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@mellanox.com>
> > 
> > Hi,
> > 
> > This is short series from Mark which extends handling of loopback
> > traffic. Originally mlx5 IB dynamically enabled/disabled both unicast
> > and multicast based on number of users. However RAW ethernet QPs need
> > more granular access.
> > 
> > Thanks
> > 
> > Mark Bloch (4):
> >   net/mlx5: Rename incorrect naming in IFC file
> >   RDMA/mlx5: Refactor transport domain bookkeeping logic
> >   RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
> >   RDMA/mlx5: Enable vport loopback when user context or QP mandate
> 
> I've reviewed this series and I'm OK with it, but the first patch is for
> net/mlx5.  How are you expecting the series to be applied?  Are you
> wanting me or Jason to take the entire series, or does the first patch
> need to go through the mlx5 tree and get picked up by Dave and us, and
> then we take the rest?  This is unclear to me...

Saeed or Leon will apply the net/mlx5 patches when netdev and RDMA
approves the series, such as with the above approval.

Our job is to take the branch from

  git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mellanox/mlx5-next

So it is a two-step process: the series is approved on the mailing
list, then Saeed/Leon respond with the branch commits.

Depending on need I've been doing either a 'series merge' which looks
like this:

commit 8193abb6a8171c775503acd041eb957cc7e9e7f4
Merge: c1dfc0114c901b 25bb36e75d7d62
Author: Jason Gunthorpe <jgg@mellanox.com>
Date:   Wed Jul 4 13:19:46 2018 -0600

    Merge branch 'mlx5-dump-fill-mkey' into rdma.git for-next
    
    For dependencies, branch based on 'mellanox/mlx5-next' of
    git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git
    
    Pull Dump and fill MKEY from Leon Romanovsky:
    
    ====================
    MLX5 IB HCA offers the memory key, dump_fill_mkey to increase performance,
    when used in a send or receive operations.
    
    It is used to force local HCA operations to skip the PCI bus access, while
    keeping track of the processed length in the ibv_sge handling.
    
    In this three patch series, we expose various bits in our HW spec
    file (mlx5_ifc.h), move unneeded for mlx5_core FW command and export such
    memory key to user space thought our mlx5-abi header file.
    ====================
    
    Botched auto-merge in mlx5_ib_alloc_ucontext() resolved by hand.
    
    * branch 'mlx5-dump-fill-mkey':
      IB/mlx5: Expose dump and fill memory key
      net/mlx5: Add hardware definitions for dump_fill_mkey
      net/mlx5: Limit scope of dump_fill_mkey function
      net/mlx5: Rate limit errors in command interface
    
    Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Which preserves the cover letter, so I prefer it.

This only works if the RDMA patches have no dependency on the latest
RDMA tree.

The Mellanox branch may have additional patches beyond the series in
question; this just means they have progressed on the netdev side. I
usually trim them out of the 'git merge --log' section for greater
clarity.

Otherwise, merge the Mellanox branch with the right commit IDs and
then apply the RDMA patches, such as here:

commit eda98779f7d318cf96f030bbc5b23f034b69b80a
Merge: 4fca037783512c 664000b6bb4352
Author: Jason Gunthorpe <jgg@mellanox.com>
Date:   Tue Jul 24 13:10:23 2018 -0600

    Merge branch 'mellanox/mlx5-next' into rdma.git for-next
    
    From git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git
    
    This is required to resolve dependencies of the next series of RDMA
    patches.
    
    * branch 'mellanox/mlx5-next':
      net/mlx5: Add support for flow table destination number
      net/mlx5: Add forward compatible support for the FTE match data
      net/mlx5: Fix tristate and description for MLX5 module
      net/mlx5: Better return types for CQE API
      net/mlx5: Use ERR_CAST() instead of coding it
      net/mlx5: Add missing SET_DRIVER_VERSION command translation
      net/mlx5: Add XRQ commands definitions
      net/mlx5: Add core support for double vlan push/pop steering action
      net/mlx5: Expose MPEGC (Management PCIe General Configuration) structures
      net/mlx5: FW tracer, add hardware structures
      net/mlx5: fix uaccess beyond "count" in debugfs read/write handlers
    
    Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

It is up to Saeed to sync this branch with netdev.

Neither RDMA nor netdev should apply patches marked for net/mlx5 - they
must go through the shared branch to avoid conflicts.
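The merge-with-log flow described above can be exercised end to end on
throwaway local repositories. Everything below (repo names, branch names,
commit subjects, identities) is a stand-in for the real trees - a sketch of
the mechanics, not the actual maintainer scripts:

```shell
# Simulate the shared-branch workflow with two disposable local repos.
set -eu
work=$(mktemp -d); cd "$work"
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@example.com
export GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@example.com

# "Shared" tree standing in for mellanox/mlx5-next.
git init -q shared
cd shared
git commit -q --allow-empty -m "base"
base=$(git symbolic-ref --short HEAD)   # default branch name varies by git config
git checkout -q -b mlx5-next
git commit -q --allow-empty -m "net/mlx5: Rename incorrect naming in IFC file"
git checkout -q "$base"
cd ..

# "RDMA" tree that consumes the shared branch.
git clone -q shared rdma
cd rdma
git checkout -q -b for-next
# --no-ff forces a real merge commit even when a fast-forward is possible;
# --log appends the shortlog of the merged commits below the -m message,
# which is the section that gets trimmed for clarity when needed.
git merge -q --no-ff --log origin/mlx5-next \
    -m "Merge branch 'mellanox/mlx5-next' into rdma.git for-next"

# Show the resulting merge commit subject.
git log -1 --format=%s
```

After the merge, RDMA-only patches are applied on top of `for-next` as
usual; the shared commits keep identical hashes in both trees, which is
what makes the later merge into netdev conflict-free.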

Jason


* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-21 19:33   ` Leon Romanovsky
@ 2018-09-21 20:05     ` Doug Ledford
  2018-09-21 21:40       ` Leon Romanovsky
  0 siblings, 1 reply; 12+ messages in thread
From: Doug Ledford @ 2018-09-21 20:05 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jason Gunthorpe, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

On Fri, 2018-09-21 at 22:33 +0300, Leon Romanovsky wrote:
> Hope it makes it clear now.

Clear enough.  Between yours and Jason's explanation I think it's well
covered.

> Are you ok with me/Saeed taking first patch to our branch so you will be
> able to take the rest?

Yep.  Let me know a tag when it's ready to merge.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-21 20:05     ` Doug Ledford
@ 2018-09-21 21:40       ` Leon Romanovsky
  2018-09-22  0:15         ` Doug Ledford
  0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2018-09-21 21:40 UTC (permalink / raw)
  To: Doug Ledford
  Cc: Jason Gunthorpe, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

On Fri, Sep 21, 2018 at 04:05:53PM -0400, Doug Ledford wrote:
> On Fri, 2018-09-21 at 22:33 +0300, Leon Romanovsky wrote:
> > Hope it makes it clear now.
>
> Clear enough.  Between yours and Jason's explanation I think it's well
> covered.
>
> > Are you ok with me/Saeed taking first patch to our branch so you will be
> > able to take the rest?
>
> Yep.  Let me know a tag when it's ready to merge.

Done, it doesn't have a tag.
Commit d773ff41a7c ("net/mlx5: Rename incorrect naming in IFC file")

Thanks

>
> --
> Doug Ledford <dledford@redhat.com>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD




* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-21 21:40       ` Leon Romanovsky
@ 2018-09-22  0:15         ` Doug Ledford
  0 siblings, 0 replies; 12+ messages in thread
From: Doug Ledford @ 2018-09-22  0:15 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jason Gunthorpe, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

On Sat, 2018-09-22 at 00:40 +0300, Leon Romanovsky wrote:
> On Fri, Sep 21, 2018 at 04:05:53PM -0400, Doug Ledford wrote:
> > On Fri, 2018-09-21 at 22:33 +0300, Leon Romanovsky wrote:
> > > Hope it makes it clear now.
> > 
> > Clear enough.  Between yours and Jason's explanation I think it's well
> > covered.
> > 
> > > Are you ok with me/Saeed taking first patch to our branch so you will be
> > > able to take the rest?
> > 
> > Yep.  Let me know a tag when it's ready to merge.
> 
> Done, it doesn't have a tag.
> Commit d773ff41a7c ("net/mlx5: Rename incorrect naming in IFC file")


You confused me for a minute!  There is a copy-paste issue in the above.
The actual commit hash is 5d773ff41a7c.  Thanks, got it.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


* Re: [PATCH rdma-next 0/4] mlx5 vport loopback
  2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
                   ` (4 preceding siblings ...)
  2018-09-21 19:14 ` [PATCH rdma-next 0/4] mlx5 vport loopback Doug Ledford
@ 2018-09-22  1:38 ` Doug Ledford
  5 siblings, 0 replies; 12+ messages in thread
From: Doug Ledford @ 2018-09-22  1:38 UTC (permalink / raw)
  To: Leon Romanovsky, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Mark Bloch, Yishai Hadas,
	Saeed Mahameed, linux-netdev

On Mon, 2018-09-17 at 13:30 +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
> 
> Hi,
> 
> This is short series from Mark which extends handling of loopback
> traffic. Originally mlx5 IB dynamically enabled/disabled both unicast
> and multicast based on number of users. However RAW ethernet QPs need
> more granular access.
> 
> Thanks
> 
> Mark Bloch (4):
>   net/mlx5: Rename incorrect naming in IFC file
>   RDMA/mlx5: Refactor transport domain bookkeeping logic
>   RDMA/mlx5: Allow creating RAW ethernet QP with loopback support
>   RDMA/mlx5: Enable vport loopback when user context or QP mandate
> 
>  drivers/infiniband/hw/mlx5/main.c                  | 61 ++++++++++----
>  drivers/infiniband/hw/mlx5/mlx5_ib.h               | 16 +++-
>  drivers/infiniband/hw/mlx5/qp.c                    | 96 +++++++++++++++++-----
>  .../net/ethernet/mellanox/mlx5/core/en_common.c    |  2 +-
>  include/linux/mlx5/mlx5_ifc.h                      |  4 +-
>  include/uapi/rdma/mlx5-abi.h                       |  2 +
>  6 files changed, 138 insertions(+), 43 deletions(-)
> 
> --
> 2.14.4
> 

Thanks, first patch pulled from mlx5-next, remainder of series applied.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD



Thread overview: 12+ messages
2018-09-17 10:30 [PATCH rdma-next 0/4] mlx5 vport loopback Leon Romanovsky
2018-09-17 10:30 ` [PATCH mlx5-next 1/4] net/mlx5: Rename incorrect naming in IFC file Leon Romanovsky
2018-09-17 10:30 ` [PATCH rdma-next 2/4] RDMA/mlx5: Refactor transport domain bookkeeping logic Leon Romanovsky
2018-09-17 10:30 ` [PATCH rdma-next 3/4] RDMA/mlx5: Allow creating RAW ethernet QP with loopback support Leon Romanovsky
2018-09-17 10:30 ` [PATCH rdma-next 4/4] RDMA/mlx5: Enable vport loopback when user context or QP mandate Leon Romanovsky
2018-09-21 19:14 ` [PATCH rdma-next 0/4] mlx5 vport loopback Doug Ledford
2018-09-21 19:33   ` Leon Romanovsky
2018-09-21 20:05     ` Doug Ledford
2018-09-21 21:40       ` Leon Romanovsky
2018-09-22  0:15         ` Doug Ledford
2018-09-21 19:40   ` Jason Gunthorpe
2018-09-22  1:38 ` Doug Ledford
