* [PATCH rdma-next v3] RDMA/mlx4: Provide port number for special QPs
@ 2020-09-14 11:18 Leon Romanovsky
  2020-09-17 15:08 ` Jason Gunthorpe
  0 siblings, 1 reply; 5+ messages in thread
From: Leon Romanovsky @ 2020-09-14 11:18 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Yishai Hadas

From: Leon Romanovsky <leonro@nvidia.com>

Special QPs created by mlx4 all take the same QP port, borrowed from
the context, while they are expected to have different ones.

Fix it by using the HW physical port instead.

It fixes the following error during driver init:
[   12.074150] mlx4_core 0000:05:00.0: mlx4_ib: initializing demux service for 128 qp1 clients
[   12.084036] <mlx4_ib> create_pv_sqp: Couldn't create special QP (-16)
[   12.085123] <mlx4_ib> create_pv_resources: Couldn't create  QP1 (-16)
[   12.088300] mlx4_en: Mellanox ConnectX HCA Ethernet driver v4.0-0

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 Changelog:
v3: mlx4 devices create two special QPs in SRIOV mode; separate them by
port number and a special bit. mlx4 is limited to two ports and will not
be extended, and port_num is not forwarded to the FW, so this is safe.
v2: https://lore.kernel.org/linux-rdma/20200907122156.478360-4-leon@kernel.org/#r
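
For reference, a minimal sketch (not part of this patch) of how the
encoded value could be split back into the physical port and the
"special" marker, assuming bit 7 is used as in the hunk below; the
macro and helper names here are hypothetical:

	#include <linux/types.h>	/* u8, bool */

	/* Hypothetical helpers: bit 7 marks a special (SRIOV SQP) port. */
	#define MLX4_SPECIAL_QP_PORT_FLAG	(1 << 7)

	static inline bool is_special_qp_port(u8 encoded_port)
	{
		/* True when the special marker bit is set. */
		return encoded_port & MLX4_SPECIAL_QP_PORT_FLAG;
	}

	static inline u8 special_qp_phys_port(u8 encoded_port)
	{
		/* Strip the marker to recover the HW physical port. */
		return encoded_port & ~MLX4_SPECIAL_QP_PORT_FLAG;
	}

Since mlx4 exposes at most two ports, bit 7 can never collide with a
real port number, which is what keeps this encoding safe.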
---
 drivers/infiniband/hw/mlx4/mad.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 8bd16474708f..4b565640ba85 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -1792,7 +1792,7 @@ static void pv_qp_event_handler(struct ib_event *event, void *qp_context)
 }

 static int create_pv_sqp(struct mlx4_ib_demux_pv_ctx *ctx,
-			    enum ib_qp_type qp_type, int create_tun)
+			 enum ib_qp_type qp_type, int port, int create_tun)
 {
 	int i, ret;
 	struct mlx4_ib_demux_pv_qp *tun_qp;
@@ -1822,12 +1822,13 @@ static int create_pv_sqp(struct mlx4_ib_demux_pv_ctx *ctx,
 		qp_init_attr.proxy_qp_type = qp_type;
 		qp_attr_mask_INIT = IB_QP_STATE | IB_QP_PKEY_INDEX |
 			   IB_QP_QKEY | IB_QP_PORT;
+		qp_init_attr.init_attr.port_num = ctx->port;
 	} else {
 		qp_init_attr.init_attr.qp_type = qp_type;
 		qp_init_attr.init_attr.create_flags = MLX4_IB_SRIOV_SQP;
 		qp_attr_mask_INIT = IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_QKEY;
+		qp_init_attr.init_attr.port_num = port | 1 << 7;
 	}
-	qp_init_attr.init_attr.port_num = ctx->port;
 	qp_init_attr.init_attr.qp_context = ctx;
 	qp_init_attr.init_attr.event_handler = pv_qp_event_handler;
 	tun_qp->qp = ib_create_qp(ctx->pd, &qp_init_attr.init_attr);
@@ -2026,7 +2027,7 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
 	}

 	if (ctx->has_smi) {
-		ret = create_pv_sqp(ctx, IB_QPT_SMI, create_tun);
+		ret = create_pv_sqp(ctx, IB_QPT_SMI, port, create_tun);
 		if (ret) {
 			pr_err("Couldn't create %s QP0 (%d)\n",
 			       create_tun ? "tunnel for" : "",  ret);
@@ -2034,7 +2035,7 @@ static int create_pv_resources(struct ib_device *ibdev, int slave, int port,
 		}
 	}

-	ret = create_pv_sqp(ctx, IB_QPT_GSI, create_tun);
+	ret = create_pv_sqp(ctx, IB_QPT_GSI, port, create_tun);
 	if (ret) {
 		pr_err("Couldn't create %s QP1 (%d)\n",
 		       create_tun ? "tunnel for" : "",  ret);
--
2.26.2


