* [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II)
@ 2020-04-23 19:02 Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 01/18] RDMA/mlx5: Delete unsupported QP types Leon Romanovsky
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, Aharon Landau, Eli Cohen, Jack Morgenstein,
	linux-kernel, linux-rdma, Maor Gottlieb, Or Gerlitz,
	Roland Dreier

From: Leon Romanovsky <leonro@mellanox.com>

Hi,

This is the second part of the mlx5_ib_create_qp() refactoring series
[1], with one extra fix from Aharon.

It is based on [1].

Thanks

[1] https://lore.kernel.org/lkml/20200420151105.282848-1-leon@kernel.org

Aharon Landau (1):
  RDMA/mlx5: Verify that QP is created with RQ or SQ

Leon Romanovsky (17):
  RDMA/mlx5: Delete unsupported QP types
  RDMA/mlx5: Store QP type in the vendor QP structure
  RDMA/mlx5: Promote RSS RAW QP attribute check to higher level
  RDMA/mlx5: Combine copy of create QP command in RSS RAW QP
  RDMA/mlx5: Remove second user copy in create_user_qp
  RDMA/mlx5: Rely on existence of udata to separate kernel/user flows
  RDMA/mlx5: Delete impossible inlen check
  RDMA/mlx5: Globally parse DEVX UID
  RDMA/mlx5: Separate XRC_TGT QP creation from common flow
  RDMA/mlx5: Separate to user/kernel create QP flows
  RDMA/mlx5: Reduce amount of duplication in QP destroy
  RDMA/mlx5: Group all create QP parameters to simplify in-kernel
    interfaces
  RDMA/mlx5: Promote RSS RAW QP flags check to higher level
  RDMA/mlx5: Handle udata outlen checks in one place
  RDMA/mlx5: Copy response to the user in one place
  RDMA/mlx5: Remove redundant destroy QP call
  RDMA/mlx5: Consolidate all create QP calls into one special function

 drivers/infiniband/hw/mlx5/mlx5_ib.h |  22 +-
 drivers/infiniband/hw/mlx5/odp.c     |   3 +-
 drivers/infiniband/hw/mlx5/qp.c      | 989 ++++++++++++++++-----------
 3 files changed, 586 insertions(+), 428 deletions(-)

--
2.25.3



* [PATCH rdma-next 01/18] RDMA/mlx5: Delete unsupported QP types
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 02/18] RDMA/mlx5: Store QP type in the vendor QP structure Leon Romanovsky
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

There is no need to explicitly check unsupported QP types; rely on the
"default" case of the switch statement to catch them.
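
For reference, the resulting mapping collapses to the pattern below (a
condensed sketch of the hunk that follows, with most supported types
elided):

	static int to_mlx5_st(enum ib_qp_type type)
	{
		switch (type) {
		/* ... supported types keep their explicit mappings ... */
		case MLX5_IB_QPT_DCI:	return MLX5_QP_ST_DCI;
		case IB_QPT_RAW_PACKET:	return MLX5_QP_ST_RAW_ETHERTYPE;
		/*
		 * IB_QPT_RAW_IPV6, IB_QPT_RAW_ETHERTYPE, IB_QPT_MAX and
		 * any future unsupported type land here without needing
		 * their own case labels.
		 */
		default:		return -EINVAL;
		}
	}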

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 452683776445..04206b9c2c48 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -760,10 +760,7 @@ static int to_mlx5_st(enum ib_qp_type type)
 	case IB_QPT_SMI:		return MLX5_QP_ST_QP0;
 	case MLX5_IB_QPT_HW_GSI:	return MLX5_QP_ST_QP1;
 	case MLX5_IB_QPT_DCI:		return MLX5_QP_ST_DCI;
-	case IB_QPT_RAW_IPV6:		return MLX5_QP_ST_RAW_IPV6;
-	case IB_QPT_RAW_PACKET:
-	case IB_QPT_RAW_ETHERTYPE:	return MLX5_QP_ST_RAW_ETHERTYPE;
-	case IB_QPT_MAX:
+	case IB_QPT_RAW_PACKET:		return MLX5_QP_ST_RAW_ETHERTYPE;
 	default:		return -EINVAL;
 	}
 }
@@ -2282,14 +2279,10 @@ static void get_cqs(enum ib_qp_type qp_type,
 	case IB_QPT_RC:
 	case IB_QPT_UC:
 	case IB_QPT_UD:
-	case IB_QPT_RAW_IPV6:
-	case IB_QPT_RAW_ETHERTYPE:
 	case IB_QPT_RAW_PACKET:
 		*send_cq = ib_send_cq ? to_mcq(ib_send_cq) : NULL;
 		*recv_cq = ib_recv_cq ? to_mcq(ib_recv_cq) : NULL;
 		break;
-
-	case IB_QPT_MAX:
 	default:
 		*send_cq = NULL;
 		*recv_cq = NULL;
@@ -2434,9 +2427,6 @@ static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
 	case IB_QPT_DRIVER:
 	case IB_QPT_GSI:
 		return 0;
-	case IB_QPT_RAW_IPV6:
-	case IB_QPT_RAW_ETHERTYPE:
-	case IB_QPT_MAX:
 	default:
 		goto out;
 	}
-- 
2.25.3



* [PATCH rdma-next 02/18] RDMA/mlx5: Store QP type in the vendor QP structure
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 01/18] RDMA/mlx5: Delete unsupported QP types Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 03/18] RDMA/mlx5: Promote RSS RAW QP attribute check to higher level Leon Romanovsky
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The QP type is stored in the IB/core QP struct, but it doesn't carry
all the needed information, such as the internal QP type used by the
driver itself. Update mlx5_ib to cache a QP type that covers both the
IBTA and the Mellanox-specific types.

Such a change allows us to clean up the QP creation flow even further.
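
The convention, condensed from the hunks below: resolve the type once
in mlx5_ib_create_qp() and have every consumer read qp->type (a
sketch, not the verbatim driver code):

	qp->type = type;	/* IBTA type validated by check_qp_type() */
	if (udata)
		/* may refine IB_QPT_DRIVER into MLX5_IB_QPT_DCI/DCT */
		err = process_vendor_flags(dev, qp, &ucmd);

	/* ... and consumers no longer special-case IB_QPT_DRIVER: */
	mlx5_st = to_mlx5_st(qp->type);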

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |   8 +-
 drivers/infiniband/hw/mlx5/odp.c     |   3 +-
 drivers/infiniband/hw/mlx5/qp.c      | 136 +++++++++++++--------------
 3 files changed, 74 insertions(+), 73 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index cb2331b03f7b..6b500f9b2346 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -465,8 +465,12 @@ struct mlx5_ib_qp {
 	struct mlx5_rate_limit	rl;
 	u32                     underlay_qpn;
 	u32			flags_en;
-	/* storage for qp sub type when core qp type is IB_QPT_DRIVER */
-	enum ib_qp_type		qp_sub_type;
+	/*
+	 * IB/core doesn't store low-level QP types, so
+	 * store both MLX and IBTA types in the field below.
+	 * IB_QPT_DRIVER will be broken down into DCI/DCT subtypes.
+	 */
+	enum ib_qp_type		type;
 	/* A flag to indicate if there's a new counter is configured
 	 * but not take effective
 	 */
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index e4759310c0e2..70577d546567 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -1136,8 +1136,7 @@ static int mlx5_ib_mr_initiator_pfault_handler(
 	if (qp->ibqp.qp_type == IB_QPT_XRC_INI)
 		*wqe += sizeof(struct mlx5_wqe_xrc_seg);
 
-	if (qp->ibqp.qp_type == IB_QPT_UD ||
-	    qp->qp_sub_type == MLX5_IB_QPT_DCI) {
+	if (qp->type == IB_QPT_UD || qp->type == MLX5_IB_QPT_DCI) {
 		av = *wqe;
 		if (av->dqp_dct & cpu_to_be32(MLX5_EXTENDED_UD_AV))
 			*wqe += sizeof(struct mlx5_av);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 04206b9c2c48..6e2f71a48cbb 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1227,14 +1227,13 @@ static void destroy_qp_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
 
 static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 {
-	if (attr->srq || (attr->qp_type == IB_QPT_XRC_TGT) ||
-	    (qp->qp_sub_type == MLX5_IB_QPT_DCI) ||
-	    (attr->qp_type == IB_QPT_XRC_INI))
+	if (attr->srq || (qp->type == IB_QPT_XRC_TGT) ||
+	    (qp->type == MLX5_IB_QPT_DCI) || (qp->type == IB_QPT_XRC_INI))
 		return MLX5_SRQ_RQ;
 	else if (!qp->has_rq)
 		return MLX5_ZERO_LEN_RQ;
-	else
-		return MLX5_NON_ZERO_RQ;
+
+	return MLX5_NON_ZERO_RQ;
 }
 
 static int create_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
@@ -1967,9 +1966,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	spin_lock_init(&qp->sq.lock);
 	spin_lock_init(&qp->rq.lock);
 
-	mlx5_st = to_mlx5_st((init_attr->qp_type != IB_QPT_DRIVER) ?
-				     init_attr->qp_type :
-				     qp->qp_sub_type);
+	mlx5_st = to_mlx5_st(qp->type);
 	if (mlx5_st < 0)
 		return -EINVAL;
 
@@ -2073,8 +2070,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 					  MLX5_RES_SCAT_DATA32_CQE);
 	}
 	if ((qp->flags_en & MLX5_QP_FLAG_SCATTER_CQE) &&
-	    (qp->qp_sub_type == MLX5_IB_QPT_DCI ||
-	     init_attr->qp_type == IB_QPT_RC))
+	    (qp->type == MLX5_IB_QPT_DCI || qp->type == IB_QPT_RC))
 		configure_requester_scat_cqe(dev, init_attr, ucmd, qpc);
 
 	if (qp->rq.wqe_cnt) {
@@ -2166,7 +2162,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	base->container_mibqp = qp;
 	base->mqp.event = mlx5_ib_qp_event;
 
-	get_cqs(init_attr->qp_type, init_attr->send_cq, init_attr->recv_cq,
+	get_cqs(qp->type, init_attr->send_cq, init_attr->recv_cq,
 		&send_cq, &recv_cq);
 	spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
 	mlx5_ib_lock_cqs(send_cq, recv_cq);
@@ -2406,7 +2402,8 @@ static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	return 0;
 }
 
-static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
+static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
+			 enum ib_qp_type *type)
 {
 	if (attr->qp_type == IB_QPT_DRIVER && !MLX5_CAP_GEN(dev->mdev, dct))
 		goto out;
@@ -2426,11 +2423,12 @@ static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
 	case MLX5_IB_QPT_REG_UMR:
 	case IB_QPT_DRIVER:
 	case IB_QPT_GSI:
-		return 0;
+		break;
 	default:
 		goto out;
 	}
 
+	*type = attr->qp_type;
 	return 0;
 
 out:
@@ -2518,7 +2516,6 @@ static void process_vendor_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
 }
 
 static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
-				struct ib_qp_init_attr *attr,
 				struct mlx5_ib_create_qp *ucmd)
 {
 	struct mlx5_core_dev *mdev = dev->mdev;
@@ -2527,17 +2524,20 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 
 	switch (flags & (MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI)) {
 	case MLX5_QP_FLAG_TYPE_DCI:
-		qp->qp_sub_type = MLX5_IB_QPT_DCI;
+		qp->type = MLX5_IB_QPT_DCI;
 		break;
 	case MLX5_QP_FLAG_TYPE_DCT:
-		qp->qp_sub_type = MLX5_IB_QPT_DCT;
-		fallthrough;
-	default:
+		qp->type = MLX5_IB_QPT_DCT;
 		break;
-	}
-
-	if (attr->qp_type == IB_QPT_DRIVER && !qp->qp_sub_type)
+	default:
+		if (qp->type != IB_QPT_DRIVER)
+			break;
+		/*
+		 * It is IB_QPT_DRIVER and either no subtype
+		 * or a wrong subtype was provided.
+		 */
 		return -EINVAL;
+	}
 
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCI, true, qp);
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCT, true, qp);
@@ -2546,7 +2546,7 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_SCATTER_CQE,
 			    MLX5_CAP_GEN(mdev, sctr_data_cqe), qp);
 
-	if (attr->qp_type == IB_QPT_RAW_PACKET) {
+	if (qp->type == IB_QPT_RAW_PACKET) {
 		cond = MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) ||
 		       MLX5_CAP_ETH(mdev, tunnel_stateless_gre) ||
 		       MLX5_CAP_ETH(mdev, tunnel_stateless_geneve_rx);
@@ -2560,7 +2560,7 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				    qp);
 	}
 
-	if (attr->qp_type == IB_QPT_RC)
+	if (qp->type == IB_QPT_RC)
 		process_vendor_flag(dev, &flags,
 				    MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE,
 				    MLX5_CAP_GEN(mdev, qp_packet_based), qp);
@@ -2597,12 +2597,12 @@ static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
 static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				struct ib_qp_init_attr *attr)
 {
-	enum ib_qp_type qp_type = attr->qp_type;
+	enum ib_qp_type qp_type = qp->type;
 	struct mlx5_core_dev *mdev = dev->mdev;
 	int create_flags = attr->create_flags;
 	bool cond;
 
-	if (qp->qp_sub_type == MLX5_IB_QPT_DCT)
+	if (qp_type == MLX5_IB_QPT_DCT)
 		return (create_flags) ? -EINVAL : 0;
 
 	if (qp_type == IB_QPT_RAW_PACKET && attr->rwq_ind_tbl)
@@ -2656,34 +2656,6 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return (create_flags) ? -EINVAL : 0;
 }
 
-static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-			    struct ib_qp_init_attr *attr,
-			    struct mlx5_ib_create_qp *ucmd,
-			    struct ib_udata *udata)
-{
-	struct mlx5_ib_dev *mdev = to_mdev(pd->device);
-	int ret = -EINVAL;
-
-	switch (qp->qp_sub_type) {
-	case MLX5_IB_QPT_DCT:
-		if (!attr->srq || !attr->recv_cq)
-			goto out;
-
-		ret = create_dct(pd, qp, attr, ucmd, udata);
-		break;
-	case MLX5_IB_QPT_DCI:
-		if (attr->cap.max_recv_wr || attr->cap.max_recv_sge)
-			goto out;
-
-		ret = create_qp_common(mdev, pd, attr, ucmd, udata, qp);
-		break;
-	default:
-		return -EINVAL;
-	}
-
-out:	return ret;
-}
-
 static size_t process_udata_size(struct ib_qp_init_attr *attr,
 				 struct ib_udata *udata)
 {
@@ -2707,6 +2679,30 @@ static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	return create_qp_common(dev, pd, attr, ucmd, udata, qp);
 }
 
+static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+			 struct ib_qp_init_attr *attr)
+{
+	int ret = 0;
+
+	switch (qp->type) {
+	case MLX5_IB_QPT_DCT:
+		ret = (!attr->srq || !attr->recv_cq) ? -EINVAL : 0;
+		break;
+	case MLX5_IB_QPT_DCI:
+		ret = (attr->cap.max_recv_wr || attr->cap.max_recv_sge) ?
+			      -EINVAL :
+			      0;
+		break;
+	default:
+		break;
+	}
+
+	if (ret)
+		mlx5_ib_dbg(dev, "QP type %d has wrong attributes\n", qp->type);
+
+	return ret;
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata)
@@ -2714,13 +2710,14 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	struct mlx5_ib_create_qp ucmd = {};
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
+	enum ib_qp_type type;
 	u16 xrcdn = 0;
 	int err;
 
 	dev = pd ? to_mdev(pd->device) :
 		   to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
 
-	err = check_qp_type(dev, init_attr);
+	err = check_qp_type(dev, init_attr, &type);
 	if (err) {
 		mlx5_ib_dbg(dev, "Unsupported QP type %d\n",
 			    init_attr->qp_type);
@@ -2750,8 +2747,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (!qp)
 		return ERR_PTR(-ENOMEM);
 
+	qp->type = type;
 	if (udata) {
-		err = process_vendor_flags(dev, qp, init_attr, &ucmd);
+		err = process_vendor_flags(dev, qp, &ucmd);
 		if (err)
 			goto free_qp;
 	}
@@ -2759,16 +2757,20 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (err)
 		goto free_qp;
 
-	if (init_attr->qp_type == IB_QPT_XRC_TGT)
+	if (qp->type == IB_QPT_XRC_TGT)
 		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
 
-	switch (init_attr->qp_type) {
-	case IB_QPT_DRIVER:
-		err = create_driver_qp(pd, qp, init_attr, &ucmd, udata);
-		break;
+	err = check_qp_attr(dev, qp, init_attr);
+	if (err)
+		goto free_qp;
+
+	switch (qp->type) {
 	case IB_QPT_RAW_PACKET:
 		err = create_raw_qp(pd, qp, init_attr, &ucmd, udata);
 		break;
+	case MLX5_IB_QPT_DCT:
+		err = create_dct(pd, qp, init_attr, &ucmd, udata);
+		break;
 	default:
 		err = create_qp_common(dev, pd, init_attr,
 				       (udata) ? &ucmd : NULL, udata, qp);
@@ -2821,7 +2823,7 @@ int mlx5_ib_destroy_qp(struct ib_qp *qp, struct ib_udata *udata)
 	if (unlikely(qp->qp_type == IB_QPT_GSI))
 		return mlx5_ib_gsi_destroy_qp(qp);
 
-	if (mqp->qp_sub_type == MLX5_IB_QPT_DCT)
+	if (mqp->type == MLX5_IB_QPT_DCT)
 		return mlx5_ib_destroy_dct(mqp);
 
 	destroy_qp_common(dev, mqp, udata);
@@ -3568,8 +3570,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 	u16 op;
 	u8 tx_affinity = 0;
 
-	mlx5_st = to_mlx5_st(ibqp->qp_type == IB_QPT_DRIVER ?
-			     qp->qp_sub_type : ibqp->qp_type);
+	mlx5_st = to_mlx5_st(qp->type);
 	if (mlx5_st < 0)
 		return -EINVAL;
 
@@ -4023,11 +4024,8 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 	if (unlikely(ibqp->qp_type == IB_QPT_GSI))
 		return mlx5_ib_gsi_modify_qp(ibqp, attr, attr_mask);
 
-	if (ibqp->qp_type == IB_QPT_DRIVER)
-		qp_type = qp->qp_sub_type;
-	else
-		qp_type = (unlikely(ibqp->qp_type == MLX5_IB_QPT_HW_GSI)) ?
-			IB_QPT_GSI : ibqp->qp_type;
+	qp_type = (unlikely(ibqp->qp_type == MLX5_IB_QPT_HW_GSI)) ? IB_QPT_GSI :
+								    qp->type;
 
 	if (qp_type == MLX5_IB_QPT_DCT)
 		return mlx5_ib_modify_dct(ibqp, attr, attr_mask, udata);
@@ -5866,7 +5864,7 @@ int mlx5_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	memset(qp_init_attr, 0, sizeof(*qp_init_attr));
 	memset(qp_attr, 0, sizeof(*qp_attr));
 
-	if (unlikely(qp->qp_sub_type == MLX5_IB_QPT_DCT))
+	if (unlikely(qp->type == MLX5_IB_QPT_DCT))
 		return mlx5_ib_dct_query_qp(dev, qp, qp_attr,
 					    qp_attr_mask, qp_init_attr);
 
-- 
2.25.3



* [PATCH rdma-next 03/18] RDMA/mlx5: Promote RSS RAW QP attribute check to higher level
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 01/18] RDMA/mlx5: Delete unsupported QP types Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 02/18] RDMA/mlx5: Store QP type in the vendor QP structure Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 04/18] RDMA/mlx5: Combine copy of create QP command in RSS RAW QP Leon Romanovsky
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Perform the RAW PACKET QP attribute checks in a separate function.
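
The check now lives in the generic check_qp_attr() switch. A sketch of
the added arm follows; the rationale in the comment is presumed, since
the patch itself doesn't spell it out:

	case IB_QPT_RAW_PACKET:
		/*
		 * An RSS RAW QP (attr->rwq_ind_tbl set) presumably only
		 * receives, so pairing it with a send CQ is invalid.
		 */
		ret = (attr->rwq_ind_tbl && attr->send_cq) ? -EINVAL : 0;
		break;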

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 6e2f71a48cbb..f7c32dc5cf9c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1645,9 +1645,6 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	size_t required_cmd_sz;
 	u8 lb_flag = 0;
 
-	if (init_attr->send_cq)
-		return -EINVAL;
-
 	min_resp_len = offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
 	if (udata->outlen < min_resp_len)
 		return -EINVAL;
@@ -2693,6 +2690,9 @@ static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 			      -EINVAL :
 			      0;
 		break;
+	case IB_QPT_RAW_PACKET:
+		ret = (attr->rwq_ind_tbl && attr->send_cq) ? -EINVAL : 0;
+		break;
 	default:
 		break;
 	}
-- 
2.25.3



* [PATCH rdma-next 04/18] RDMA/mlx5: Combine copy of create QP command in RSS RAW QP
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (2 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 03/18] RDMA/mlx5: Promote RSS RAW QP attribute check to higher level Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 05/18] RDMA/mlx5: Remove second user copy in create_user_qp Leon Romanovsky
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Change the create QP flow to handle all copy_from_user() operations in
one place.
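
The resulting single-copy flow in mlx5_ib_create_qp(), condensed from
the hunks below (error unwinding elided):

	if (udata) {
		size_t inlen = process_udata_size(init_attr, udata);

		if (!inlen)
			return ERR_PTR(-EINVAL);

		/* one buffer, sized per QP flavor, filled by one copy */
		ucmd = kzalloc(inlen, GFP_KERNEL);
		if (!ucmd)
			return ERR_PTR(-ENOMEM);

		err = ib_copy_from_udata(ucmd, udata, inlen);
		if (err)
			goto free_ucmd;
	}

	/*
	 * Consumers later cast the void *ucmd to struct
	 * mlx5_ib_create_qp or struct mlx5_ib_create_qp_rss, depending
	 * on attr->rwq_ind_tbl.
	 */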

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 156 +++++++++++++++++---------------
 1 file changed, 82 insertions(+), 74 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index f7c32dc5cf9c..07cd976c4a20 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1624,6 +1624,7 @@ static void destroy_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *q
 
 static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 				 struct ib_qp_init_attr *init_attr,
+				 struct mlx5_ib_create_qp_rss *ucmd,
 				 struct ib_udata *udata)
 {
 	struct mlx5_ib_ucontext *mucontext = rdma_udata_to_drv_context(
@@ -1641,46 +1642,26 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	u32 outer_l4;
 	size_t min_resp_len;
 	u32 tdn = mucontext->tdn;
-	struct mlx5_ib_create_qp_rss ucmd = {};
-	size_t required_cmd_sz;
 	u8 lb_flag = 0;
 
 	min_resp_len = offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
 	if (udata->outlen < min_resp_len)
 		return -EINVAL;
 
-	required_cmd_sz = offsetof(typeof(ucmd), flags) + sizeof(ucmd.flags);
-	if (udata->inlen < required_cmd_sz) {
-		mlx5_ib_dbg(dev, "invalid inlen\n");
-		return -EINVAL;
-	}
-
-	if (udata->inlen > sizeof(ucmd) &&
-	    !ib_is_udata_cleared(udata, sizeof(ucmd),
-				 udata->inlen - sizeof(ucmd))) {
-		mlx5_ib_dbg(dev, "inlen is not supported\n");
-		return -EOPNOTSUPP;
-	}
-
-	if (ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen))) {
-		mlx5_ib_dbg(dev, "copy failed\n");
-		return -EFAULT;
-	}
-
-	if (ucmd.comp_mask) {
+	if (ucmd->comp_mask) {
 		mlx5_ib_dbg(dev, "invalid comp mask\n");
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.flags & ~(MLX5_QP_FLAG_TUNNEL_OFFLOADS |
+	if (ucmd->flags & ~(MLX5_QP_FLAG_TUNNEL_OFFLOADS |
 			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
 			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)) {
 		mlx5_ib_dbg(dev, "invalid flags\n");
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_INNER &&
-	    !(ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)) {
+	if (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_INNER &&
+	    !(ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)) {
 		mlx5_ib_dbg(dev, "Tunnel offloads must be set for inner RSS\n");
 		return -EOPNOTSUPP;
 	}
@@ -1717,29 +1698,29 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 
 	hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer);
 
-	if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)
+	if (ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)
 		MLX5_SET(tirc, tirc, tunneled_offload_en, 1);
 
 	MLX5_SET(tirc, tirc, self_lb_block, lb_flag);
 
-	if (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_INNER)
+	if (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_INNER)
 		hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_inner);
 	else
 		hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer);
 
-	switch (ucmd.rx_hash_function) {
+	switch (ucmd->rx_hash_function) {
 	case MLX5_RX_HASH_FUNC_TOEPLITZ:
 	{
 		void *rss_key = MLX5_ADDR_OF(tirc, tirc, rx_hash_toeplitz_key);
 		size_t len = MLX5_FLD_SZ_BYTES(tirc, rx_hash_toeplitz_key);
 
-		if (len != ucmd.rx_key_len) {
+		if (len != ucmd->rx_key_len) {
 			err = -EINVAL;
 			goto err;
 		}
 
 		MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_TOEPLITZ);
-		memcpy(rss_key, ucmd.rx_hash_key, len);
+		memcpy(rss_key, ucmd->rx_hash_key, len);
 		break;
 	}
 	default:
@@ -1747,7 +1728,7 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		goto err;
 	}
 
-	if (!ucmd.rx_hash_fields_mask) {
+	if (!ucmd->rx_hash_fields_mask) {
 		/* special case when this TIR serves as steering entry without hashing */
 		if (!init_attr->rwq_ind_tbl->log_ind_tbl_size)
 			goto create_tir;
@@ -1755,29 +1736,31 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		goto err;
 	}
 
-	if (((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
-	     (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4)) &&
-	     ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6) ||
-	     (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))) {
+	if (((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
+	     (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4)) &&
+	     ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6) ||
+	     (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))) {
 		err = -EINVAL;
 		goto err;
 	}
 
 	/* If none of IPV4 & IPV6 SRC/DST was set - this bit field is ignored */
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4))
 		MLX5_SET(rx_hash_field_select, hfso, l3_prot_type,
 			 MLX5_L3_PROT_TYPE_IPV4);
-	else if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6) ||
-		 (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))
+	else if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6) ||
+		 (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))
 		MLX5_SET(rx_hash_field_select, hfso, l3_prot_type,
 			 MLX5_L3_PROT_TYPE_IPV6);
 
-	outer_l4 = ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
-		    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP)) << 0 |
-		   ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP) ||
-		    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP)) << 1 |
-		   (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_IPSEC_SPI) << 2;
+	outer_l4 = ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
+		    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP))
+			   << 0 |
+		   ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP) ||
+		    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP))
+			   << 1 |
+		   (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_IPSEC_SPI) << 2;
 
 	/* Check that only one l4 protocol is set */
 	if (outer_l4 & (outer_l4 - 1)) {
@@ -1786,32 +1769,32 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	}
 
 	/* If none of TCP & UDP SRC/DST was set - this bit field is ignored */
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP))
 		MLX5_SET(rx_hash_field_select, hfso, l4_prot_type,
 			 MLX5_L4_PROT_TYPE_TCP);
-	else if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP) ||
-		 (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP))
+	else if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP) ||
+		 (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP))
 		MLX5_SET(rx_hash_field_select, hfso, l4_prot_type,
 			 MLX5_L4_PROT_TYPE_UDP);
 
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV4) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_IPV6))
 		selected_fields |= MLX5_HASH_FIELD_SEL_SRC_IP;
 
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV4) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_IPV6))
 		selected_fields |= MLX5_HASH_FIELD_SEL_DST_IP;
 
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_TCP) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_SRC_PORT_UDP))
 		selected_fields |= MLX5_HASH_FIELD_SEL_L4_SPORT;
 
-	if ((ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP) ||
-	    (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP))
+	if ((ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_TCP) ||
+	    (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_DST_PORT_UDP))
 		selected_fields |= MLX5_HASH_FIELD_SEL_L4_DPORT;
 
-	if (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_IPSEC_SPI)
+	if (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_IPSEC_SPI)
 		selected_fields |= MLX5_HASH_FIELD_SEL_IPSEC_SPI;
 
 	MLX5_SET(rx_hash_field_select, hfso, selected_fields, selected_fields);
@@ -2513,11 +2496,16 @@ static void process_vendor_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
 }
 
 static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
-				struct mlx5_ib_create_qp *ucmd)
+				void *ucmd, struct ib_qp_init_attr *attr)
 {
 	struct mlx5_core_dev *mdev = dev->mdev;
-	int flags = ucmd->flags;
 	bool cond;
+	int flags;
+
+	if (attr->rwq_ind_tbl)
+		flags = ((struct mlx5_ib_create_qp_rss *)ucmd)->flags;
+	else
+		flags = ((struct mlx5_ib_create_qp *)ucmd)->flags;
 
 	switch (flags & (MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI)) {
 	case MLX5_QP_FLAG_TYPE_DCI:
@@ -2657,21 +2645,32 @@ static size_t process_udata_size(struct ib_qp_init_attr *attr,
 				 struct ib_udata *udata)
 {
 	size_t ucmd = sizeof(struct mlx5_ib_create_qp);
+	size_t inlen = udata->inlen;
 
 	if (attr->qp_type == IB_QPT_DRIVER)
-		return (udata->inlen < ucmd) ? 0 : ucmd;
+		return (inlen < ucmd) ? 0 : ucmd;
+
+	if (!attr->rwq_ind_tbl)
+		return ucmd;
+
+	if (inlen < offsetofend(struct mlx5_ib_create_qp_rss, flags))
+		return 0;
+
+	ucmd = sizeof(struct mlx5_ib_create_qp_rss);
+	if (inlen > ucmd && !ib_is_udata_cleared(udata, ucmd, inlen - ucmd))
+		return 0;
 
-	return ucmd;
+	return min(ucmd, inlen);
 }
 
 static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-			 struct ib_qp_init_attr *attr,
-			 struct mlx5_ib_create_qp *ucmd, struct ib_udata *udata)
+			 struct ib_qp_init_attr *attr, void *ucmd,
+			 struct ib_udata *udata)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 
 	if (attr->rwq_ind_tbl)
-		return create_rss_raw_qp_tir(pd, qp, attr, udata);
+		return create_rss_raw_qp_tir(pd, qp, attr, ucmd, udata);
 
 	return create_qp_common(dev, pd, attr, ucmd, udata, qp);
 }
@@ -2707,10 +2706,10 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata)
 {
-	struct mlx5_ib_create_qp ucmd = {};
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
 	enum ib_qp_type type;
+	void *ucmd = NULL;
 	u16 xrcdn = 0;
 	int err;
 
@@ -2731,25 +2730,31 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (init_attr->qp_type == IB_QPT_GSI)
 		return mlx5_ib_gsi_create_qp(pd, init_attr);
 
-	if (udata && !init_attr->rwq_ind_tbl) {
+	if (udata) {
 		size_t inlen =
 			process_udata_size(init_attr, udata);
 
 		if (!inlen)
 			return ERR_PTR(-EINVAL);
 
-		err = ib_copy_from_udata(&ucmd, udata, inlen);
+		ucmd = kzalloc(inlen, GFP_KERNEL);
+		if (!ucmd)
+			return ERR_PTR(-ENOMEM);
+
+		err = ib_copy_from_udata(ucmd, udata, inlen);
 		if (err)
-			return ERR_PTR(err);
+			goto free_ucmd;
 	}
 
 	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
-	if (!qp)
-		return ERR_PTR(-ENOMEM);
+	if (!qp) {
+		err = -ENOMEM;
+		goto free_ucmd;
+	}
 
 	qp->type = type;
 	if (udata) {
-		err = process_vendor_flags(dev, qp, &ucmd);
+		err = process_vendor_flags(dev, qp, ucmd, init_attr);
 		if (err)
 			goto free_qp;
 	}
@@ -2766,20 +2771,21 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 
 	switch (qp->type) {
 	case IB_QPT_RAW_PACKET:
-		err = create_raw_qp(pd, qp, init_attr, &ucmd, udata);
+		err = create_raw_qp(pd, qp, init_attr, ucmd, udata);
 		break;
 	case MLX5_IB_QPT_DCT:
-		err = create_dct(pd, qp, init_attr, &ucmd, udata);
+		err = create_dct(pd, qp, init_attr, ucmd, udata);
 		break;
 	default:
-		err = create_qp_common(dev, pd, init_attr,
-				       (udata) ? &ucmd : NULL, udata, qp);
+		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp);
 	}
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp_common failed\n");
 		goto free_qp;
 	}
 
+	kfree(ucmd);
+
 	if (is_qp0(init_attr->qp_type))
 		qp->ibqp.qp_num = 0;
 	else if (is_qp1(init_attr->qp_type))
@@ -2793,6 +2799,8 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 
 free_qp:
 	kfree(qp);
+free_ucmd:
+	kfree(ucmd);
 	return ERR_PTR(err);
 }
 
-- 
2.25.3



* [PATCH rdma-next 05/18] RDMA/mlx5: Remove second user copy in create_user_qp
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (3 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 04/18] RDMA/mlx5: Combine copy of create QP command in RSS RAW QP Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 06/18] RDMA/mlx5: Rely on existence of udata to separate kernel/user flows Leon Romanovsky
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Merge the copy_from_user() done in create_user_qp() into the common
copy performed by the general create QP code.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 34 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 07cd976c4a20..0f7a8f45764f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -914,13 +914,12 @@ static int adjust_bfregn(struct mlx5_ib_dev *dev,
 
 static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			  struct mlx5_ib_qp *qp, struct ib_udata *udata,
-			  struct ib_qp_init_attr *attr,
-			  u32 **in,
+			  struct ib_qp_init_attr *attr, u32 **in,
 			  struct mlx5_ib_create_qp_resp *resp, int *inlen,
-			  struct mlx5_ib_qp_base *base)
+			  struct mlx5_ib_qp_base *base,
+			  struct mlx5_ib_create_qp *ucmd)
 {
 	struct mlx5_ib_ucontext *context;
-	struct mlx5_ib_create_qp ucmd;
 	struct mlx5_ib_ubuffer *ubuffer = &base->ubuffer;
 	int page_shift = 0;
 	int uar_index = 0;
@@ -934,24 +933,18 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	u16 uid;
 	u32 uar_flags;
 
-	err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
-	if (err) {
-		mlx5_ib_dbg(dev, "copy failed\n");
-		return err;
-	}
-
 	context = rdma_udata_to_drv_context(udata, struct mlx5_ib_ucontext,
 					    ibucontext);
-	uar_flags = ucmd.flags & (MLX5_QP_FLAG_UAR_PAGE_INDEX |
-				  MLX5_QP_FLAG_BFREG_INDEX);
+	uar_flags = qp->flags_en &
+		    (MLX5_QP_FLAG_UAR_PAGE_INDEX | MLX5_QP_FLAG_BFREG_INDEX);
 	switch (uar_flags) {
 	case MLX5_QP_FLAG_UAR_PAGE_INDEX:
-		uar_index = ucmd.bfreg_index;
+		uar_index = ucmd->bfreg_index;
 		bfregn = MLX5_IB_INVALID_BFREG;
 		break;
 	case MLX5_QP_FLAG_BFREG_INDEX:
 		uar_index = bfregn_to_uar_index(dev, &context->bfregi,
-						ucmd.bfreg_index, true);
+						ucmd->bfreg_index, true);
 		if (uar_index < 0)
 			return uar_index;
 		bfregn = MLX5_IB_INVALID_BFREG;
@@ -976,12 +969,12 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB);
 	qp->sq.offset = qp->rq.wqe_cnt << qp->rq.wqe_shift;
 
-	err = set_user_buf_size(dev, qp, &ucmd, base, attr);
+	err = set_user_buf_size(dev, qp, ucmd, base, attr);
 	if (err)
 		goto err_bfreg;
 
-	if (ucmd.buf_addr && ubuffer->buf_size) {
-		ubuffer->buf_addr = ucmd.buf_addr;
+	if (ucmd->buf_addr && ubuffer->buf_size) {
+		ubuffer->buf_addr = ucmd->buf_addr;
 		err = mlx5_ib_umem_get(dev, udata, ubuffer->buf_addr,
 				       ubuffer->buf_size, &ubuffer->umem,
 				       &npages, &page_shift, &ncont, &offset);
@@ -1018,7 +1011,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		resp->bfreg_index = MLX5_IB_INVALID_BFREG;
 	qp->bfregn = bfregn;
 
-	err = mlx5_ib_db_map_user(context, udata, ucmd.db_addr, &qp->db);
+	err = mlx5_ib_db_map_user(context, udata, ucmd->db_addr, &qp->db);
 	if (err) {
 		mlx5_ib_dbg(dev, "map failed\n");
 		goto err_free;
@@ -1991,7 +1984,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 				return -EINVAL;
 			}
 			err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
-					     &resp, &inlen, base);
+					     &resp, &inlen, base, ucmd);
 			if (err)
 				mlx5_ib_dbg(dev, "err %d\n", err);
 		} else {
@@ -2550,6 +2543,9 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				    MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE,
 				    MLX5_CAP_GEN(mdev, qp_packet_based), qp);
 
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_BFREG_INDEX, true, qp);
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_UAR_PAGE_INDEX, true, qp);
+
 	if (flags)
 		mlx5_ib_dbg(dev, "udata has unsupported flags 0x%X\n", flags);
 
-- 
2.25.3



* [PATCH rdma-next 06/18] RDMA/mlx5: Rely on existence of udata to separate kernel/user flows
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (4 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 05/18] RDMA/mlx5: Remove second user copy in create_user_qp Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 07/18] RDMA/mlx5: Delete impossible inlen check Leon Romanovsky
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Instead of keeping a special field to separate the kernel/user
create/destroy flows, rely on the existence of the udata pointer. All
allocation flows use kzalloc() and leave uninitialized pointers as
NULL, which makes the MLX5_QP_EMPTY and MLX5_QP_KERNEL flows identical.
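
The replacement convention, condensed from the hunks below:

	if (udata)			/* was MLX5_QP_USER */
		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
	else				/* was MLX5_QP_KERNEL */
		destroy_qp_kernel(dev, qp);

	/*
	 * Inside destroy_qp_kernel(), anything that was never allocated
	 * is still NULL thanks to kzalloc(), which covers the old
	 * MLX5_QP_EMPTY case:
	 */
	if (qp->db.db)
		mlx5_db_free(dev->mdev, &qp->db);
	if (qp->buf.frags)
		mlx5_frag_buf_free(dev->mdev, &qp->buf);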

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 14 --------------
 drivers/infiniband/hw/mlx5/qp.c      | 23 ++++++++++-------------
 2 files changed, 10 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 6b500f9b2346..25007e3aebf7 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -337,7 +337,6 @@ struct mlx5_ib_rwq {
 	struct ib_umem		*umem;
 	size_t			buf_size;
 	unsigned int		page_shift;
-	int			create_type;
 	struct mlx5_db		db;
 	u32			user_index;
 	u32			wqe_count;
@@ -346,17 +345,6 @@ struct mlx5_ib_rwq {
 	u32			create_flags; /* Use enum mlx5_ib_wq_flags */
 };
 
-enum {
-	MLX5_QP_USER,
-	MLX5_QP_KERNEL,
-	MLX5_QP_EMPTY
-};
-
-enum {
-	MLX5_WQ_USER,
-	MLX5_WQ_KERNEL
-};
-
 struct mlx5_ib_rwq_ind_table {
 	struct ib_rwq_ind_table ib_rwq_ind_tbl;
 	u32			rqtn;
@@ -457,8 +445,6 @@ struct mlx5_ib_qp {
 	 */
 	int			bfregn;
 
-	int			create_type;
-
 	struct list_head	qps_list;
 	struct list_head	cq_recv_list;
 	struct list_head	cq_send_list;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 0f7a8f45764f..007163753f6f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -897,7 +897,6 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		goto err_umem;
 	}
 
-	rwq->create_type = MLX5_WQ_USER;
 	return 0;
 
 err_umem:
@@ -1022,7 +1021,6 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		mlx5_ib_dbg(dev, "copy failed\n");
 		goto err_unmap;
 	}
-	qp->create_type = MLX5_QP_USER;
 
 	return 0;
 
@@ -1187,7 +1185,6 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 		err = -ENOMEM;
 		goto err_wrid;
 	}
-	qp->create_type = MLX5_QP_KERNEL;
 
 	return 0;
 
@@ -1214,8 +1211,10 @@ static void destroy_qp_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
 	kvfree(qp->sq.wrid);
 	kvfree(qp->sq.wr_data);
 	kvfree(qp->rq.wrid);
-	mlx5_db_free(dev->mdev, &qp->db);
-	mlx5_frag_buf_free(dev->mdev, &qp->buf);
+	if (qp->db.db)
+		mlx5_db_free(dev->mdev, &qp->db);
+	if (qp->buf.frags)
+		mlx5_frag_buf_free(dev->mdev, &qp->buf);
 }
 
 static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
@@ -2000,8 +1999,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		in = kvzalloc(inlen, GFP_KERNEL);
 		if (!in)
 			return -ENOMEM;
-
-		qp->create_type = MLX5_QP_EMPTY;
 	}
 
 	if (is_sqp(init_attr->qp_type))
@@ -2155,9 +2152,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 
 err_create:
-	if (qp->create_type == MLX5_QP_USER)
+	if (udata)
 		destroy_qp_user(dev, pd, qp, base, udata);
-	else if (qp->create_type == MLX5_QP_KERNEL)
+	else
 		destroy_qp_kernel(dev, qp);
 
 err:
@@ -2311,7 +2308,7 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	if (recv_cq)
 		list_del(&qp->cq_recv_list);
 
-	if (qp->create_type == MLX5_QP_KERNEL) {
+	if (!udata) {
 		__mlx5_ib_cq_clean(recv_cq, base->mqp.qpn,
 				   qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL);
 		if (send_cq != recv_cq)
@@ -2331,10 +2328,10 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				     base->mqp.qpn);
 	}
 
-	if (qp->create_type == MLX5_QP_KERNEL)
-		destroy_qp_kernel(dev, qp);
-	else if (qp->create_type == MLX5_QP_USER)
+	if (udata)
 		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
+	else
+		destroy_qp_kernel(dev, qp);
 }
 
 static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-- 
2.25.3



* [PATCH rdma-next 07/18] RDMA/mlx5: Delete impossible inlen check
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (5 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 06/18] RDMA/mlx5: Rely on existence of udata to separate kernel/user flows Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 08/18] RDMA/mlx5: Globally parse DEVX UID Leon Romanovsky
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The inlen is set above zero in all preceding flows, so it can't be
negative at this stage.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 007163753f6f..9c9c9e3c793f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2107,11 +2107,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		qp->flags &= ~IB_QP_CREATE_PCI_WRITE_END_PADDING;
 	}
 
-	if (inlen < 0) {
-		err = -EINVAL;
-		goto err;
-	}
-
 	if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
 	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd->sq_buf_addr;
@@ -2156,8 +2151,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		destroy_qp_user(dev, pd, qp, base, udata);
 	else
 		destroy_qp_kernel(dev, qp);
-
-err:
 	kvfree(in);
 	return err;
 }
-- 
2.25.3



* [PATCH rdma-next 08/18] RDMA/mlx5: Globally parse DEVX UID
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (6 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 07/18] RDMA/mlx5: Delete impossible inlen check Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 09/18] RDMA/mlx5: Separate XRC_TGT QP creation from common flow Leon Romanovsky
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Remove the duplicated parsing of the DEVX UID.
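
The UID is now resolved once in mlx5_ib_create_qp() and handed down as
a plain u32 (condensed from the hunks below):

	u32 uidx = MLX5_IB_DEFAULT_UIDX;

	if (udata) {
		err = get_qp_uidx(qp, udata, ucmd, init_attr, &uidx);
		if (err)
			goto free_qp;
	}

	/* creators take uidx directly; create_dct() drops its udata arg */
	err = create_dct(pd, qp, init_attr, ucmd, uidx);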

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 51 +++++++++++++++++----------------
 1 file changed, 27 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9c9c9e3c793f..f34fb8734834 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1916,18 +1916,16 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct mlx5_ib_create_qp *ucmd,
-			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
+			    struct ib_udata *udata, struct mlx5_ib_qp *qp,
+			    u32 uidx)
 {
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
 	struct mlx5_core_dev *mdev = dev->mdev;
 	struct mlx5_ib_create_qp_resp resp = {};
-	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
-		udata, struct mlx5_ib_ucontext, ibucontext);
 	struct mlx5_ib_cq *send_cq;
 	struct mlx5_ib_cq *recv_cq;
 	unsigned long flags;
-	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 	struct mlx5_ib_qp_base *base;
 	int mlx5_st;
 	void *qpc;
@@ -1945,12 +1943,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
 		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
 
-	if (udata) {
-		err = get_qp_user_index(ucontext, ucmd, udata->inlen, &uidx);
-		if (err)
-			return err;
-	}
-
 	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
 		qp->underlay_qpn = init_attr->source_qpn;
 
@@ -2329,18 +2321,10 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 
 static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		      struct ib_qp_init_attr *attr,
-		      struct mlx5_ib_create_qp *ucmd, struct ib_udata *udata)
+		      struct mlx5_ib_create_qp *ucmd, u32 uidx)
 {
-	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
-		udata, struct mlx5_ib_ucontext, ibucontext);
-	int err = 0;
-	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 	void *dctc;
 
-	err = get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), &uidx);
-	if (err)
-		return err;
-
 	qp->dct.in = kzalloc(MLX5_ST_SZ_BYTES(create_dct_in), GFP_KERNEL);
 	if (!qp->dct.in)
 		return -ENOMEM;
@@ -2651,14 +2635,14 @@ static size_t process_udata_size(struct ib_qp_init_attr *attr,
 
 static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 			 struct ib_qp_init_attr *attr, void *ucmd,
-			 struct ib_udata *udata)
+			 struct ib_udata *udata, u32 uidx)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 
 	if (attr->rwq_ind_tbl)
 		return create_rss_raw_qp_tir(pd, qp, attr, ucmd, udata);
 
-	return create_qp_common(dev, pd, attr, ucmd, udata, qp);
+	return create_qp_common(dev, pd, attr, ucmd, udata, qp, uidx);
 }
 
 static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
@@ -2688,10 +2672,24 @@ static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return ret;
 }
 
+static int get_qp_uidx(struct mlx5_ib_qp *qp, struct ib_udata *udata,
+		       struct mlx5_ib_create_qp *ucmd,
+		       struct ib_qp_init_attr *attr, u32 *uidx)
+{
+	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
+		udata, struct mlx5_ib_ucontext, ibucontext);
+
+	if (attr->rwq_ind_tbl)
+		return 0;
+
+	return get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), uidx);
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata)
 {
+	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
 	enum ib_qp_type type;
@@ -2743,6 +2741,10 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		err = process_vendor_flags(dev, qp, ucmd, init_attr);
 		if (err)
 			goto free_qp;
+
+		err = get_qp_uidx(qp, udata, ucmd, init_attr, &uidx);
+		if (err)
+			goto free_qp;
 	}
 	err = process_create_flags(dev, qp, init_attr);
 	if (err)
@@ -2757,13 +2759,14 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 
 	switch (qp->type) {
 	case IB_QPT_RAW_PACKET:
-		err = create_raw_qp(pd, qp, init_attr, ucmd, udata);
+		err = create_raw_qp(pd, qp, init_attr, ucmd, udata, uidx);
 		break;
 	case MLX5_IB_QPT_DCT:
-		err = create_dct(pd, qp, init_attr, ucmd, udata);
+		err = create_dct(pd, qp, init_attr, ucmd, uidx);
 		break;
 	default:
-		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp);
+		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp,
+				       uidx);
 	}
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp_common failed\n");
-- 
2.25.3



* [PATCH rdma-next 09/18] RDMA/mlx5: Separate XRC_TGT QP creation from common flow
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (7 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 08/18] RDMA/mlx5: Globally parse DEVX UID Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 10/18] RDMA/mlx5: Separate to user/kernel create QP flows Leon Romanovsky
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

An XRC_TGT QP doesn't fall into the kernel or user flow separation. It
is initiated by the user, but is created through the in-kernel verbs
flow and, like kernel QPs, has neither a PD nor udata.

So let's separate the creation of that QP type from the common flow.
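
After this patch the top-level dispatch reads roughly as follows
(condensed from the hunk below):

	switch (qp->type) {
	case IB_QPT_RAW_PACKET:
		err = create_raw_qp(pd, qp, init_attr, ucmd, udata, uidx);
		break;
	case MLX5_IB_QPT_DCT:
		err = create_dct(pd, qp, init_attr, ucmd, uidx);
		break;
	case IB_QPT_XRC_TGT:
		/* no PD here; the XRCD supplies the domain */
		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
		err = create_xrc_tgt_qp(dev, init_attr, qp, udata, uidx);
		break;
	default:
		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp,
				       uidx);
	}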

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 158 +++++++++++++++++++++-----------
 1 file changed, 106 insertions(+), 52 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index f34fb8734834..e00a51d3e17e 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -991,8 +991,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		goto err_umem;
 	}
 
-	uid = (attr->qp_type != IB_QPT_XRC_TGT &&
-	       attr->qp_type != IB_QPT_XRC_INI) ? to_mpd(pd)->uid : 0;
+	uid = (attr->qp_type != IB_QPT_XRC_INI) ? to_mpd(pd)->uid : 0;
 	MLX5_SET(create_qp_in, *in, uid, uid);
 	pas = (__be64 *)MLX5_ADDR_OF(create_qp_in, *in, pas);
 	if (ubuffer->umem)
@@ -1913,6 +1912,81 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
 	return atomic_mode;
 }
 
+static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev,
+			     struct ib_qp_init_attr *attr,
+			     struct mlx5_ib_qp *qp, struct ib_udata *udata,
+			     u32 uidx)
+{
+	struct mlx5_ib_resources *devr = &dev->devr;
+	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+	struct mlx5_core_dev *mdev = dev->mdev;
+	struct mlx5_ib_qp_base *base;
+	unsigned long flags;
+	void *qpc;
+	u32 *in;
+	int err;
+
+	mutex_init(&qp->mutex);
+
+	if (attr->sq_sig_type == IB_SIGNAL_ALL_WR)
+		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
+
+	in = kvzalloc(inlen, GFP_KERNEL);
+	if (!in)
+		return -ENOMEM;
+
+	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+
+	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_XRC);
+	MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
+	MLX5_SET(qpc, qpc, pd, to_mpd(devr->p0)->pdn);
+
+	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
+		MLX5_SET(qpc, qpc, block_lb_mc, 1);
+	if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
+		MLX5_SET(qpc, qpc, cd_master, 1);
+	if (qp->flags & IB_QP_CREATE_MANAGED_SEND)
+		MLX5_SET(qpc, qpc, cd_slave_send, 1);
+	if (qp->flags & IB_QP_CREATE_MANAGED_RECV)
+		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
+
+	MLX5_SET(qpc, qpc, rq_type, MLX5_SRQ_RQ);
+	MLX5_SET(qpc, qpc, no_sq, 1);
+	MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
+	MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
+	MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
+	MLX5_SET(qpc, qpc, xrcd, to_mxrcd(attr->xrcd)->xrcdn);
+	MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
+
+	/* 0xffffff means we ask to work with cqe version 0 */
+	if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
+		MLX5_SET(qpc, qpc, user_index, uidx);
+
+	if (qp->flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
+		MLX5_SET(qpc, qpc, end_padding_mode,
+			 MLX5_WQ_END_PAD_MODE_ALIGN);
+		/* Special case to clean flag */
+		qp->flags &= ~IB_QP_CREATE_PCI_WRITE_END_PADDING;
+	}
+
+	base = &qp->trans_qp.base;
+	err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
+	kvfree(in);
+	if (err) {
+		destroy_qp_user(dev, NULL, qp, base, udata);
+		return err;
+	}
+
+	base->container_mibqp = qp;
+	base->mqp.event = mlx5_ib_qp_event;
+
+	spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
+	list_add_tail(&qp->qps_list, &dev->qp_list);
+	spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
+
+	return 0;
+}
+
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct mlx5_ib_create_qp *ucmd,
@@ -1958,40 +2032,30 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		return err;
 	}
 
-	if (pd) {
-		if (udata) {
-			__u32 max_wqes =
-				1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
-			mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n",
-				    ucmd->sq_wqe_count);
-			if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
-			    ucmd->rq_wqe_count != qp->rq.wqe_cnt) {
-				mlx5_ib_dbg(dev, "invalid rq params\n");
-				return -EINVAL;
-			}
-			if (ucmd->sq_wqe_count > max_wqes) {
-				mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
-					    ucmd->sq_wqe_count, max_wqes);
-				return -EINVAL;
-			}
-			err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
-					     &resp, &inlen, base, ucmd);
-			if (err)
-				mlx5_ib_dbg(dev, "err %d\n", err);
-		} else {
-			err = create_kernel_qp(dev, init_attr, qp, &in, &inlen,
-					       base);
-			if (err)
-				mlx5_ib_dbg(dev, "err %d\n", err);
+	if (udata) {
+		__u32 max_wqes = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
+
+		mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n",
+			    ucmd->sq_wqe_count);
+		if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
+		    ucmd->rq_wqe_count != qp->rq.wqe_cnt) {
+			mlx5_ib_dbg(dev, "invalid rq params\n");
+			return -EINVAL;
+		}
+		if (ucmd->sq_wqe_count > max_wqes) {
+			mlx5_ib_dbg(
+				dev,
+				"requested sq_wqe_count (%d) > max allowed (%d)\n",
+				ucmd->sq_wqe_count, max_wqes);
+			return -EINVAL;
 		}
+		err = create_user_qp(dev, pd, qp, udata, init_attr, &in, &resp,
+				     &inlen, base, ucmd);
+	} else
+		err = create_kernel_qp(dev, init_attr, qp, &in, &inlen, base);
 
-		if (err)
-			return err;
-	} else {
-		in = kvzalloc(inlen, GFP_KERNEL);
-		if (!in)
-			return -ENOMEM;
-	}
+	if (err)
+		return err;
 
 	if (is_sqp(init_attr->qp_type))
 		qp->port = init_attr->port_num;
@@ -2054,12 +2118,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	/* Set default resources */
 	switch (init_attr->qp_type) {
-	case IB_QPT_XRC_TGT:
-		MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
-		MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
-		MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
-		MLX5_SET(qpc, qpc, xrcd, to_mxrcd(init_attr->xrcd)->xrcdn);
-		break;
 	case IB_QPT_XRC_INI:
 		MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
 		MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
@@ -2105,16 +2163,12 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
 		err = create_raw_packet_qp(dev, qp, in, inlen, pd, udata,
 					   &resp);
-	} else {
+	} else
 		err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
-	}
-
-	if (err) {
-		mlx5_ib_dbg(dev, "create qp failed\n");
-		goto err_create;
-	}
 
 	kvfree(in);
+	if (err)
+		goto err_create;
 
 	base->container_mibqp = qp;
 	base->mqp.event = mlx5_ib_qp_event;
@@ -2143,7 +2197,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		destroy_qp_user(dev, pd, qp, base, udata);
 	else
 		destroy_qp_kernel(dev, qp);
-	kvfree(in);
 	return err;
 }
 
@@ -2750,9 +2803,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (err)
 		goto free_qp;
 
-	if (qp->type == IB_QPT_XRC_TGT)
-		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
-
 	err = check_qp_attr(dev, qp, init_attr);
 	if (err)
 		goto free_qp;
@@ -2764,12 +2814,16 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	case MLX5_IB_QPT_DCT:
 		err = create_dct(pd, qp, init_attr, ucmd, uidx);
 		break;
+	case IB_QPT_XRC_TGT:
+		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
+		err = create_xrc_tgt_qp(dev, init_attr, qp, udata, uidx);
+		break;
 	default:
 		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp,
 				       uidx);
 	}
 	if (err) {
-		mlx5_ib_dbg(dev, "create_qp_common failed\n");
+		mlx5_ib_dbg(dev, "create_qp failed %d\n", err);
 		goto free_qp;
 	}
 
-- 
2.25.3



* [PATCH rdma-next 10/18] RDMA/mlx5: Separate to user/kernel create QP flows
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (8 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 09/18] RDMA/mlx5: Separate XRC_TGT QP creation from common flow Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 11/18] RDMA/mlx5: Reduce amount of duplication in QP destroy Leon Romanovsky
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The kernel and user create QP flows have very little code in common;
separate them to simplify the future work of creating per-type
create_*_qp() functions.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 205 ++++++++++++++++++++++++--------
 1 file changed, 156 insertions(+), 49 deletions(-)
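
For reference, a condensed sketch of the split after this patch (names
taken from the diff below, error handling elided):

        /* mlx5_ib_create_qp() now selects the flow up front */
        if (udata)
                err = create_user_qp(dev, pd, init_attr, ucmd, udata,
                                     qp, uidx);
        else
                err = create_kernel_qp(dev, pd, init_attr, qp, uidx);

Each wrapper owns its own QPC setup and calls the renamed
_create_user_qp()/_create_kernel_qp() helper only for the WQ buffer
and doorbell setup.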

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index e00a51d3e17e..f667abf3f461 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -911,12 +911,12 @@ static int adjust_bfregn(struct mlx5_ib_dev *dev,
 				bfregn % MLX5_NON_FP_BFREGS_PER_UAR;
 }
 
-static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			  struct mlx5_ib_qp *qp, struct ib_udata *udata,
-			  struct ib_qp_init_attr *attr, u32 **in,
-			  struct mlx5_ib_create_qp_resp *resp, int *inlen,
-			  struct mlx5_ib_qp_base *base,
-			  struct mlx5_ib_create_qp *ucmd)
+static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+			   struct mlx5_ib_qp *qp, struct ib_udata *udata,
+			   struct ib_qp_init_attr *attr, u32 **in,
+			   struct mlx5_ib_create_qp_resp *resp, int *inlen,
+			   struct mlx5_ib_qp_base *base,
+			   struct mlx5_ib_create_qp *ucmd)
 {
 	struct mlx5_ib_ucontext *context;
 	struct mlx5_ib_ubuffer *ubuffer = &base->ubuffer;
@@ -1083,11 +1083,10 @@ static void *get_sq_edge(struct mlx5_ib_wq *sq, u32 idx)
 	return fragment_end + MLX5_SEND_WQE_BB;
 }
 
-static int create_kernel_qp(struct mlx5_ib_dev *dev,
-			    struct ib_qp_init_attr *init_attr,
-			    struct mlx5_ib_qp *qp,
-			    u32 **in, int *inlen,
-			    struct mlx5_ib_qp_base *base)
+static int _create_kernel_qp(struct mlx5_ib_dev *dev,
+			     struct ib_qp_init_attr *init_attr,
+			     struct mlx5_ib_qp *qp, u32 **in, int *inlen,
+			     struct mlx5_ib_qp_base *base)
 {
 	int uar_index;
 	void *qpc;
@@ -1987,11 +1986,11 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev,
 	return 0;
 }
 
-static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			    struct ib_qp_init_attr *init_attr,
-			    struct mlx5_ib_create_qp *ucmd,
-			    struct ib_udata *udata, struct mlx5_ib_qp *qp,
-			    u32 uidx)
+static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+			  struct ib_qp_init_attr *init_attr,
+			  struct mlx5_ib_create_qp *ucmd,
+			  struct ib_udata *udata, struct mlx5_ib_qp *qp,
+			  u32 uidx)
 {
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
@@ -2032,28 +2031,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		return err;
 	}
 
-	if (udata) {
-		__u32 max_wqes = 1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
+	if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
+	    ucmd->rq_wqe_count != qp->rq.wqe_cnt)
+		return -EINVAL;
 
-		mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n",
-			    ucmd->sq_wqe_count);
-		if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
-		    ucmd->rq_wqe_count != qp->rq.wqe_cnt) {
-			mlx5_ib_dbg(dev, "invalid rq params\n");
-			return -EINVAL;
-		}
-		if (ucmd->sq_wqe_count > max_wqes) {
-			mlx5_ib_dbg(
-				dev,
-				"requested sq_wqe_count (%d) > max allowed (%d)\n",
-				ucmd->sq_wqe_count, max_wqes);
-			return -EINVAL;
-		}
-		err = create_user_qp(dev, pd, qp, udata, init_attr, &in, &resp,
-				     &inlen, base, ucmd);
-	} else
-		err = create_kernel_qp(dev, init_attr, qp, &in, &inlen, base);
+	if (ucmd->sq_wqe_count > (1 << MLX5_CAP_GEN(mdev, log_max_qp_sz)))
+		return -EINVAL;
 
+	err = _create_user_qp(dev, pd, qp, udata, init_attr, &in, &resp, &inlen,
+			      base, ucmd);
 	if (err)
 		return err;
 
@@ -2064,12 +2050,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	MLX5_SET(qpc, qpc, st, mlx5_st);
 	MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
-
-	if (init_attr->qp_type != MLX5_IB_QPT_REG_UMR)
-		MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
-	else
-		MLX5_SET(qpc, qpc, latency_sensitive, 1);
-
+	MLX5_SET(qpc, qpc, pd, to_mpd(pd)->pdn);
 
 	if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE)
 		MLX5_SET(qpc, qpc, wq_signature, 1);
@@ -2145,10 +2126,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
 		MLX5_SET(qpc, qpc, user_index, uidx);
 
-	/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
-	if (qp->flags & IB_QP_CREATE_IPOIB_UD_LSO)
-		MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
-
 	if (qp->flags & IB_QP_CREATE_PCI_WRITE_END_PADDING &&
 	    init_attr->qp_type != IB_QPT_RAW_PACKET) {
 		MLX5_SET(qpc, qpc, end_padding_mode,
@@ -2200,6 +2177,133 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return err;
 }
 
+static int create_kernel_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+			    struct ib_qp_init_attr *attr, struct mlx5_ib_qp *qp,
+			    u32 uidx)
+{
+	struct mlx5_ib_resources *devr = &dev->devr;
+	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+	struct mlx5_core_dev *mdev = dev->mdev;
+	struct mlx5_ib_cq *send_cq;
+	struct mlx5_ib_cq *recv_cq;
+	unsigned long flags;
+	struct mlx5_ib_qp_base *base;
+	int mlx5_st;
+	void *qpc;
+	u32 *in;
+	int err;
+
+	mutex_init(&qp->mutex);
+	spin_lock_init(&qp->sq.lock);
+	spin_lock_init(&qp->rq.lock);
+
+	mlx5_st = to_mlx5_st(qp->type);
+	if (mlx5_st < 0)
+		return -EINVAL;
+
+	if (attr->sq_sig_type == IB_SIGNAL_ALL_WR)
+		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
+
+	base = &qp->trans_qp.base;
+
+	qp->has_rq = qp_has_rq(attr);
+	err = set_rq_size(dev, &attr->cap, qp->has_rq, qp, NULL);
+	if (err) {
+		mlx5_ib_dbg(dev, "err %d\n", err);
+		return err;
+	}
+
+	err = _create_kernel_qp(dev, attr, qp, &in, &inlen, base);
+	if (err)
+		return err;
+
+	if (is_sqp(attr->qp_type))
+		qp->port = attr->port_num;
+
+	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+
+	MLX5_SET(qpc, qpc, st, mlx5_st);
+	MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
+
+	if (attr->qp_type != MLX5_IB_QPT_REG_UMR)
+		MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
+	else
+		MLX5_SET(qpc, qpc, latency_sensitive, 1);
+
+
+	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
+		MLX5_SET(qpc, qpc, block_lb_mc, 1);
+
+	if (qp->rq.wqe_cnt) {
+		MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
+		MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
+	}
+
+	MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, attr));
+
+	if (qp->sq.wqe_cnt)
+		MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
+	else
+		MLX5_SET(qpc, qpc, no_sq, 1);
+
+	if (attr->srq) {
+		MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x0)->xrcdn);
+		MLX5_SET(qpc, qpc, srqn_rmpn_xrqn,
+			 to_msrq(attr->srq)->msrq.srqn);
+	} else {
+		MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
+		MLX5_SET(qpc, qpc, srqn_rmpn_xrqn,
+			 to_msrq(devr->s1)->msrq.srqn);
+	}
+
+	if (attr->send_cq)
+		MLX5_SET(qpc, qpc, cqn_snd, to_mcq(attr->send_cq)->mcq.cqn);
+
+	if (attr->recv_cq)
+		MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(attr->recv_cq)->mcq.cqn);
+
+	MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
+
+	/* 0xffffff means we ask to work with cqe version 0 */
+	if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
+		MLX5_SET(qpc, qpc, user_index, uidx);
+
+	/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
+	if (qp->flags & IB_QP_CREATE_IPOIB_UD_LSO)
+		MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
+
+	err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
+	kvfree(in);
+	if (err)
+		goto err_create;
+
+	base->container_mibqp = qp;
+	base->mqp.event = mlx5_ib_qp_event;
+
+	get_cqs(qp->type, attr->send_cq, attr->recv_cq,
+		&send_cq, &recv_cq);
+	spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
+	mlx5_ib_lock_cqs(send_cq, recv_cq);
+	/* Maintain device to QPs access, needed for further handling via reset
+	 * flow
+	 */
+	list_add_tail(&qp->qps_list, &dev->qp_list);
+	/* Maintain CQ to QPs access, needed for further handling via reset flow
+	 */
+	if (send_cq)
+		list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
+	if (recv_cq)
+		list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
+	mlx5_ib_unlock_cqs(send_cq, recv_cq);
+	spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
+
+	return 0;
+
+err_create:
+	destroy_qp_kernel(dev, qp);
+	return err;
+}
+
 static void mlx5_ib_lock_cqs(struct mlx5_ib_cq *send_cq, struct mlx5_ib_cq *recv_cq)
 	__acquires(&send_cq->lock) __acquires(&recv_cq->lock)
 {
@@ -2695,7 +2799,7 @@ static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	if (attr->rwq_ind_tbl)
 		return create_rss_raw_qp_tir(pd, qp, attr, ucmd, udata);
 
-	return create_qp_common(dev, pd, attr, ucmd, udata, qp, uidx);
+	return create_user_qp(dev, pd, attr, ucmd, udata, qp, uidx);
 }
 
 static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
@@ -2819,8 +2923,11 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		err = create_xrc_tgt_qp(dev, init_attr, qp, udata, uidx);
 		break;
 	default:
-		err = create_qp_common(dev, pd, init_attr, ucmd, udata, qp,
-				       uidx);
+		if (udata)
+			err = create_user_qp(dev, pd, init_attr, ucmd, udata,
+					     qp, uidx);
+		else
+			err = create_kernel_qp(dev, pd, init_attr, qp, uidx);
 	}
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp failed %d\n", err);
-- 
2.25.3



* [PATCH rdma-next 11/18] RDMA/mlx5: Reduce amount of duplication in QP destroy
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (9 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 10/18] RDMA/mlx5: Separate to user/kernel create QP flows Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 12/18] RDMA/mlx5: Group all create QP parameters to simplify in-kernel interfaces Leon Romanovsky
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Delete both the PD argument and the checks of whether udata was
provided, in favour of a unified destroy QP function.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 70 +++++++++++++++------------------
 1 file changed, 31 insertions(+), 39 deletions(-)
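
The effect at the call sites, condensed from the diff below; the udata
pointer itself now selects the user or kernel teardown:

        destroy_qp(dev, qp, base, udata);  /* user QP when udata != NULL */
        destroy_qp(dev, qp, base, NULL);   /* kernel QP */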

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index f667abf3f461..78c34737ae2f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1038,25 +1038,36 @@ static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return err;
 }
 
-static void destroy_qp_user(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			    struct mlx5_ib_qp *qp, struct mlx5_ib_qp_base *base,
-			    struct ib_udata *udata)
+static void destroy_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+		       struct mlx5_ib_qp_base *base, struct ib_udata *udata)
 {
-	struct mlx5_ib_ucontext *context =
-		rdma_udata_to_drv_context(
-			udata,
-			struct mlx5_ib_ucontext,
-			ibucontext);
+	struct mlx5_ib_ucontext *context = rdma_udata_to_drv_context(
+		udata, struct mlx5_ib_ucontext, ibucontext);
 
-	mlx5_ib_db_unmap_user(context, &qp->db);
-	ib_umem_release(base->ubuffer.umem);
+	if (udata) {
+		/* User QP */
+		mlx5_ib_db_unmap_user(context, &qp->db);
+		ib_umem_release(base->ubuffer.umem);
 
-	/*
-	 * Free only the BFREGs which are handled by the kernel.
-	 * BFREGs of UARs allocated dynamically are handled by user.
-	 */
-	if (qp->bfregn != MLX5_IB_INVALID_BFREG)
-		mlx5_ib_free_bfreg(dev, &context->bfregi, qp->bfregn);
+		/*
+		 * Free only the BFREGs which are handled by the kernel.
+		 * BFREGs of UARs allocated dynamically are handled by user.
+		 */
+		if (qp->bfregn != MLX5_IB_INVALID_BFREG)
+			mlx5_ib_free_bfreg(dev, &context->bfregi, qp->bfregn);
+		return;
+	}
+
+	/* Kernel QP */
+	kvfree(qp->sq.wqe_head);
+	kvfree(qp->sq.w_list);
+	kvfree(qp->sq.wrid);
+	kvfree(qp->sq.wr_data);
+	kvfree(qp->rq.wrid);
+	if (qp->db.db)
+		mlx5_db_free(dev->mdev, &qp->db);
+	if (qp->buf.frags)
+		mlx5_frag_buf_free(dev->mdev, &qp->buf);
 }
 
 /* get_sq_edge - Get the next nearby edge.
@@ -1202,19 +1213,6 @@ static int _create_kernel_qp(struct mlx5_ib_dev *dev,
 	return err;
 }
 
-static void destroy_qp_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
-{
-	kvfree(qp->sq.wqe_head);
-	kvfree(qp->sq.w_list);
-	kvfree(qp->sq.wrid);
-	kvfree(qp->sq.wr_data);
-	kvfree(qp->rq.wrid);
-	if (qp->db.db)
-		mlx5_db_free(dev->mdev, &qp->db);
-	if (qp->buf.frags)
-		mlx5_frag_buf_free(dev->mdev, &qp->buf);
-}
-
 static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 {
 	if (attr->srq || (qp->type == IB_QPT_XRC_TGT) ||
@@ -1972,7 +1970,7 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev,
 	err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
 	kvfree(in);
 	if (err) {
-		destroy_qp_user(dev, NULL, qp, base, udata);
+		destroy_qp(dev, qp, base, udata);
 		return err;
 	}
 
@@ -2170,10 +2168,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 
 err_create:
-	if (udata)
-		destroy_qp_user(dev, pd, qp, base, udata);
-	else
-		destroy_qp_kernel(dev, qp);
+	destroy_qp(dev, qp, base, udata);
 	return err;
 }
 
@@ -2300,7 +2295,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 
 err_create:
-	destroy_qp_kernel(dev, qp);
+	destroy_qp(dev, qp, base, NULL);
 	return err;
 }
 
@@ -2470,10 +2465,7 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				     base->mqp.qpn);
 	}
 
-	if (udata)
-		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
-	else
-		destroy_qp_kernel(dev, qp);
+	destroy_qp(dev, qp, base, udata);
 }
 
 static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-- 
2.25.3



* [PATCH rdma-next 12/18] RDMA/mlx5: Group all create QP parameters to simplify in-kernel interfaces
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (10 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 11/18] RDMA/mlx5: Reduce amount of duplication in QP destroy Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 13/18] RDMA/mlx5: Promote RSS RAW QP flags check to higher level Leon Romanovsky
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The number of parameters passed in and out between the internal mlx5
create QP functions is too large to follow the flow easily. Fix this
by grouping all create QP parameters into one structure.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 148 +++++++++++++++++---------------
 1 file changed, 81 insertions(+), 67 deletions(-)
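
The new structure and a sample call site, condensed from the diff
below:

        struct mlx5_create_qp_params {
                struct ib_udata *udata;
                size_t inlen;
                void *ucmd;
                u8 is_rss_raw : 1;
                struct ib_qp_init_attr *attr;
                u32 uidx;
        };

        /* instead of passing attr, ucmd, udata and uidx separately: */
        err = create_user_qp(dev, pd, qp, &params);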

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 78c34737ae2f..310d9a4d5a30 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1610,14 +1610,24 @@ static void destroy_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *q
 			     to_mpd(qp->ibqp.pd)->uid);
 }
 
-static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-				 struct ib_qp_init_attr *init_attr,
-				 struct mlx5_ib_create_qp_rss *ucmd,
-				 struct ib_udata *udata)
+struct mlx5_create_qp_params {
+	struct ib_udata *udata;
+	size_t inlen;
+	void *ucmd;
+	u8 is_rss_raw : 1;
+	struct ib_qp_init_attr *attr;
+	u32 uidx;
+};
+
+static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+				 struct mlx5_ib_qp *qp,
+				 struct mlx5_create_qp_params *params)
 {
+	struct ib_qp_init_attr *init_attr = params->attr;
+	struct mlx5_ib_create_qp_rss *ucmd = params->ucmd;
+	struct ib_udata *udata = params->udata;
 	struct mlx5_ib_ucontext *mucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
-	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_ib_create_qp_resp resp = {};
 	int inlen;
 	int outlen;
@@ -1632,7 +1642,8 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	u32 tdn = mucontext->tdn;
 	u8 lb_flag = 0;
 
-	min_resp_len = offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
+	min_resp_len =
+		offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
 	if (udata->outlen < min_resp_len)
 		return -EINVAL;
 
@@ -1909,11 +1920,12 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
 	return atomic_mode;
 }
 
-static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev,
-			     struct ib_qp_init_attr *attr,
-			     struct mlx5_ib_qp *qp, struct ib_udata *udata,
-			     u32 uidx)
+static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+			     struct mlx5_create_qp_params *params)
 {
+	struct ib_qp_init_attr *attr = params->attr;
+	struct ib_udata *udata = params->udata;
+	u32 uidx = params->uidx;
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
 	struct mlx5_core_dev *mdev = dev->mdev;
@@ -1985,11 +1997,13 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev,
 }
 
 static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			  struct ib_qp_init_attr *init_attr,
-			  struct mlx5_ib_create_qp *ucmd,
-			  struct ib_udata *udata, struct mlx5_ib_qp *qp,
-			  u32 uidx)
+			  struct mlx5_ib_qp *qp,
+			  struct mlx5_create_qp_params *params)
 {
+	struct ib_qp_init_attr *init_attr = params->attr;
+	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+	struct ib_udata *udata = params->udata;
+	u32 uidx = params->uidx;
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
 	struct mlx5_core_dev *mdev = dev->mdev;
@@ -2173,9 +2187,11 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 }
 
 static int create_kernel_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			    struct ib_qp_init_attr *attr, struct mlx5_ib_qp *qp,
-			    u32 uidx)
+			    struct mlx5_ib_qp *qp,
+			    struct mlx5_create_qp_params *params)
 {
+	struct ib_qp_init_attr *attr = params->attr;
+	u32 uidx = params->uidx;
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
 	struct mlx5_core_dev *mdev = dev->mdev;
@@ -2469,9 +2485,11 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 }
 
 static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-		      struct ib_qp_init_attr *attr,
-		      struct mlx5_ib_create_qp *ucmd, u32 uidx)
+		      struct mlx5_create_qp_params *params)
 {
+	struct ib_qp_init_attr *attr = params->attr;
+	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+	u32 uidx = params->uidx;
 	void *dctc;
 
 	qp->dct.in = kzalloc(MLX5_ST_SZ_BYTES(create_dct_in), GFP_KERNEL);
@@ -2782,16 +2800,14 @@ static size_t process_udata_size(struct ib_qp_init_attr *attr,
 	return min(ucmd, inlen);
 }
 
-static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-			 struct ib_qp_init_attr *attr, void *ucmd,
-			 struct ib_udata *udata, u32 uidx)
+static int create_raw_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+			 struct mlx5_ib_qp *qp,
+			 struct mlx5_create_qp_params *params)
 {
-	struct mlx5_ib_dev *dev = to_mdev(pd->device);
+	if (params->is_rss_raw)
+		return create_rss_raw_qp_tir(dev, pd, qp, params);
 
-	if (attr->rwq_ind_tbl)
-		return create_rss_raw_qp_tir(pd, qp, attr, ucmd, udata);
-
-	return create_user_qp(dev, pd, attr, ucmd, udata, qp, uidx);
+	return create_user_qp(dev, pd, qp, params);
 }
 
 static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
@@ -2821,60 +2837,59 @@ static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return ret;
 }
 
-static int get_qp_uidx(struct mlx5_ib_qp *qp, struct ib_udata *udata,
-		       struct mlx5_ib_create_qp *ucmd,
-		       struct ib_qp_init_attr *attr, u32 *uidx)
+static int get_qp_uidx(struct mlx5_ib_qp *qp,
+		       struct mlx5_create_qp_params *params)
 {
+	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+	struct ib_udata *udata = params->udata;
 	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
 
-	if (attr->rwq_ind_tbl)
+	if (params->is_rss_raw)
 		return 0;
 
-	return get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), uidx);
+	return get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), &params->uidx);
 }
 
-struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
-				struct ib_qp_init_attr *init_attr,
+struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 				struct ib_udata *udata)
 {
-	u32 uidx = MLX5_IB_DEFAULT_UIDX;
+	struct mlx5_create_qp_params params = {};
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
 	enum ib_qp_type type;
-	void *ucmd = NULL;
 	u16 xrcdn = 0;
 	int err;
 
 	dev = pd ? to_mdev(pd->device) :
-		   to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
+		   to_mdev(to_mxrcd(attr->xrcd)->ibxrcd.device);
 
-	err = check_qp_type(dev, init_attr, &type);
-	if (err) {
-		mlx5_ib_dbg(dev, "Unsupported QP type %d\n",
-			    init_attr->qp_type);
+	err = check_qp_type(dev, attr, &type);
+	if (err)
 		return ERR_PTR(err);
-	}
 
-	err = check_valid_flow(dev, pd, init_attr, udata);
+	err = check_valid_flow(dev, pd, attr, udata);
 	if (err)
 		return ERR_PTR(err);
 
-	if (init_attr->qp_type == IB_QPT_GSI)
-		return mlx5_ib_gsi_create_qp(pd, init_attr);
+	if (attr->qp_type == IB_QPT_GSI)
+		return mlx5_ib_gsi_create_qp(pd, attr);
 
-	if (udata) {
-		size_t inlen =
-			process_udata_size(init_attr, udata);
+	params.udata = udata;
+	params.uidx = MLX5_IB_DEFAULT_UIDX;
+	params.attr = attr;
+	params.is_rss_raw = !!attr->rwq_ind_tbl;
 
-		if (!inlen)
+	if (udata) {
+		params.inlen = process_udata_size(attr, udata);
+		if (!params.inlen)
 			return ERR_PTR(-EINVAL);
 
-		ucmd = kzalloc(inlen, GFP_KERNEL);
-		if (!ucmd)
+		params.ucmd = kzalloc(params.inlen, GFP_KERNEL);
+		if (!params.ucmd)
 			return ERR_PTR(-ENOMEM);
 
-		err = ib_copy_from_udata(ucmd, udata, inlen);
+		err = ib_copy_from_udata(params.ucmd, udata, params.inlen);
 		if (err)
 			goto free_ucmd;
 	}
@@ -2887,50 +2902,49 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 
 	qp->type = type;
 	if (udata) {
-		err = process_vendor_flags(dev, qp, ucmd, init_attr);
+		err = process_vendor_flags(dev, qp, params.ucmd, attr);
 		if (err)
 			goto free_qp;
 
-		err = get_qp_uidx(qp, udata, ucmd, init_attr, &uidx);
+		err = get_qp_uidx(qp, &params);
 		if (err)
 			goto free_qp;
 	}
-	err = process_create_flags(dev, qp, init_attr);
+	err = process_create_flags(dev, qp, attr);
 	if (err)
 		goto free_qp;
 
-	err = check_qp_attr(dev, qp, init_attr);
+	err = check_qp_attr(dev, qp, attr);
 	if (err)
 		goto free_qp;
 
 	switch (qp->type) {
 	case IB_QPT_RAW_PACKET:
-		err = create_raw_qp(pd, qp, init_attr, ucmd, udata, uidx);
+		err = create_raw_qp(dev, pd, qp, &params);
 		break;
 	case MLX5_IB_QPT_DCT:
-		err = create_dct(pd, qp, init_attr, ucmd, uidx);
+		err = create_dct(pd, qp, &params);
 		break;
 	case IB_QPT_XRC_TGT:
-		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
-		err = create_xrc_tgt_qp(dev, init_attr, qp, udata, uidx);
+		xrcdn = to_mxrcd(attr->xrcd)->xrcdn;
+		err = create_xrc_tgt_qp(dev, qp, &params);
 		break;
 	default:
 		if (udata)
-			err = create_user_qp(dev, pd, init_attr, ucmd, udata,
-					     qp, uidx);
+			err = create_user_qp(dev, pd, qp, &params);
 		else
-			err = create_kernel_qp(dev, pd, init_attr, qp, uidx);
+			err = create_kernel_qp(dev, pd, qp, &params);
 	}
 	if (err) {
-		mlx5_ib_dbg(dev, "create_qp failed %d\n", err);
+		mlx5_ib_err(dev, "create_qp failed %d\n", err);
 		goto free_qp;
 	}
 
-	kfree(ucmd);
+	kfree(params.ucmd);
 
-	if (is_qp0(init_attr->qp_type))
+	if (is_qp0(attr->qp_type))
 		qp->ibqp.qp_num = 0;
-	else if (is_qp1(init_attr->qp_type))
+	else if (is_qp1(attr->qp_type))
 		qp->ibqp.qp_num = 1;
 	else
 		qp->ibqp.qp_num = qp->trans_qp.base.mqp.qpn;
@@ -2942,7 +2956,7 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 free_qp:
 	kfree(qp);
 free_ucmd:
-	kfree(ucmd);
+	kfree(params.ucmd);
 	return ERR_PTR(err);
 }
 
-- 
2.25.3



* [PATCH rdma-next 13/18] RDMA/mlx5: Promote RSS RAW QP flags check to higher level
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (11 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 12/18] RDMA/mlx5: Group all create QP parameters to simplify in-kernel interfaces Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:02 ` [PATCH rdma-next 14/18] RDMA/mlx5: Handle udata outlen checks in one place Leon Romanovsky
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Move the check that the user didn't supply unsupported RSS RAW QP
command flags into the function that checks all such flags.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 310d9a4d5a30..82de57ef78e7 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1652,13 +1652,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd->flags & ~(MLX5_QP_FLAG_TUNNEL_OFFLOADS |
-			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
-			   MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)) {
-		mlx5_ib_dbg(dev, "invalid flags\n");
-		return -EOPNOTSUPP;
-	}
-
 	if (ucmd->rx_hash_fields_mask & MLX5_RX_HASH_INNER &&
 	    !(ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)) {
 		mlx5_ib_dbg(dev, "Tunnel offloads must be set for inner RSS\n");
@@ -2687,11 +2680,20 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_BFREG_INDEX, true, qp);
 	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_UAR_PAGE_INDEX, true, qp);
 
+	cond = qp->flags_en & ~(MLX5_QP_FLAG_TUNNEL_OFFLOADS |
+				MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
+				MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC);
+	if (attr->rwq_ind_tbl && cond) {
+		mlx5_ib_dbg(dev, "RSS RAW QP has unsupported flags 0x%X\n",
+			    cond);
+		return -EINVAL;
+	}
+
 	if (flags)
 		mlx5_ib_dbg(dev, "udata has unsupported flags 0x%X\n", flags);
 
 	return (flags) ? -EINVAL : 0;
 }
 
 static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
 				bool cond, struct mlx5_ib_qp *qp)
-- 
2.25.3



* [PATCH rdma-next 14/18] RDMA/mlx5: Handle udata outlen checks in one place
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (12 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 13/18] RDMA/mlx5: Promote RSS RAW QP flags check to higher level Leon Romanovsky
@ 2020-04-23 19:02 ` Leon Romanovsky
  2020-04-23 19:03 ` [PATCH rdma-next 15/18] RDMA/mlx5: Copy response to the user " Leon Romanovsky
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:02 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Place all udata size checks in one function. This will allow us to
move ib_copy_to_udata() to a common place and ensure that it is
performed after the call to the FW.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 48 ++++++++++++++++++++-------------
 1 file changed, 30 insertions(+), 18 deletions(-)
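
The call site after this patch, condensed from the diff below:
process_udata_size() now fills both params.inlen and params.outlen, so
later code can rely on them.

        err = process_udata_size(dev, &params);
        if (err)
                return ERR_PTR(err);

        params.ucmd = kzalloc(params.inlen, GFP_KERNEL);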

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 82de57ef78e7..900e76d684fb 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1613,6 +1613,7 @@ static void destroy_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *q
 struct mlx5_create_qp_params {
 	struct ib_udata *udata;
 	size_t inlen;
+	size_t outlen;
 	void *ucmd;
 	u8 is_rss_raw : 1;
 	struct ib_qp_init_attr *attr;
@@ -1638,15 +1639,9 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	void *hfso;
 	u32 selected_fields = 0;
 	u32 outer_l4;
-	size_t min_resp_len;
 	u32 tdn = mucontext->tdn;
 	u8 lb_flag = 0;
 
-	min_resp_len =
-		offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
-	if (udata->outlen < min_resp_len)
-		return -EINVAL;
-
 	if (ucmd->comp_mask) {
 		mlx5_ib_dbg(dev, "invalid comp mask\n");
 		return -EOPNOTSUPP;
@@ -2780,26 +2775,43 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return (create_flags) ? -EINVAL : 0;
 }
 
-static size_t process_udata_size(struct ib_qp_init_attr *attr,
-				 struct ib_udata *udata)
+static int process_udata_size(struct mlx5_ib_dev *dev,
+			      struct mlx5_create_qp_params *params)
 {
 	size_t ucmd = sizeof(struct mlx5_ib_create_qp);
+	struct ib_qp_init_attr *attr = params->attr;
+	struct ib_udata *udata = params->udata;
+	size_t outlen = udata->outlen;
 	size_t inlen = udata->inlen;
 
-	if (attr->qp_type == IB_QPT_DRIVER)
-		return (inlen < ucmd) ? 0 : ucmd;
+	params->outlen = min(outlen, sizeof(struct mlx5_ib_create_qp_resp));
+	if (attr->qp_type == IB_QPT_DRIVER) {
+		params->inlen = (inlen < ucmd) ? 0 : ucmd;
+		goto out;
+	}
 
-	if (!attr->rwq_ind_tbl)
-		return ucmd;
+	if (!params->is_rss_raw) {
+		params->inlen = ucmd;
+		goto out;
+	}
 
+	/* RSS RAW QP */
 	if (inlen < offsetofend(struct mlx5_ib_create_qp_rss, flags))
-		return 0;
+		return -EINVAL;
+
+	if (outlen < offsetofend(struct mlx5_ib_create_qp_resp, bfreg_index))
+		return -EINVAL;
 
 	ucmd = sizeof(struct mlx5_ib_create_qp_rss);
 	if (inlen > ucmd && !ib_is_udata_cleared(udata, ucmd, inlen - ucmd))
-		return 0;
+		return -EINVAL;
+
+	params->inlen = min(ucmd, inlen);
+out:
+	if (!params->inlen)
+		mlx5_ib_dbg(dev, "udata is too small or not cleared\n");
 
-	return min(ucmd, inlen);
+	return (params->inlen) ? 0 : -EINVAL;
 }
 
 static int create_raw_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
@@ -2883,9 +2895,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 	params.is_rss_raw = !!attr->rwq_ind_tbl;
 
 	if (udata) {
-		params.inlen = process_udata_size(attr, udata);
-		if (!params.inlen)
-			return ERR_PTR(-EINVAL);
+		err = process_udata_size(dev, &params);
+		if (err)
+			return ERR_PTR(err);
 
 		params.ucmd = kzalloc(params.inlen, GFP_KERNEL);
 		if (!params.ucmd)
-- 
2.25.3



* [PATCH rdma-next 15/18] RDMA/mlx5: Copy response to the user in one place
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (13 preceding siblings ...)
  2020-04-23 19:02 ` [PATCH rdma-next 14/18] RDMA/mlx5: Handle udate outlen checks in one place Leon Romanovsky
@ 2020-04-23 19:03 ` Leon Romanovsky
  2020-04-23 19:03 ` [PATCH rdma-next 16/18] RDMA/mlx5: Remove redundant destroy QP call Leon Romanovsky
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:03 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Update all the create QP flows so that the response is copied
to the user in one place.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 113 +++++++++++++++-----------------
 1 file changed, 52 insertions(+), 61 deletions(-)
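
The per-type create functions now only fill params->resp; the single
copy at the end of mlx5_ib_create_qp(), condensed from the diff below:

        if (udata)
                err = ib_copy_to_udata(udata, &params.resp, params.outlen);
        if (err)
                goto destroy_qp;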

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 900e76d684fb..8e94a824e29f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1015,17 +1015,8 @@ static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		goto err_free;
 	}
 
-	err = ib_copy_to_udata(udata, resp, min(udata->outlen, sizeof(*resp)));
-	if (err) {
-		mlx5_ib_dbg(dev, "copy failed\n");
-		goto err_unmap;
-	}
-
 	return 0;
 
-err_unmap:
-	mlx5_ib_db_unmap_user(context, &qp->db);
-
 err_free:
 	kvfree(*in);
 
@@ -1551,14 +1542,8 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 
 	qp->trans_qp.base.mqp.qpn = qp->sq.wqe_cnt ? sq->base.mqp.qpn :
 						     rq->base.mqp.qpn;
-	err = ib_copy_to_udata(udata, resp, min(udata->outlen, sizeof(*resp)));
-	if (err)
-		goto err_destroy_tir;
-
 	return 0;
 
-err_destroy_tir:
-	destroy_raw_packet_qp_tir(dev, rq, qp->flags_en, pd);
 err_destroy_rq:
 	destroy_raw_packet_qp_rq(dev, rq);
 err_destroy_sq:
@@ -1618,6 +1603,7 @@ struct mlx5_create_qp_params {
 	u8 is_rss_raw : 1;
 	struct ib_qp_init_attr *attr;
 	u32 uidx;
+	struct mlx5_ib_create_qp_resp resp;
 };
 
 static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
@@ -1629,7 +1615,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	struct ib_udata *udata = params->udata;
 	struct mlx5_ib_ucontext *mucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
-	struct mlx5_ib_create_qp_resp resp = {};
 	int inlen;
 	int outlen;
 	int err;
@@ -1662,12 +1647,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (qp->flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)
 		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST;
 
-	err = ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)));
-	if (err) {
-		mlx5_ib_dbg(dev, "copy failed\n");
-		return -EINVAL;
-	}
-
 	inlen = MLX5_ST_SZ_BYTES(create_tir_in);
 	outlen = MLX5_ST_SZ_BYTES(create_tir_out);
 	in = kvzalloc(inlen + outlen, GFP_KERNEL);
@@ -1803,34 +1782,30 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		goto err;
 
 	if (mucontext->devx_uid) {
-		resp.comp_mask |= MLX5_IB_CREATE_QP_RESP_MASK_TIRN;
-		resp.tirn = qp->rss_qp.tirn;
+		params->resp.comp_mask |= MLX5_IB_CREATE_QP_RESP_MASK_TIRN;
+		params->resp.tirn = qp->rss_qp.tirn;
 		if (MLX5_CAP_FLOWTABLE_NIC_RX(dev->mdev, sw_owner)) {
-			resp.tir_icm_addr =
+			params->resp.tir_icm_addr =
 				MLX5_GET(create_tir_out, out, icm_address_31_0);
-			resp.tir_icm_addr |= (u64)MLX5_GET(create_tir_out, out,
-							   icm_address_39_32)
-					     << 32;
-			resp.tir_icm_addr |= (u64)MLX5_GET(create_tir_out, out,
-							   icm_address_63_40)
-					     << 40;
-			resp.comp_mask |=
+			params->resp.tir_icm_addr |=
+				(u64)MLX5_GET(create_tir_out, out,
+					      icm_address_39_32)
+				<< 32;
+			params->resp.tir_icm_addr |=
+				(u64)MLX5_GET(create_tir_out, out,
+					      icm_address_63_40)
+				<< 40;
+			params->resp.comp_mask |=
 				MLX5_IB_CREATE_QP_RESP_MASK_TIR_ICM_ADDR;
 		}
 	}
 
-	err = ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)));
-	if (err)
-		goto err_copy;
-
 	kvfree(in);
 	/* qpn is reserved for that QP */
 	qp->trans_qp.base.mqp.qpn = 0;
 	qp->is_rss = true;
 	return 0;
 
-err_copy:
-	mlx5_cmd_destroy_tir(dev->mdev, qp->rss_qp.tirn, mucontext->devx_uid);
 err:
 	kvfree(in);
 	return err;
@@ -1995,7 +1970,6 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
 	struct mlx5_core_dev *mdev = dev->mdev;
-	struct mlx5_ib_create_qp_resp resp = {};
 	struct mlx5_ib_cq *send_cq;
 	struct mlx5_ib_cq *recv_cq;
 	unsigned long flags;
@@ -2038,8 +2012,8 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (ucmd->sq_wqe_count > (1 << MLX5_CAP_GEN(mdev, log_max_qp_sz)))
 		return -EINVAL;
 
-	err = _create_user_qp(dev, pd, qp, udata, init_attr, &in, &resp, &inlen,
-			      base, ucmd);
+	err = _create_user_qp(dev, pd, qp, udata, init_attr, &in, &params->resp,
+			      &inlen, base, ucmd);
 	if (err)
 		return err;
 
@@ -2139,7 +2113,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd->sq_buf_addr;
 		raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
 		err = create_raw_packet_qp(dev, qp, in, inlen, pd, udata,
-					   &resp);
+					   &params->resp);
 	} else
 		err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
 
@@ -2865,6 +2839,25 @@ static int get_qp_uidx(struct mlx5_ib_qp *qp,
 	return get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), &params->uidx);
 }
 
+static int mlx5_ib_destroy_dct(struct mlx5_ib_qp *mqp)
+{
+	struct mlx5_ib_dev *dev = to_mdev(mqp->ibqp.device);
+
+	if (mqp->state == IB_QPS_RTR) {
+		int err;
+
+		err = mlx5_core_destroy_dct(dev, &mqp->dct.mdct);
+		if (err) {
+			mlx5_ib_warn(dev, "failed to destroy DCT %d\n", err);
+			return err;
+		}
+	}
+
+	kfree(mqp->dct.in);
+	kfree(mqp);
+	return 0;
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 				struct ib_udata *udata)
 {
@@ -2955,6 +2948,7 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 	}
 
 	kfree(params.ucmd);
+	params.ucmd = NULL;
 
 	if (is_qp0(attr->qp_type))
 		qp->ibqp.qp_num = 0;
@@ -2965,8 +2959,24 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 
 	qp->trans_qp.xrcdn = xrcdn;
 
+	if (udata)
+		/*
+		 * It is safe to copy response for all user create QP flows,
+		 * including MLX5_IB_QPT_DCT, which doesn't need it.
+		 * In that case, resp will be filled with zeros.
+		 */
+		err = ib_copy_to_udata(udata, &params.resp, params.outlen);
+	if (err)
+		goto destroy_qp;
+
 	return &qp->ibqp;
 
+destroy_qp:
+	if (qp->type == MLX5_IB_QPT_DCT)
+		mlx5_ib_destroy_dct(qp);
+	else
+		destroy_qp_common(dev, qp, udata);
+	qp = NULL;
 free_qp:
 	kfree(qp);
 free_ucmd:
@@ -2974,25 +2984,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 	return ERR_PTR(err);
 }
 
-static int mlx5_ib_destroy_dct(struct mlx5_ib_qp *mqp)
-{
-	struct mlx5_ib_dev *dev = to_mdev(mqp->ibqp.device);
-
-	if (mqp->state == IB_QPS_RTR) {
-		int err;
-
-		err = mlx5_core_destroy_dct(dev, &mqp->dct.mdct);
-		if (err) {
-			mlx5_ib_warn(dev, "failed to destroy DCT %d\n", err);
-			return err;
-		}
-	}
-
-	kfree(mqp->dct.in);
-	kfree(mqp);
-	return 0;
-}
-
 int mlx5_ib_destroy_qp(struct ib_qp *qp, struct ib_udata *udata)
 {
 	struct mlx5_ib_dev *dev = to_mdev(qp->device);
-- 
2.25.3



* [PATCH rdma-next 16/18] RDMA/mlx5: Remove redundant destroy QP call
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (14 preceding siblings ...)
  2020-04-23 19:03 ` [PATCH rdma-next 15/18] RDMA/mlx5: Copy response to the user " Leon Romanovsky
@ 2020-04-23 19:03 ` Leon Romanovsky
  2020-04-23 19:03 ` [PATCH rdma-next 17/18] RDMA/mlx5: Consolidate into special function all create QP calls Leon Romanovsky
  2020-04-23 19:03 ` [PATCH rdma-next 18/18] RDMA/mlx5: Verify that QP is created with RQ or SQ Leon Romanovsky
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:03 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

After the major refactoring of the create QP flow, it is no longer
needed to call destroy QP in the XRC_TGT flow.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8e94a824e29f..0612663868dd 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1887,7 +1887,6 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 			     struct mlx5_create_qp_params *params)
 {
 	struct ib_qp_init_attr *attr = params->attr;
-	struct ib_udata *udata = params->udata;
 	u32 uidx = params->uidx;
 	struct mlx5_ib_resources *devr = &dev->devr;
 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
@@ -1944,10 +1943,8 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	base = &qp->trans_qp.base;
 	err = mlx5_core_create_qp(dev, &base->mqp, in, inlen);
 	kvfree(in);
-	if (err) {
-		destroy_qp(dev, qp, base, udata);
+	if (err)
 		return err;
-	}
 
 	base->container_mibqp = qp;
 	base->mqp.event = mlx5_ib_qp_event;
-- 
2.25.3



* [PATCH rdma-next 17/18] RDMA/mlx5: Consolidate into special function all create QP calls
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (15 preceding siblings ...)
  2020-04-23 19:03 ` [PATCH rdma-next 16/18] RDMA/mlx5: Remove redundant destroy QP call Leon Romanovsky
@ 2020-04-23 19:03 ` Leon Romanovsky
  2020-04-23 19:03 ` [PATCH rdma-next 18/18] RDMA/mlx5: Verify that QP is created with RQ or SQ Leon Romanovsky
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:03 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Finish separating mlx5_ib_create_qp() into per-type blocks, so that
all internal create QP implementations are located in one place.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 85 +++++++++++++++++++--------------
 1 file changed, 49 insertions(+), 36 deletions(-)
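
The shape of the consolidated entry point, condensed from the diff
below (qp_num assignment and debug logging elided):

        static int create_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
                             struct mlx5_ib_qp *qp,
                             struct mlx5_create_qp_params *params)
        {
                if (params->is_rss_raw)
                        return create_rss_raw_qp_tir(dev, pd, qp, params);
                if (qp->type == MLX5_IB_QPT_DCT)
                        return create_dct(pd, qp, params);
                if (qp->type == IB_QPT_XRC_TGT)
                        return create_xrc_tgt_qp(dev, qp, params);
                return params->udata ? create_user_qp(dev, pd, qp, params) :
                                       create_kernel_qp(dev, pd, qp, params);
        }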

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 0612663868dd..9ee12622ac3c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1953,6 +1953,7 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	list_add_tail(&qp->qps_list, &dev->qp_list);
 	spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
 
+	qp->trans_qp.xrcdn = to_mxrcd(attr->xrcd)->xrcdn;
 	return 0;
 }
 
@@ -2785,14 +2786,54 @@ static int process_udata_size(struct mlx5_ib_dev *dev,
 	return (params->inlen) ? 0 : -EINVAL;
 }
 
-static int create_raw_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
-			 struct mlx5_ib_qp *qp,
-			 struct mlx5_create_qp_params *params)
+static int create_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+		     struct mlx5_ib_qp *qp,
+		     struct mlx5_create_qp_params *params)
 {
-	if (params->is_rss_raw)
-		return create_rss_raw_qp_tir(dev, pd, qp, params);
+	int err;
+
+	if (params->is_rss_raw) {
+		err = create_rss_raw_qp_tir(dev, pd, qp, params);
+		goto out;
+	}
+
+	if (qp->type == MLX5_IB_QPT_DCT) {
+		err = create_dct(pd, qp, params);
+		goto out;
+	}
+
+	if (qp->type == IB_QPT_XRC_TGT) {
+		err = create_xrc_tgt_qp(dev, qp, params);
+		goto out;
+	}
+
+	if (params->udata)
+		err = create_user_qp(dev, pd, qp, params);
+	else
+		err = create_kernel_qp(dev, pd, qp, params);
+
+out:
+	if (err) {
+		mlx5_ib_err(dev, "Create QP type %d failed\n", qp->type);
+		return err;
+	}
+
+	if (is_qp0(qp->type))
+		qp->ibqp.qp_num = 0;
+	else if (is_qp1(qp->type))
+		qp->ibqp.qp_num = 1;
+	else
+		qp->ibqp.qp_num = qp->trans_qp.base.mqp.qpn;
+
+	mlx5_ib_dbg(dev,
+		"QP type %d, ib qpn 0x%X, mlx qpn 0x%x, rcqn 0x%x, scqn 0x%x\n",
+		qp->type, qp->ibqp.qp_num, qp->trans_qp.base.mqp.qpn,
+		params->attr->recv_cq ? to_mcq(params->attr->recv_cq)->mcq.cqn :
+					-1,
+		params->attr->send_cq ? to_mcq(params->attr->send_cq)->mcq.cqn :
+					-1);
 
-	return create_user_qp(dev, pd, qp, params);
+	return 0;
 }
 
 static int check_qp_attr(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
@@ -2862,7 +2903,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
 	enum ib_qp_type type;
-	u16 xrcdn = 0;
 	int err;
 
 	dev = pd ? to_mdev(pd->device) :
@@ -2922,40 +2962,13 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attr,
 	if (err)
 		goto free_qp;
 
-	switch (qp->type) {
-	case IB_QPT_RAW_PACKET:
-		err = create_raw_qp(dev, pd, qp, &params);
-		break;
-	case MLX5_IB_QPT_DCT:
-		err = create_dct(pd, qp, &params);
-		break;
-	case IB_QPT_XRC_TGT:
-		xrcdn = to_mxrcd(attr->xrcd)->xrcdn;
-		err = create_xrc_tgt_qp(dev, qp, &params);
-		break;
-	default:
-		if (udata)
-			err = create_user_qp(dev, pd, qp, &params);
-		else
-			err = create_kernel_qp(dev, pd, qp, &params);
-	}
-	if (err) {
-		mlx5_ib_err(dev, "create_qp failed %d\n", err);
+	err = create_qp(dev, pd, qp, &params);
+	if (err)
 		goto free_qp;
-	}
 
 	kfree(params.ucmd);
 	params.ucmd = NULL;
 
-	if (is_qp0(attr->qp_type))
-		qp->ibqp.qp_num = 0;
-	else if (is_qp1(attr->qp_type))
-		qp->ibqp.qp_num = 1;
-	else
-		qp->ibqp.qp_num = qp->trans_qp.base.mqp.qpn;
-
-	qp->trans_qp.xrcdn = xrcdn;
-
 	if (udata)
 		/*
 		 * It is safe to copy response for all user create QP flows,
-- 
2.25.3



* [PATCH rdma-next 18/18] RDMA/mlx5: Verify that QP is created with RQ or SQ
  2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
                   ` (16 preceding siblings ...)
  2020-04-23 19:03 ` [PATCH rdma-next 17/18] RDMA/mlx5: Consolidate into special function all create QP calls Leon Romanovsky
@ 2020-04-23 19:03 ` Leon Romanovsky
  17 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2020-04-23 19:03 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Aharon Landau, Eli Cohen, Jack Morgenstein, linux-rdma,
	Maor Gottlieb, Or Gerlitz, Roland Dreier

From: Aharon Landau <aharonl@mellanox.com>

A RAW packet QP and an underlay QP must be created with at least
an RQ or an SQ; check that.

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9ee12622ac3c..956a7ad3843d 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1482,6 +1482,8 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	u16 uid = to_mpd(pd)->uid;
 	u32 out[MLX5_ST_SZ_DW(create_tir_out)] = {};
 
+	if (!qp->sq.wqe_cnt && !qp->rq.wqe_cnt)
+		return -EINVAL;
 	if (qp->sq.wqe_cnt) {
 		err = create_raw_packet_qp_tis(dev, qp, sq, tdn, pd);
 		if (err)
-- 
2.25.3



end of thread

Thread overview: 19+ messages
2020-04-23 19:02 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part II) Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 01/18] RDMA/mlx5: Delete unsupported QP types Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 02/18] RDMA/mlx5: Store QP type in the vendor QP structure Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 03/18] RDMA/mlx5: Promote RSS RAW QP attribute check in higher level Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 04/18] RDMA/mlx5: Combine copy of create QP command in RSS RAW QP Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 05/18] RDMA/mlx5: Remove second user copy in create_user_qp Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 06/18] RDMA/mlx5: Rely on existence of udata to separate kernel/user flows Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 07/18] RDMA/mlx5: Delete impossible inlen check Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 08/18] RDMA/mlx5: Globally parse DEVX UID Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 09/18] RDMA/mlx5: Separate XRC_TGT QP creation from common flow Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 10/18] RDMA/mlx5: Separate to user/kernel create QP flows Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 11/18] RDMA/mlx5: Reduce amount of duplication in QP destroy Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 12/18] RDMA/mlx5: Group all create QP parameters to simplify in-kernel interfaces Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 13/18] RDMA/mlx5: Promote RSS RAW QP flags check to higher level Leon Romanovsky
2020-04-23 19:02 ` [PATCH rdma-next 14/18] RDMA/mlx5: Handle udata outlen checks in one place Leon Romanovsky
2020-04-23 19:03 ` [PATCH rdma-next 15/18] RDMA/mlx5: Copy response to the user " Leon Romanovsky
2020-04-23 19:03 ` [PATCH rdma-next 16/18] RDMA/mlx5: Remove redundant destroy QP call Leon Romanovsky
2020-04-23 19:03 ` [PATCH rdma-next 17/18] RDMA/mlx5: Consolidate into special function all create QP calls Leon Romanovsky
2020-04-23 19:03 ` [PATCH rdma-next 18/18] RDMA/mlx5: Verify that QP is created with RQ or SQ Leon Romanovsky
