* [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I)
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, linux-kernel, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Hi,

This is the first part of a series that tries to return some sanity
to the mlx5_ib_create_qp() function. Such refactoring is required
so that the function can be extended with less worry of breaking
the driver.

An extra goal of this refactoring is to ensure that the QP is allocated
at the beginning of the function and released at the end. This will
allow us to move the QP allocation under IB/core responsibility.
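
An illustrative sketch of the target shape (standalone userspace C with
invented names, not the actual driver code): validate first, allocate
once at the top, and release through a single error-unwind label:

	#include <errno.h>
	#include <stdlib.h>

	struct qp { int num; };

	static struct qp *create_qp(int type, int *err)
	{
		struct qp *qp;

		*err = (type < 0) ? -EOPNOTSUPP : 0;	/* all checks up front */
		if (*err)
			return NULL;

		qp = calloc(1, sizeof(*qp));		/* single allocation point */
		if (!qp) {
			*err = -ENOMEM;
			return NULL;
		}

		*err = 0;	/* type-specific setup would go here */
		if (*err)
			goto free_qp;

		return qp;

	free_qp:					/* single release point */
		free(qp);
		return NULL;
	}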

It is based on the previously sent [1] "[PATCH mlx5-next 00/24] Mass
conversion to light mlx5 command interface".

Thanks

[1] https://lore.kernel.org/linux-rdma/20200420114136.264924-1-leon@kernel.org

Leon Romanovsky (18):
  RDMA/mlx5: Organize QP types checks in one place
  RDMA/mlx5: Delete impossible GSI port check
  RDMA/mlx5: Perform check if QP creation flow is valid
  RDMA/mlx5: Prepare QP allocation for future removal
  RDMA/mlx5: Avoid setting redundant NULL for XRC QPs
  RDMA/mlx5: Set QP subtype immediately when it is known
  RDMA/mlx5: Separate create QP flows to be based on type
  RDMA/mlx5: Split scatter CQE configuration for DCT QP
  RDMA/mlx5: Update all DRIVER QP places to use QP subtype
  RDMA/mlx5: Move DRIVER QP flags check into separate function
  RDMA/mlx5: Remove second copy from user for non RSS RAW QPs
  RDMA/mlx5: Initial separation of RAW_PACKET QP from common flow
  RDMA/mlx5: Delete create QP flags obfuscation
  RDMA/mlx5: Process create QP flags in one place
  RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE
    signature
  RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags
  RDMA/mlx5: Return all configured create flags through query QP
  RDMA/mlx5: Process all vendor flags in one place

 drivers/infiniband/hw/mlx5/devx.c    |   2 +-
 drivers/infiniband/hw/mlx5/flow.c    |   2 +-
 drivers/infiniband/hw/mlx5/gsi.c     |  10 -
 drivers/infiniband/hw/mlx5/main.c    |   9 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  24 +-
 drivers/infiniband/hw/mlx5/odp.c     |   2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 928 +++++++++++++--------------
 7 files changed, 467 insertions(+), 510 deletions(-)

--
2.25.2


* [PATCH rdma-next 01/18] RDMA/mlx5: Organize QP types checks in one place
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Check whether the QP type is supported in one place, at the beginning
of the create_qp() function, instead of the current implementation
where the checks are buried inside the code.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 129 +++++++++++++++++---------------
 1 file changed, 68 insertions(+), 61 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 0f986ce22a96..df63ad38509b 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2677,12 +2677,42 @@ static int set_mlx_qp_type(struct mlx5_ib_dev *dev,
 		}
 	}
 
-	if (!MLX5_CAP_GEN(dev->mdev, dct)) {
-		mlx5_ib_dbg(dev, "DC transport is not supported\n");
-		return -EOPNOTSUPP;
+	return 0;
+}
+
+static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
+{
+	if (attr->qp_type == IB_QPT_DRIVER && !MLX5_CAP_GEN(dev->mdev, dct))
+		goto out;
+
+	switch (attr->qp_type) {
+	case IB_QPT_XRC_TGT:
+	case IB_QPT_XRC_INI:
+		if (!MLX5_CAP_GEN(dev->mdev, xrc))
+			goto out;
+		fallthrough;
+	case IB_QPT_RAW_PACKET:
+	case IB_QPT_RC:
+	case IB_QPT_UC:
+	case IB_QPT_UD:
+	case IB_QPT_SMI:
+	case MLX5_IB_QPT_HW_GSI:
+	case MLX5_IB_QPT_REG_UMR:
+	case IB_QPT_DRIVER:
+	case IB_QPT_GSI:
+		return 0;
+	case IB_QPT_RAW_IPV6:
+	case IB_QPT_RAW_ETHERTYPE:
+	case IB_QPT_MAX:
+	default:
+		goto out;
 	}
 
 	return 0;
+
+out:
+	mlx5_ib_dbg(dev, "Unsupported QP type %d\n", attr->qp_type);
+	return -EOPNOTSUPP;
 }
 
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
@@ -2698,9 +2728,17 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
 
-	if (pd) {
-		dev = to_mdev(pd->device);
+	dev = pd ? to_mdev(pd->device) :
+		   to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
 
+	err = check_qp_type(dev, init_attr);
+	if (err) {
+		mlx5_ib_dbg(dev, "Unsupported QP type %d\n",
+			    init_attr->qp_type);
+		return ERR_PTR(err);
+	}
+
+	if (pd) {
 		if (init_attr->qp_type == IB_QPT_RAW_PACKET) {
 			if (!ucontext) {
 				mlx5_ib_dbg(dev, "Raw Packet QP is not supported for kernel consumers\n");
@@ -2718,7 +2756,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				ib_qp_type_str(init_attr->qp_type));
 			return ERR_PTR(-EINVAL);
 		}
-		dev = to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
 	}
 
 	if (init_attr->qp_type == IB_QPT_DRIVER) {
@@ -2741,67 +2778,37 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		}
 	}
 
-	switch (init_attr->qp_type) {
-	case IB_QPT_XRC_TGT:
-	case IB_QPT_XRC_INI:
-		if (!MLX5_CAP_GEN(dev->mdev, xrc)) {
-			mlx5_ib_dbg(dev, "XRC not supported\n");
-			return ERR_PTR(-ENOSYS);
-		}
-		init_attr->recv_cq = NULL;
-		if (init_attr->qp_type == IB_QPT_XRC_TGT) {
-			xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
-			init_attr->send_cq = NULL;
-		}
-
-		/* fall through */
-	case IB_QPT_RAW_PACKET:
-	case IB_QPT_RC:
-	case IB_QPT_UC:
-	case IB_QPT_UD:
-	case IB_QPT_SMI:
-	case MLX5_IB_QPT_HW_GSI:
-	case MLX5_IB_QPT_REG_UMR:
-	case MLX5_IB_QPT_DCI:
-		qp = kzalloc(sizeof(*qp), GFP_KERNEL);
-		if (!qp)
-			return ERR_PTR(-ENOMEM);
-
-		err = create_qp_common(dev, pd, init_attr, udata, qp);
-		if (err) {
-			mlx5_ib_dbg(dev, "create_qp_common failed\n");
-			kfree(qp);
-			return ERR_PTR(err);
-		}
+	if (init_attr->qp_type == IB_QPT_GSI)
+		return mlx5_ib_gsi_create_qp(pd, init_attr);
 
-		if (is_qp0(init_attr->qp_type))
-			qp->ibqp.qp_num = 0;
-		else if (is_qp1(init_attr->qp_type))
-			qp->ibqp.qp_num = 1;
-		else
-			qp->ibqp.qp_num = qp->trans_qp.base.mqp.qpn;
+	if (init_attr->qp_type == IB_QPT_XRC_TGT) {
+		init_attr->recv_cq = NULL;
+		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
+		init_attr->send_cq = NULL;
+	}
 
-		mlx5_ib_dbg(dev, "ib qpnum 0x%x, mlx qpn 0x%x, rcqn 0x%x, scqn 0x%x\n",
-			    qp->ibqp.qp_num, qp->trans_qp.base.mqp.qpn,
-			    init_attr->recv_cq ? to_mcq(init_attr->recv_cq)->mcq.cqn : -1,
-			    init_attr->send_cq ? to_mcq(init_attr->send_cq)->mcq.cqn : -1);
+	if (init_attr->qp_type == IB_QPT_XRC_INI)
+		init_attr->recv_cq = NULL;
 
-		qp->trans_qp.xrcdn = xrcdn;
+	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return ERR_PTR(-ENOMEM);
 
-		break;
+	err = create_qp_common(dev, pd, init_attr, udata, qp);
+	if (err) {
+		mlx5_ib_dbg(dev, "create_qp_common failed\n");
+		kfree(qp);
+		return ERR_PTR(err);
+	}
 
-	case IB_QPT_GSI:
-		return mlx5_ib_gsi_create_qp(pd, init_attr);
+	if (is_qp0(init_attr->qp_type))
+		qp->ibqp.qp_num = 0;
+	else if (is_qp1(init_attr->qp_type))
+		qp->ibqp.qp_num = 1;
+	else
+		qp->ibqp.qp_num = qp->trans_qp.base.mqp.qpn;
 
-	case IB_QPT_RAW_IPV6:
-	case IB_QPT_RAW_ETHERTYPE:
-	case IB_QPT_MAX:
-	default:
-		mlx5_ib_dbg(dev, "unsupported qp type %d\n",
-			    init_attr->qp_type);
-		/* Don't support raw QPs */
-		return ERR_PTR(-EOPNOTSUPP);
-	}
+	qp->trans_qp.xrcdn = xrcdn;
 
 	if (verbs_init_attr->qp_type == IB_QPT_DRIVER)
 		qp->qp_sub_type = init_attr->qp_type;
-- 
2.25.2


* [PATCH rdma-next 02/18] RDMA/mlx5: Delete impossible GSI port check
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The GSI QP is created in the kernel with very strict parameters; there
is no possible way that the port number will be wrong in such a flow.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/gsi.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/gsi.c b/drivers/infiniband/hw/mlx5/gsi.c
index fbae1c094fe2..40d418153891 100644
--- a/drivers/infiniband/hw/mlx5/gsi.c
+++ b/drivers/infiniband/hw/mlx5/gsi.c
@@ -122,8 +122,6 @@ struct ib_qp *mlx5_ib_gsi_create_qp(struct ib_pd *pd,
 	int num_qps = 0;
 	int ret;
 
-	mlx5_ib_dbg(dev, "creating GSI QP\n");
-
 	if (mlx5_ib_deth_sqpn_cap(dev)) {
 		if (MLX5_CAP_GEN(dev->mdev,
 				 port_type) == MLX5_CAP_PORT_TYPE_IB)
@@ -132,14 +130,6 @@ struct ib_qp *mlx5_ib_gsi_create_qp(struct ib_pd *pd,
 			num_qps = MLX5_MAX_PORTS;
 	}
 
-
-	if (port_num > ARRAY_SIZE(dev->devr.ports) || port_num < 1) {
-		mlx5_ib_warn(dev,
-			     "invalid port number %d during GSI QP creation\n",
-			     port_num);
-		return ERR_PTR(-EINVAL);
-	}
-
 	gsi = kzalloc(sizeof(*gsi), GFP_KERNEL);
 	if (!gsi)
 		return ERR_PTR(-ENOMEM);
-- 
2.25.2


* [PATCH rdma-next 03/18] RDMA/mlx5: Perform check if QP creation flow is valid
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Fast check that the kernel and user flows provide enough data
to create a QP.
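
As a generic illustration of the idiom (a standalone sketch with
invented names; the real check is in the patch below), a NULL udata
marks an in-kernel caller, so kernel-only and user-only restrictions
can be rejected up front:

	#include <errno.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct udata { size_t inlen; };

	/* NULL udata == kernel consumer; non-NULL == userspace consumer. */
	static int check_valid_flow(const struct udata *udata, bool raw_packet)
	{
		if (!udata)	/* kernel callers cannot create RAW_PACKET QPs */
			return raw_packet ? -EOPNOTSUPP : 0;

		return 0;	/* userspace callers go through further checks */
	}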

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 128 +++++++++++++++-----------------
 1 file changed, 61 insertions(+), 67 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index df63ad38509b..2722923a1859 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1666,9 +1666,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	size_t required_cmd_sz;
 	u8 lb_flag = 0;
 
-	if (init_attr->qp_type != IB_QPT_RAW_PACKET)
-		return -EOPNOTSUPP;
-
 	if (init_attr->create_flags || init_attr->send_cq)
 		return -EINVAL;
 
@@ -2032,13 +2029,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (mlx5_st < 0)
 		return -EINVAL;
 
-	if (init_attr->rwq_ind_tbl) {
-		if (!udata)
-			return -ENOSYS;
-
-		err = create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
-		return err;
-	}
+	if (init_attr->rwq_ind_tbl)
+		return create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
 
 	if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
 		if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
@@ -2565,39 +2557,6 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
 }
 
-static const char *ib_qp_type_str(enum ib_qp_type type)
-{
-	switch (type) {
-	case IB_QPT_SMI:
-		return "IB_QPT_SMI";
-	case IB_QPT_GSI:
-		return "IB_QPT_GSI";
-	case IB_QPT_RC:
-		return "IB_QPT_RC";
-	case IB_QPT_UC:
-		return "IB_QPT_UC";
-	case IB_QPT_UD:
-		return "IB_QPT_UD";
-	case IB_QPT_RAW_IPV6:
-		return "IB_QPT_RAW_IPV6";
-	case IB_QPT_RAW_ETHERTYPE:
-		return "IB_QPT_RAW_ETHERTYPE";
-	case IB_QPT_XRC_INI:
-		return "IB_QPT_XRC_INI";
-	case IB_QPT_XRC_TGT:
-		return "IB_QPT_XRC_TGT";
-	case IB_QPT_RAW_PACKET:
-		return "IB_QPT_RAW_PACKET";
-	case MLX5_IB_QPT_REG_UMR:
-		return "MLX5_IB_QPT_REG_UMR";
-	case IB_QPT_DRIVER:
-		return "IB_QPT_DRIVER";
-	case IB_QPT_MAX:
-	default:
-		return "Invalid QP type";
-	}
-}
-
 static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
 					struct ib_qp_init_attr *attr,
 					struct mlx5_ib_create_qp *ucmd,
@@ -2655,9 +2614,6 @@ static int set_mlx_qp_type(struct mlx5_ib_dev *dev,
 	enum { MLX_QP_FLAGS = MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI };
 	int err;
 
-	if (!udata)
-		return -EINVAL;
-
 	if (udata->inlen < sizeof(*ucmd)) {
 		mlx5_ib_dbg(dev, "create_qp user command is smaller than expected\n");
 		return -EINVAL;
@@ -2715,6 +2671,62 @@ static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
 	return -EOPNOTSUPP;
 }
 
+static int check_valid_flow(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+			    struct ib_qp_init_attr *attr,
+			    struct ib_udata *udata)
+{
+	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
+		udata, struct mlx5_ib_ucontext, ibucontext);
+
+	if (!udata) {
+		/* Kernel create_qp callers */
+		if (attr->rwq_ind_tbl)
+			return -EOPNOTSUPP;
+
+		switch (attr->qp_type) {
+		case IB_QPT_RAW_PACKET:
+		case IB_QPT_DRIVER:
+			return -EOPNOTSUPP;
+		default:
+			return 0;
+		}
+	}
+
+	/* Userspace create_qp callers */
+	if (attr->qp_type == IB_QPT_RAW_PACKET && !ucontext->cqe_version) {
+		mlx5_ib_dbg(dev,
+			"Raw Packet QP is only supported for CQE version > 0\n");
+		return -EINVAL;
+	}
+
+	if (attr->qp_type != IB_QPT_RAW_PACKET && attr->rwq_ind_tbl) {
+		mlx5_ib_dbg(dev,
+			    "Wrong QP type %d for the RWQ indirect table\n",
+			    attr->qp_type);
+		return -EINVAL;
+	}
+
+	switch (attr->qp_type) {
+	case IB_QPT_SMI:
+	case MLX5_IB_QPT_HW_GSI:
+	case MLX5_IB_QPT_REG_UMR:
+	case IB_QPT_GSI:
+		mlx5_ib_dbg(dev, "Kernel doesn't support QP type %d\n",
+			    attr->qp_type);
+		return -EINVAL;
+	default:
+		break;
+	}
+
+	/*
+	 * We don't need to see this warning, it means that kernel code
+	 * missing ib_pd. Placed here to catch developer's mistakes.
+	 */
+	WARN_ONCE(!pd && attr->qp_type != IB_QPT_XRC_TGT,
+		  "There is a missing PD pointer assignment\n");
+	return 0;
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *verbs_init_attr,
 				struct ib_udata *udata)
@@ -2725,8 +2737,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	int err;
 	struct ib_qp_init_attr mlx_init_attr;
 	struct ib_qp_init_attr *init_attr = verbs_init_attr;
-	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
-		udata, struct mlx5_ib_ucontext, ibucontext);
 
 	dev = pd ? to_mdev(pd->device) :
 		   to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
@@ -2738,25 +2748,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		return ERR_PTR(err);
 	}
 
-	if (pd) {
-		if (init_attr->qp_type == IB_QPT_RAW_PACKET) {
-			if (!ucontext) {
-				mlx5_ib_dbg(dev, "Raw Packet QP is not supported for kernel consumers\n");
-				return ERR_PTR(-EINVAL);
-			} else if (!ucontext->cqe_version) {
-				mlx5_ib_dbg(dev, "Raw Packet QP is only supported for CQE version > 0\n");
-				return ERR_PTR(-EINVAL);
-			}
-		}
-	} else {
-		/* being cautious here */
-		if (init_attr->qp_type != IB_QPT_XRC_TGT &&
-		    init_attr->qp_type != MLX5_IB_QPT_REG_UMR) {
-			pr_warn("%s: no PD for transport %s\n", __func__,
-				ib_qp_type_str(init_attr->qp_type));
-			return ERR_PTR(-EINVAL);
-		}
-	}
+	err = check_valid_flow(dev, pd, init_attr, udata);
+	if (err)
+		return ERR_PTR(err);
 
 	if (init_attr->qp_type == IB_QPT_DRIVER) {
 		struct mlx5_ib_create_qp ucmd;
-- 
2.25.2


* [PATCH rdma-next 04/18] RDMA/mlx5: Prepare QP allocation for future removal
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Unify the QP memory allocation across the different paths,
so that it happens in one place.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 45 +++++++++++++++------------------
 1 file changed, 20 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 2722923a1859..6a025153cb93 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2557,14 +2557,13 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
 }
 
-static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
+static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 					struct ib_qp_init_attr *attr,
 					struct mlx5_ib_create_qp *ucmd,
 					struct ib_udata *udata)
 {
 	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
-	struct mlx5_ib_qp *qp;
 	int err = 0;
 	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 	void *dctc;
@@ -2576,15 +2575,9 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
 	if (err)
 		return ERR_PTR(err);
 
-	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
-	if (!qp)
-		return ERR_PTR(-ENOMEM);
-
 	qp->dct.in = kzalloc(MLX5_ST_SZ_BYTES(create_dct_in), GFP_KERNEL);
-	if (!qp->dct.in) {
-		err = -ENOMEM;
-		goto err_free;
-	}
+	if (!qp->dct.in)
+		return ERR_PTR(-ENOMEM);
 
 	MLX5_SET(create_dct_in, qp->dct.in, uid, to_mpd(pd)->uid);
 	dctc = MLX5_ADDR_OF(create_dct_in, qp->dct.in, dct_context_entry);
@@ -2601,9 +2594,6 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
 	qp->state = IB_QPS_RESET;
 
 	return &qp->ibqp;
-err_free:
-	kfree(qp);
-	return ERR_PTR(err);
 }
 
 static int set_mlx_qp_type(struct mlx5_ib_dev *dev,
@@ -2752,6 +2742,13 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (err)
 		return ERR_PTR(err);
 
+	if (init_attr->qp_type == IB_QPT_GSI)
+		return mlx5_ib_gsi_create_qp(pd, init_attr);
+
+	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return ERR_PTR(-ENOMEM);
+
 	if (init_attr->qp_type == IB_QPT_DRIVER) {
 		struct mlx5_ib_create_qp ucmd;
 
@@ -2759,22 +2756,21 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		memcpy(init_attr, verbs_init_attr, sizeof(*verbs_init_attr));
 		err = set_mlx_qp_type(dev, init_attr, &ucmd, udata);
 		if (err)
-			return ERR_PTR(err);
+			goto free_qp;
 
 		if (init_attr->qp_type == MLX5_IB_QPT_DCI) {
 			if (init_attr->cap.max_recv_wr ||
 			    init_attr->cap.max_recv_sge) {
 				mlx5_ib_dbg(dev, "DCI QP requires zero size receive queue\n");
-				return ERR_PTR(-EINVAL);
+				err = -EINVAL;
+				goto free_qp;
 			}
 		} else {
-			return mlx5_ib_create_dct(pd, init_attr, &ucmd, udata);
+			return mlx5_ib_create_dct(pd, qp, init_attr, &ucmd,
+						  udata);
 		}
 	}
 
-	if (init_attr->qp_type == IB_QPT_GSI)
-		return mlx5_ib_gsi_create_qp(pd, init_attr);
-
 	if (init_attr->qp_type == IB_QPT_XRC_TGT) {
 		init_attr->recv_cq = NULL;
 		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
@@ -2784,15 +2780,10 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (init_attr->qp_type == IB_QPT_XRC_INI)
 		init_attr->recv_cq = NULL;
 
-	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
-	if (!qp)
-		return ERR_PTR(-ENOMEM);
-
 	err = create_qp_common(dev, pd, init_attr, udata, qp);
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp_common failed\n");
-		kfree(qp);
-		return ERR_PTR(err);
+		goto free_qp;
 	}
 
 	if (is_qp0(init_attr->qp_type))
@@ -2808,6 +2799,10 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		qp->qp_sub_type = init_attr->qp_type;
 
 	return &qp->ibqp;
+
+free_qp:
+	kfree(qp);
+	return ERR_PTR(err);
 }
 
 static int mlx5_ib_destroy_dct(struct mlx5_ib_qp *mqp)
-- 
2.25.2


* [PATCH rdma-next 05/18] RDMA/mlx5: Avoid setting redundant NULL for XRC QPs
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

There is no need to set recv_cq and send_cq to NULL; they are already
set to NULL by the IB/core logic.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 6a025153cb93..2039f5391e20 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2771,14 +2771,8 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		}
 	}
 
-	if (init_attr->qp_type == IB_QPT_XRC_TGT) {
-		init_attr->recv_cq = NULL;
+	if (init_attr->qp_type == IB_QPT_XRC_TGT)
 		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
-		init_attr->send_cq = NULL;
-	}
-
-	if (init_attr->qp_type == IB_QPT_XRC_INI)
-		init_attr->recv_cq = NULL;
 
 	err = create_qp_common(dev, pd, init_attr, udata, qp);
 	if (err) {
-- 
2.25.2


* [PATCH rdma-next 06/18] RDMA/mlx5: Set QP subtype immediately when it is known
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

There is no need to delay the QP subtype assignment to the end of the
create_qp() function. It is better to set it immediately after it is
checked, so that later checks can be rewritten to rely on the subtype
and not on the overwritten struct ib_qp_init_attr.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 2039f5391e20..35308e7ae312 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2581,7 +2581,6 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 
 	MLX5_SET(create_dct_in, qp->dct.in, uid, to_mpd(pd)->uid);
 	dctc = MLX5_ADDR_OF(create_dct_in, qp->dct.in, dct_context_entry);
-	qp->qp_sub_type = MLX5_IB_QPT_DCT;
 	MLX5_SET(dctc, dctc, pd, to_mpd(pd)->pdn);
 	MLX5_SET(dctc, dctc, srqn_xrqn, to_msrq(attr->srq)->msrq.srqn);
 	MLX5_SET(dctc, dctc, cqn, to_mcq(attr->recv_cq)->mcq.cqn);
@@ -2765,7 +2764,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				err = -EINVAL;
 				goto free_qp;
 			}
+			qp->qp_sub_type = MLX5_IB_QPT_DCI;
 		} else {
+			qp->qp_sub_type = MLX5_IB_QPT_DCT;
 			return mlx5_ib_create_dct(pd, qp, init_attr, &ucmd,
 						  udata);
 		}
@@ -2789,9 +2790,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 
 	qp->trans_qp.xrcdn = xrcdn;
 
-	if (verbs_init_attr->qp_type == IB_QPT_DRIVER)
-		qp->qp_sub_type = init_attr->qp_type;
-
 	return &qp->ibqp;
 
 free_qp:
-- 
2.25.2


* [PATCH rdma-next 07/18] RDMA/mlx5: Separate create QP flows to be based on type
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Move the driver QP creation flow into separate functions to simplify
create_qp() and to allow a future split of create_qp_common()
into per-subtype flows.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 54 ++++++++++++++++++++++++---------
 1 file changed, 39 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 35308e7ae312..3b10fa5c6db8 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2557,10 +2557,9 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 		destroy_qp_user(dev, &get_pd(qp)->ibpd, qp, base, udata);
 }
 
-static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
-					struct ib_qp_init_attr *attr,
-					struct mlx5_ib_create_qp *ucmd,
-					struct ib_udata *udata)
+static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
+		      struct ib_qp_init_attr *attr,
+		      struct mlx5_ib_create_qp *ucmd, struct ib_udata *udata)
 {
 	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
@@ -2568,16 +2567,13 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 	void *dctc;
 
-	if (!attr->srq || !attr->recv_cq)
-		return ERR_PTR(-EINVAL);
-
 	err = get_qp_user_index(ucontext, ucmd, sizeof(*ucmd), &uidx);
 	if (err)
-		return ERR_PTR(err);
+		return err;
 
 	qp->dct.in = kzalloc(MLX5_ST_SZ_BYTES(create_dct_in), GFP_KERNEL);
 	if (!qp->dct.in)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	MLX5_SET(create_dct_in, qp->dct.in, uid, to_mpd(pd)->uid);
 	dctc = MLX5_ADDR_OF(create_dct_in, qp->dct.in, dct_context_entry);
@@ -2592,7 +2588,7 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 
 	qp->state = IB_QPS_RESET;
 
-	return &qp->ibqp;
+	return 0;
 }
 
 static int set_mlx_qp_type(struct mlx5_ib_dev *dev,
@@ -2716,10 +2712,36 @@ static int check_valid_flow(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 }
 
+static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
+			    struct ib_qp_init_attr *attr,
+			    struct mlx5_ib_create_qp *ucmd,
+			    struct ib_udata *udata)
+{
+	struct mlx5_ib_dev *mdev = to_mdev(pd->device);
+	int ret = -EINVAL;
+
+	switch (qp->qp_sub_type) {
+	case MLX5_IB_QPT_DCT:
+		if (!attr->srq || !attr->recv_cq)
+			goto out;
+
+		ret = create_dct(pd, qp, attr, ucmd, udata);
+		break;
+	case MLX5_IB_QPT_DCI:
+		ret = create_qp_common(mdev, pd, attr, udata, qp);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+out:	return ret;
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *verbs_init_attr,
 				struct ib_udata *udata)
 {
+	struct mlx5_ib_create_qp ucmd = {};
 	struct mlx5_ib_dev *dev;
 	struct mlx5_ib_qp *qp;
 	u16 xrcdn = 0;
@@ -2749,8 +2771,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		return ERR_PTR(-ENOMEM);
 
 	if (init_attr->qp_type == IB_QPT_DRIVER) {
-		struct mlx5_ib_create_qp ucmd;
-
 		init_attr = &mlx_init_attr;
 		memcpy(init_attr, verbs_init_attr, sizeof(*verbs_init_attr));
 		err = set_mlx_qp_type(dev, init_attr, &ucmd, udata);
@@ -2767,15 +2787,19 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 			qp->qp_sub_type = MLX5_IB_QPT_DCI;
 		} else {
 			qp->qp_sub_type = MLX5_IB_QPT_DCT;
-			return mlx5_ib_create_dct(pd, qp, init_attr, &ucmd,
-						  udata);
 		}
 	}
 
 	if (init_attr->qp_type == IB_QPT_XRC_TGT)
 		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
 
-	err = create_qp_common(dev, pd, init_attr, udata, qp);
+	switch (init_attr->qp_type) {
+	case IB_QPT_DRIVER:
+		err = create_driver_qp(pd, qp, init_attr, &ucmd, udata);
+		break;
+	default:
+		err = create_qp_common(dev, pd, init_attr, udata, qp);
+	}
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp_common failed\n");
 		goto free_qp;
-- 
2.25.2


* [PATCH rdma-next 08/18] RDMA/mlx5: Split scatter CQE configuration for DCT QP
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

DCT QPs have a separate creation flow, so their handling can easily be
extracted from configure_responder_scat_cqe(); this makes both updated
functions clearer.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 3b10fa5c6db8..3d19adc7c260 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1907,13 +1907,6 @@ static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
 
 	rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
 
-	if (init_attr->qp_type == MLX5_IB_QPT_DCT) {
-		if (rcqe_sz == 128)
-			MLX5_SET(dctc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
-
-		return;
-	}
-
 	MLX5_SET(qpc, qpc, cs_res,
 		 rcqe_sz == 128 ? MLX5_RES_SCAT_DATA64_CQE :
 				  MLX5_RES_SCAT_DATA32_CQE);
@@ -2583,8 +2576,12 @@ static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	MLX5_SET64(dctc, dctc, dc_access_key, ucmd->access_key);
 	MLX5_SET(dctc, dctc, user_index, uidx);
 
-	if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE)
-		configure_responder_scat_cqe(attr, dctc);
+	if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE) {
+		int rcqe_sz = mlx5_ib_get_cqe_size(attr->recv_cq);
+
+		if (rcqe_sz == 128)
+			MLX5_SET(dctc, dctc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
+	}
 
 	qp->state = IB_QPS_RESET;
 
-- 
2.25.2


* [PATCH rdma-next 09/18] RDMA/mlx5: Update all DRIVER QP places to use QP subtype
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Instead of overwriting the QP init attributes with the driver QP
subtype, use that subtype directly. This change will allow us to remove
the logic that cached the QP init attributes.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 48 +++++++++++----------------------
 1 file changed, 15 insertions(+), 33 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 3d19adc7c260..4e6e72adb4c3 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1232,7 +1232,7 @@ static void destroy_qp_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp)
 static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 {
 	if (attr->srq || (attr->qp_type == IB_QPT_XRC_TGT) ||
-	    (attr->qp_type == MLX5_IB_QPT_DCI) ||
+	    (qp->qp_sub_type == MLX5_IB_QPT_DCI) ||
 	    (attr->qp_type == IB_QPT_XRC_INI))
 		return MLX5_SRQ_RQ;
 	else if (!qp->has_rq)
@@ -1241,15 +1241,6 @@ static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 		return MLX5_NON_ZERO_RQ;
 }
 
-static int is_connected(enum ib_qp_type qp_type)
-{
-	if (qp_type == IB_QPT_RC || qp_type == IB_QPT_UC ||
-	    qp_type == MLX5_IB_QPT_DCI)
-		return 1;
-
-	return 0;
-}
-
 static int create_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
 				    struct mlx5_ib_qp *qp,
 				    struct mlx5_ib_sq *sq, u32 tdn,
@@ -1897,33 +1888,14 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return err;
 }
 
-static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
-					 void *qpc)
-{
-	int rcqe_sz;
-
-	if (init_attr->qp_type == MLX5_IB_QPT_DCI)
-		return;
-
-	rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
-
-	MLX5_SET(qpc, qpc, cs_res,
-		 rcqe_sz == 128 ? MLX5_RES_SCAT_DATA64_CQE :
-				  MLX5_RES_SCAT_DATA32_CQE);
-}
-
 static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 					 struct ib_qp_init_attr *init_attr,
 					 struct mlx5_ib_create_qp *ucmd,
 					 void *qpc)
 {
-	enum ib_qp_type qpt = init_attr->qp_type;
 	int scqe_sz;
 	bool allow_scat_cqe = false;
 
-	if (qpt == IB_QPT_UC || qpt == IB_QPT_UD)
-		return;
-
 	if (ucmd)
 		allow_scat_cqe = ucmd->flags & MLX5_QP_FLAG_ALLOW_SCATTER_CQE;
 
@@ -2018,7 +1990,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	spin_lock_init(&qp->sq.lock);
 	spin_lock_init(&qp->rq.lock);
 
-	mlx5_st = to_mlx5_st(init_attr->qp_type);
+	mlx5_st = to_mlx5_st((init_attr->qp_type != IB_QPT_DRIVER) ?
+				     init_attr->qp_type :
+				     qp->qp_sub_type);
 	if (mlx5_st < 0)
 		return -EINVAL;
 
@@ -2240,12 +2214,20 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
 	if (qp->flags & MLX5_IB_QP_PACKET_BASED_CREDIT)
 		MLX5_SET(qpc, qpc, req_e2e_credit_mode, 1);
-	if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
-		configure_responder_scat_cqe(init_attr, qpc);
+	if (qp->scat_cqe && (init_attr->qp_type == IB_QPT_RC ||
+			     init_attr->qp_type == IB_QPT_UC)) {
+		int rcqe_sz =
+			mlx5_ib_get_cqe_size(init_attr->recv_cq);
+
+		MLX5_SET(qpc, qpc, cs_res,
+			 rcqe_sz == 128 ? MLX5_RES_SCAT_DATA64_CQE :
+					  MLX5_RES_SCAT_DATA32_CQE);
+	}
+	if (qp->scat_cqe && (qp->qp_sub_type == MLX5_IB_QPT_DCI ||
+			     init_attr->qp_type == IB_QPT_RC))
 		configure_requester_scat_cqe(dev, init_attr,
 					     udata ? &ucmd : NULL,
 					     qpc);
-	}
 
 	if (qp->rq.wqe_cnt) {
 		MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
-- 
2.25.2


* [PATCH rdma-next 10/18] RDMA/mlx5: Move DRIVER QP flags check into separate function
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Perform the validation of DRIVER QPs in the relevant function.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 91 ++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 48 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 4e6e72adb4c3..9ae8b43a77d4 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2570,36 +2570,6 @@ static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	return 0;
 }
 
-static int set_mlx_qp_type(struct mlx5_ib_dev *dev,
-			   struct ib_qp_init_attr *init_attr,
-			   struct mlx5_ib_create_qp *ucmd,
-			   struct ib_udata *udata)
-{
-	enum { MLX_QP_FLAGS = MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI };
-	int err;
-
-	if (udata->inlen < sizeof(*ucmd)) {
-		mlx5_ib_dbg(dev, "create_qp user command is smaller than expected\n");
-		return -EINVAL;
-	}
-	err = ib_copy_from_udata(ucmd, udata, sizeof(*ucmd));
-	if (err)
-		return err;
-
-	if ((ucmd->flags & MLX_QP_FLAGS) == MLX5_QP_FLAG_TYPE_DCI) {
-		init_attr->qp_type = MLX5_IB_QPT_DCI;
-	} else {
-		if ((ucmd->flags & MLX_QP_FLAGS) == MLX5_QP_FLAG_TYPE_DCT) {
-			init_attr->qp_type = MLX5_IB_QPT_DCT;
-		} else {
-			mlx5_ib_dbg(dev, "Invalid QP flags\n");
-			return -EINVAL;
-		}
-	}
-
-	return 0;
-}
-
 static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr)
 {
 	if (attr->qp_type == IB_QPT_DRIVER && !MLX5_CAP_GEN(dev->mdev, dct))
@@ -2691,6 +2661,24 @@ static int check_valid_flow(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 }
 
+static int process_vendor_flags(struct mlx5_ib_qp *qp,
+				struct ib_qp_init_attr *attr,
+				struct mlx5_ib_create_qp *ucmd)
+{
+	switch (ucmd->flags & (MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI)) {
+	case MLX5_QP_FLAG_TYPE_DCI:
+		qp->qp_sub_type = MLX5_IB_QPT_DCI;
+		break;
+	case MLX5_QP_FLAG_TYPE_DCT:
+		qp->qp_sub_type = MLX5_IB_QPT_DCT;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 			    struct ib_qp_init_attr *attr,
 			    struct mlx5_ib_create_qp *ucmd,
@@ -2707,6 +2695,9 @@ static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		ret = create_dct(pd, qp, attr, ucmd, udata);
 		break;
 	case MLX5_IB_QPT_DCI:
+		if (attr->cap.max_recv_wr || attr->cap.max_recv_sge)
+			goto out;
+
 		ret = create_qp_common(mdev, pd, attr, udata, qp);
 		break;
 	default:
@@ -2716,8 +2707,16 @@ static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 out:	return ret;
 }
 
+static size_t process_udata_size(struct ib_qp_init_attr *attr,
+				 struct ib_udata *udata)
+{
+	size_t ucmd = sizeof(struct mlx5_ib_create_qp);
+
+	return (udata->inlen < ucmd) ? 0 : ucmd;
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
-				struct ib_qp_init_attr *verbs_init_attr,
+				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata)
 {
 	struct mlx5_ib_create_qp ucmd = {};
@@ -2725,8 +2724,6 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	struct mlx5_ib_qp *qp;
 	u16 xrcdn = 0;
 	int err;
-	struct ib_qp_init_attr mlx_init_attr;
-	struct ib_qp_init_attr *init_attr = verbs_init_attr;
 
 	dev = pd ? to_mdev(pd->device) :
 		   to_mdev(to_mxrcd(init_attr->xrcd)->ibxrcd.device);
@@ -2745,28 +2742,26 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (init_attr->qp_type == IB_QPT_GSI)
 		return mlx5_ib_gsi_create_qp(pd, init_attr);
 
+	if (udata && init_attr->qp_type == IB_QPT_DRIVER) {
+		size_t inlen =
+			process_udata_size(init_attr, udata);
+
+		if (!inlen)
+			return ERR_PTR(-EINVAL);
+
+		err = ib_copy_from_udata(&ucmd, udata, inlen);
+		if (err)
+			return ERR_PTR(err);
+	}
+
 	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
 	if (!qp)
 		return ERR_PTR(-ENOMEM);
 
 	if (init_attr->qp_type == IB_QPT_DRIVER) {
-		init_attr = &mlx_init_attr;
-		memcpy(init_attr, verbs_init_attr, sizeof(*verbs_init_attr));
-		err = set_mlx_qp_type(dev, init_attr, &ucmd, udata);
+		err = process_vendor_flags(qp, init_attr, &ucmd);
 		if (err)
 			goto free_qp;
-
-		if (init_attr->qp_type == MLX5_IB_QPT_DCI) {
-			if (init_attr->cap.max_recv_wr ||
-			    init_attr->cap.max_recv_sge) {
-				mlx5_ib_dbg(dev, "DCI QP requires zero size receive queue\n");
-				err = -EINVAL;
-				goto free_qp;
-			}
-			qp->qp_sub_type = MLX5_IB_QPT_DCI;
-		} else {
-			qp->qp_sub_type = MLX5_IB_QPT_DCT;
-		}
 	}
 
 	if (init_attr->qp_type == IB_QPT_XRC_TGT)
-- 
2.25.2


* [PATCH rdma-next 11/18] RDMA/mlx5: Remove second copy from user for non RSS RAW QPs
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Change the common code to use the already-copied user command buffer.
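
As a standalone illustration of the pattern (invented names, not the
driver code), the entry point copies the user command once and the
helpers receive the kernel copy, instead of re-copying from user memory
in each helper:

	#include <errno.h>
	#include <string.h>

	struct ucmd { unsigned int flags; };

	/* Helpers take the already-copied command... */
	static int helper(const struct ucmd *ucmd)
	{
		return ucmd->flags ? 0 : -EINVAL;
	}

	/* ...so only the entry point touches the user buffer. */
	static int entry(const void *user_buf, size_t len)
	{
		struct ucmd ucmd = { 0 };

		if (len < sizeof(ucmd))
			return -EINVAL;
		memcpy(&ucmd, user_buf, sizeof(ucmd));	/* single copy point */

		return helper(&ucmd);
	}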

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 56 ++++++++++++++++-----------------
 1 file changed, 27 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9ae8b43a77d4..ac65fc37b805 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1967,6 +1967,7 @@ static inline bool check_flags_mask(uint64_t input, uint64_t supported)
 
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
+			    struct mlx5_ib_create_qp *ucmd,
 			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
 {
 	struct mlx5_ib_resources *devr = &dev->devr;
@@ -1979,7 +1980,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	struct mlx5_ib_cq *recv_cq;
 	unsigned long flags;
 	u32 uidx = MLX5_IB_DEFAULT_UIDX;
-	struct mlx5_ib_create_qp ucmd;
 	struct mlx5_ib_qp_base *base;
 	int mlx5_st;
 	void *qpc;
@@ -2056,12 +2056,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	}
 
 	if (udata) {
-		if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
-			mlx5_ib_dbg(dev, "copy failed\n");
-			return -EFAULT;
-		}
-
-		if (!check_flags_mask(ucmd.flags,
+		if (!check_flags_mask(ucmd->flags,
 				      MLX5_QP_FLAG_ALLOW_SCATTER_CQE |
 				      MLX5_QP_FLAG_BFREG_INDEX |
 				      MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE |
@@ -2075,14 +2070,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 				      MLX5_QP_FLAG_TYPE_DCT))
 			return -EINVAL;
 
-		err = get_qp_user_index(ucontext, &ucmd, udata->inlen, &uidx);
+		err = get_qp_user_index(ucontext, ucmd, udata->inlen, &uidx);
 		if (err)
 			return err;
 
-		qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
+		qp->wq_sig = !!(ucmd->flags & MLX5_QP_FLAG_SIGNATURE);
 		if (MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
-			qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
-		if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
+			qp->scat_cqe =
+				!!(ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE);
+		if (ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
 			    !tunnel_offload_supported(mdev)) {
 				mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
@@ -2091,7 +2087,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			qp->flags_en |= MLX5_QP_FLAG_TUNNEL_OFFLOADS;
 		}
 
-		if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC) {
+		if (ucmd->flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
 				mlx5_ib_dbg(dev, "Self-LB UC isn't supported\n");
 				return -EOPNOTSUPP;
@@ -2099,7 +2095,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
 		}
 
-		if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
+		if (ucmd->flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
 				mlx5_ib_dbg(dev, "Self-LB UM isn't supported\n");
 				return -EOPNOTSUPP;
@@ -2107,7 +2103,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC;
 		}
 
-		if (ucmd.flags & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE) {
+		if (ucmd->flags & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE) {
 			if (init_attr->qp_type != IB_QPT_RC ||
 				!MLX5_CAP_GEN(dev->mdev, qp_packet_based)) {
 				mlx5_ib_dbg(dev, "packet based credit mode isn't supported\n");
@@ -2138,8 +2134,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	       &qp->trans_qp.base;
 
 	qp->has_rq = qp_has_rq(init_attr);
-	err = set_rq_size(dev, &init_attr->cap, qp->has_rq,
-			  qp, udata ? &ucmd : NULL);
+	err = set_rq_size(dev, &init_attr->cap, qp->has_rq, qp, ucmd);
 	if (err) {
 		mlx5_ib_dbg(dev, "err %d\n", err);
 		return err;
@@ -2149,15 +2144,16 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		if (udata) {
 			__u32 max_wqes =
 				1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
-			mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
-			if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
-			    ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
+			mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n",
+				    ucmd->sq_wqe_count);
+			if (ucmd->rq_wqe_shift != qp->rq.wqe_shift ||
+			    ucmd->rq_wqe_count != qp->rq.wqe_cnt) {
 				mlx5_ib_dbg(dev, "invalid rq params\n");
 				return -EINVAL;
 			}
-			if (ucmd.sq_wqe_count > max_wqes) {
+			if (ucmd->sq_wqe_count > max_wqes) {
 				mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
-					    ucmd.sq_wqe_count, max_wqes);
+					    ucmd->sq_wqe_count, max_wqes);
 				return -EINVAL;
 			}
 			if (init_attr->create_flags &
@@ -2225,9 +2221,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	}
 	if (qp->scat_cqe && (qp->qp_sub_type == MLX5_IB_QPT_DCI ||
 			     init_attr->qp_type == IB_QPT_RC))
-		configure_requester_scat_cqe(dev, init_attr,
-					     udata ? &ucmd : NULL,
-					     qpc);
+		configure_requester_scat_cqe(dev, init_attr, ucmd, qpc);
 
 	if (qp->rq.wqe_cnt) {
 		MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
@@ -2308,7 +2302,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
 	    qp->flags & MLX5_IB_QP_UNDERLAY) {
-		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
+		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd->sq_buf_addr;
 		raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
 		err = create_raw_packet_qp(dev, qp, in, inlen, pd, udata,
 					   &resp);
@@ -2698,7 +2692,7 @@ static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		if (attr->cap.max_recv_wr || attr->cap.max_recv_sge)
 			goto out;
 
-		ret = create_qp_common(mdev, pd, attr, udata, qp);
+		ret = create_qp_common(mdev, pd, attr, ucmd, udata, qp);
 		break;
 	default:
 		return -EINVAL;
@@ -2712,7 +2706,10 @@ static size_t process_udata_size(struct ib_qp_init_attr *attr,
 {
 	size_t ucmd = sizeof(struct mlx5_ib_create_qp);
 
-	return (udata->inlen < ucmd) ? 0 : ucmd;
+	if (attr->qp_type == IB_QPT_DRIVER)
+		return (udata->inlen < ucmd) ? 0 : ucmd;
+
+	return ucmd;
 }
 
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
@@ -2742,7 +2739,7 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (init_attr->qp_type == IB_QPT_GSI)
 		return mlx5_ib_gsi_create_qp(pd, init_attr);
 
-	if (udata && init_attr->qp_type == IB_QPT_DRIVER) {
+	if (udata && !init_attr->rwq_ind_tbl) {
 		size_t inlen =
 			process_udata_size(init_attr, udata);
 
@@ -2772,7 +2769,8 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		err = create_driver_qp(pd, qp, init_attr, &ucmd, udata);
 		break;
 	default:
-		err = create_qp_common(dev, pd, init_attr, udata, qp);
+		err = create_qp_common(dev, pd, init_attr,
+				       (udata) ? &ucmd : NULL, udata, qp);
 	}
 	if (err) {
 		mlx5_ib_dbg(dev, "create_qp_common failed\n");
-- 
2.25.2


* [PATCH rdma-next 12/18] RDMA/mlx5: Initial separation of RAW_PACKET QP from common flow
From: Leon Romanovsky @ 2020-04-20 15:10 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Create an initial, separate function for the IB_QPT_RAW_PACKET flow.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index ac65fc37b805..f50a006472c2 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1634,13 +1634,13 @@ static void destroy_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *q
 			     to_mpd(qp->ibqp.pd)->uid);
 }
 
-static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
-				 struct ib_pd *pd,
+static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 				 struct ib_qp_init_attr *init_attr,
 				 struct ib_udata *udata)
 {
 	struct mlx5_ib_ucontext *mucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
+	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_ib_create_qp_resp resp = {};
 	int inlen;
 	int outlen;
@@ -1996,9 +1996,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (mlx5_st < 0)
 		return -EINVAL;
 
-	if (init_attr->rwq_ind_tbl)
-		return create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
-
 	if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
 		if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
 			mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
@@ -2712,6 +2709,18 @@ static size_t process_udata_size(struct ib_qp_init_attr *attr,
 	return ucmd;
 }
 
+static int create_raw_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
+			 struct ib_qp_init_attr *attr,
+			 struct mlx5_ib_create_qp *ucmd, struct ib_udata *udata)
+{
+	struct mlx5_ib_dev *dev = to_mdev(pd->device);
+
+	if (attr->rwq_ind_tbl)
+		return create_rss_raw_qp_tir(pd, qp, attr, udata);
+
+	return create_qp_common(dev, pd, attr, ucmd, udata, qp);
+}
+
 struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 				struct ib_qp_init_attr *init_attr,
 				struct ib_udata *udata)
@@ -2768,6 +2777,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	case IB_QPT_DRIVER:
 		err = create_driver_qp(pd, qp, init_attr, &ucmd, udata);
 		break;
+	case IB_QPT_RAW_PACKET:
+		err = create_raw_qp(pd, qp, init_attr, &ucmd, udata);
+		break;
 	default:
 		err = create_qp_common(dev, pd, init_attr,
 				       (udata) ? &ucmd : NULL, udata, qp);
-- 
2.25.2


* [PATCH rdma-next 13/18] RDMA/mlx5: Delete create QP flags obfuscation
From: Leon Romanovsky @ 2020-04-20 15:11 UTC
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

There is no point in redefining the stable create flags that are
exposed to users. Their values won't change, and they are equal to the
ones used by mlx5. Delete the mlx5 definitions and use the IB/core
fields.

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/devx.c    |  2 +-
 drivers/infiniband/hw/mlx5/flow.c    |  2 +-
 drivers/infiniband/hw/mlx5/main.c    |  9 ++--
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 21 +-------
 drivers/infiniband/hw/mlx5/qp.c      | 80 ++++++++++++++--------------
 5 files changed, 49 insertions(+), 65 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index 35b98c2d64d5..1d7feed6d3cb 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -615,7 +615,7 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
 		enum ib_qp_type	qp_type = qp->ibqp.qp_type;
 
 		if (qp_type == IB_QPT_RAW_PACKET ||
-		    (qp->flags & MLX5_IB_QP_UNDERLAY)) {
+		    (qp->flags & IB_QP_CREATE_SOURCE_QPN)) {
 			struct mlx5_ib_raw_packet_qp *raw_packet_qp =
 							 &qp->raw_packet_qp;
 			struct mlx5_ib_rq *rq = &raw_packet_qp->rq;
diff --git a/drivers/infiniband/hw/mlx5/flow.c b/drivers/infiniband/hw/mlx5/flow.c
index 6111f8162e5f..e35cc3b7901c 100644
--- a/drivers/infiniband/hw/mlx5/flow.c
+++ b/drivers/infiniband/hw/mlx5/flow.c
@@ -142,7 +142,7 @@ static int get_dests(struct uverbs_attr_bundle *attrs,
 			return -EINVAL;
 
 		mqp = to_mqp(*qp);
-		if (mqp->flags & MLX5_IB_QP_RSS)
+		if (mqp->is_rss)
 			*dest_id = mqp->rss_qp.tirn;
 		else
 			*dest_id = mqp->raw_packet_qp.rq.tirn;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 689b9dddda26..b581c0514606 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -3945,15 +3945,16 @@ static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
 		dst->type = MLX5_FLOW_DESTINATION_TYPE_PORT;
 	} else {
 		dst->type = MLX5_FLOW_DESTINATION_TYPE_TIR;
-		if (mqp->flags & MLX5_IB_QP_RSS)
+		if (mqp->is_rss)
 			dst->tir_num = mqp->rss_qp.tirn;
 		else
 			dst->tir_num = mqp->raw_packet_qp.rq.tirn;
 	}
 
 	if (flow_attr->type == IB_FLOW_ATTR_NORMAL) {
-		underlay_qpn = (mqp->flags & MLX5_IB_QP_UNDERLAY) ?
-			mqp->underlay_qpn : 0;
+		underlay_qpn = (mqp->flags & IB_QP_CREATE_SOURCE_QPN) ?
+				       mqp->underlay_qpn :
+				       0;
 		handler = _create_flow_rule(dev, ft_prio, flow_attr,
 					    dst, underlay_qpn, ucmd);
 	} else if (flow_attr->type == IB_FLOW_ATTR_ALL_DEFAULT ||
@@ -4419,7 +4420,7 @@ static int mlx5_ib_mcg_attach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
 	uid = ibqp->pd ?
 		to_mpd(ibqp->pd)->uid : 0;
 
-	if (mqp->flags & MLX5_IB_QP_UNDERLAY) {
+	if (mqp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		mlx5_ib_dbg(dev, "Attaching a multi cast group to underlay QP is not supported\n");
 		return -EOPNOTSUPP;
 	}
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 550b044c670e..4492630e7638 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -450,7 +450,8 @@ struct mlx5_ib_qp {
 	int			scat_cqe;
 	int			max_inline_data;
 	struct mlx5_bf	        bf;
-	int			has_rq;
+	u8			has_rq:1;
+	u8			is_rss:1;
 
 	/* only for user space QPs. For kernel
 	 * we have it from the bf object
@@ -482,24 +483,6 @@ struct mlx5_ib_cq_buf {
 	int			nent;
 };
 
-enum mlx5_ib_qp_flags {
-	MLX5_IB_QP_LSO                          = IB_QP_CREATE_IPOIB_UD_LSO,
-	MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK     = IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
-	MLX5_IB_QP_CROSS_CHANNEL            = IB_QP_CREATE_CROSS_CHANNEL,
-	MLX5_IB_QP_MANAGED_SEND             = IB_QP_CREATE_MANAGED_SEND,
-	MLX5_IB_QP_MANAGED_RECV             = IB_QP_CREATE_MANAGED_RECV,
-	MLX5_IB_QP_SIGNATURE_HANDLING           = 1 << 5,
-	/* QP uses 1 as its source QP number */
-	MLX5_IB_QP_SQPN_QP1			= 1 << 6,
-	MLX5_IB_QP_CAP_SCATTER_FCS		= 1 << 7,
-	MLX5_IB_QP_RSS				= 1 << 8,
-	MLX5_IB_QP_CVLAN_STRIPPING		= 1 << 9,
-	MLX5_IB_QP_UNDERLAY			= 1 << 10,
-	MLX5_IB_QP_PCI_WRITE_END_PADDING	= 1 << 11,
-	MLX5_IB_QP_TUNNEL_OFFLOAD		= 1 << 12,
-	MLX5_IB_QP_PACKET_BASED_CREDIT		= 1 << 13,
-};
-
 struct mlx5_umr_wr {
 	struct ib_send_wr		wr;
 	u64				virt_addr;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index f50a006472c2..6ae2ef82649e 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -596,7 +596,7 @@ static int set_user_buf_size(struct mlx5_ib_dev *dev,
 	}
 
 	if (attr->qp_type == IB_QPT_RAW_PACKET ||
-	    qp->flags & MLX5_IB_QP_UNDERLAY) {
+	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		base->ubuffer.buf_size = qp->rq.wqe_cnt << qp->rq.wqe_shift;
 		qp->raw_packet_qp.sq.ubuffer.buf_size = qp->sq.wqe_cnt << 6;
 	} else {
@@ -951,7 +951,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		bfregn = MLX5_IB_INVALID_BFREG;
 		break;
 	case 0:
-		if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
+		if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
 			return -EINVAL;
 		bfregn = alloc_bfreg(dev, &context->bfregi);
 		if (bfregn < 0)
@@ -1169,7 +1169,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 
 	if (init_attr->create_flags & MLX5_IB_QP_CREATE_SQPN_QP1) {
 		MLX5_SET(qpc, qpc, deth_sqpn, 1);
-		qp->flags |= MLX5_IB_QP_SQPN_QP1;
+		qp->flags |= MLX5_IB_QP_CREATE_SQPN_QP1;
 	}
 
 	mlx5_fill_page_frag_array(&qp->buf,
@@ -1251,7 +1251,7 @@ static int create_raw_packet_qp_tis(struct mlx5_ib_dev *dev,
 
 	MLX5_SET(create_tis_in, in, uid, to_mpd(pd)->uid);
 	MLX5_SET(tisc, tisc, transport_domain, tdn);
-	if (qp->flags & MLX5_IB_QP_UNDERLAY)
+	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
 		MLX5_SET(tisc, tisc, underlay_qpn, qp->underlay_qpn);
 
 	return mlx5_core_create_tis(dev->mdev, in, &sq->tisn);
@@ -1400,7 +1400,7 @@ static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
 	MLX5_SET(rqc, rqc, user_index, MLX5_GET(qpc, qpc, user_index));
 	MLX5_SET(rqc, rqc, cqn, MLX5_GET(qpc, qpc, cqn_rcv));
 
-	if (mqp->flags & MLX5_IB_QP_CAP_SCATTER_FCS)
+	if (mqp->flags & IB_QP_CREATE_SCATTER_FCS)
 		MLX5_SET(rqc, rqc, scatter_fcs, 1);
 
 	wq = MLX5_ADDR_OF(rqc, rqc, wq);
@@ -1538,9 +1538,9 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	if (qp->rq.wqe_cnt) {
 		rq->base.container_mibqp = qp;
 
-		if (qp->flags & MLX5_IB_QP_CVLAN_STRIPPING)
+		if (qp->flags & IB_QP_CREATE_CVLAN_STRIPPING)
 			rq->flags |= MLX5_IB_RQ_CVLAN_STRIPPING;
-		if (qp->flags & MLX5_IB_QP_PCI_WRITE_END_PADDING)
+		if (qp->flags & IB_QP_CREATE_PCI_WRITE_END_PADDING)
 			rq->flags |= MLX5_IB_RQ_PCI_WRITE_END_PADDING;
 		err = create_raw_packet_qp_rq(dev, rq, in, inlen, pd);
 		if (err)
@@ -1878,7 +1878,7 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	kvfree(in);
 	/* qpn is reserved for that QP */
 	qp->trans_qp.base.mqp.qpn = 0;
-	qp->flags |= MLX5_IB_QP_RSS;
+	qp->is_rss = true;
 	return 0;
 
 err_copy:
@@ -2001,7 +2001,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
 			return -EINVAL;
 		} else {
-			qp->flags |= MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK;
+			qp->flags |= IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK;
 		}
 	}
 
@@ -2014,11 +2014,11 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			return -EINVAL;
 		}
 		if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
-			qp->flags |= MLX5_IB_QP_CROSS_CHANNEL;
+			qp->flags |= IB_QP_CREATE_CROSS_CHANNEL;
 		if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
-			qp->flags |= MLX5_IB_QP_MANAGED_SEND;
+			qp->flags |= IB_QP_CREATE_MANAGED_SEND;
 		if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
-			qp->flags |= MLX5_IB_QP_MANAGED_RECV;
+			qp->flags |= IB_QP_CREATE_MANAGED_RECV;
 	}
 
 	if (init_attr->qp_type == IB_QPT_UD &&
@@ -2038,7 +2038,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
 			return -EOPNOTSUPP;
 		}
-		qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
+		qp->flags |= IB_QP_CREATE_SCATTER_FCS;
 	}
 
 	if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
@@ -2049,7 +2049,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		      MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
 		    (init_attr->qp_type != IB_QPT_RAW_PACKET))
 			return -EOPNOTSUPP;
-		qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING;
+		qp->flags |= IB_QP_CREATE_CVLAN_STRIPPING;
 	}
 
 	if (udata) {
@@ -2106,7 +2106,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 				mlx5_ib_dbg(dev, "packet based credit mode isn't supported\n");
 				return -EOPNOTSUPP;
 			}
-			qp->flags |= MLX5_IB_QP_PACKET_BASED_CREDIT;
+			qp->flags_en |= MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE;
 		}
 
 		if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
@@ -2118,7 +2118,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 				return -EOPNOTSUPP;
 			}
 
-			qp->flags |= MLX5_IB_QP_UNDERLAY;
+			qp->flags |= IB_QP_CREATE_SOURCE_QPN;
 			qp->underlay_qpn = init_attr->source_qpn;
 		}
 	} else {
@@ -2126,7 +2126,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	}
 
 	base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
-		qp->flags & MLX5_IB_QP_UNDERLAY) ?
+		qp->flags & IB_QP_CREATE_SOURCE_QPN) ?
 	       &qp->raw_packet_qp.rq.base :
 	       &qp->trans_qp.base;
 
@@ -2196,16 +2196,16 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (qp->wq_sig)
 		MLX5_SET(qpc, qpc, wq_signature, 1);
 
-	if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
+	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
 		MLX5_SET(qpc, qpc, block_lb_mc, 1);
 
-	if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
+	if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
 		MLX5_SET(qpc, qpc, cd_master, 1);
-	if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
+	if (qp->flags & IB_QP_CREATE_MANAGED_SEND)
 		MLX5_SET(qpc, qpc, cd_slave_send, 1);
-	if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
+	if (qp->flags & IB_QP_CREATE_MANAGED_RECV)
 		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
-	if (qp->flags & MLX5_IB_QP_PACKET_BASED_CREDIT)
+	if (qp->flags_en & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE)
 		MLX5_SET(qpc, qpc, req_e2e_credit_mode, 1);
 	if (qp->scat_cqe && (init_attr->qp_type == IB_QPT_RC ||
 			     init_attr->qp_type == IB_QPT_UC)) {
@@ -2276,7 +2276,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (init_attr->qp_type == IB_QPT_UD &&
 	    (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
 		MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
-		qp->flags |= MLX5_IB_QP_LSO;
+		qp->flags |= IB_QP_CREATE_IPOIB_UD_LSO;
 	}
 
 	if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
@@ -2288,7 +2288,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			MLX5_SET(qpc, qpc, end_padding_mode,
 				 MLX5_WQ_END_PAD_MODE_ALIGN);
 		} else {
-			qp->flags |= MLX5_IB_QP_PCI_WRITE_END_PADDING;
+			qp->flags |= IB_QP_CREATE_PCI_WRITE_END_PADDING;
 		}
 	}
 
@@ -2298,7 +2298,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	}
 
 	if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
-	    qp->flags & MLX5_IB_QP_UNDERLAY) {
+	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd->sq_buf_addr;
 		raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
 		err = create_raw_packet_qp(dev, qp, in, inlen, pd, udata,
@@ -2463,13 +2463,13 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	}
 
 	base = (qp->ibqp.qp_type == IB_QPT_RAW_PACKET ||
-		qp->flags & MLX5_IB_QP_UNDERLAY) ?
+		qp->flags & IB_QP_CREATE_SOURCE_QPN) ?
 	       &qp->raw_packet_qp.rq.base :
 	       &qp->trans_qp.base;
 
 	if (qp->state != IB_QPS_RESET) {
 		if (qp->ibqp.qp_type != IB_QPT_RAW_PACKET &&
-		    !(qp->flags & MLX5_IB_QP_UNDERLAY)) {
+		    !(qp->flags & IB_QP_CREATE_SOURCE_QPN)) {
 			err = mlx5_core_qp_modify(dev, MLX5_CMD_OP_2RST_QP, 0,
 						  NULL, &base->mqp);
 		} else {
@@ -2508,7 +2508,7 @@ static void destroy_qp_common(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
 
 	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET ||
-	    qp->flags & MLX5_IB_QP_UNDERLAY) {
+	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		destroy_raw_packet_qp(dev, qp);
 	} else {
 		err = mlx5_core_destroy_qp(dev, &base->mqp);
@@ -3478,7 +3478,7 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
 	if (!(dev->lag_active && qp_supports_affinity(qp)))
 		return 0;
 
-	if (mqp->flags & MLX5_IB_QP_SQPN_QP1)
+	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
 		tx_affinity = mqp->gsi_lag_port;
 	else if (init)
 		tx_affinity = get_tx_affinity_rr(dev, udata);
@@ -3620,7 +3620,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 	if (is_sqp(ibqp->qp_type)) {
 		context->mtu_msgmax = (IB_MTU_256 << 5) | 8;
 	} else if ((ibqp->qp_type == IB_QPT_UD &&
-		    !(qp->flags & MLX5_IB_QP_UNDERLAY)) ||
+		    !(qp->flags & IB_QP_CREATE_SOURCE_QPN)) ||
 		   ibqp->qp_type == MLX5_IB_QPT_REG_UMR) {
 		context->mtu_msgmax = (IB_MTU_4096 << 5) | 12;
 	} else if (attr_mask & IB_QP_PATH_MTU) {
@@ -3725,7 +3725,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 			       qp->port) - 1;
 
 		/* Underlay port should be used - index 0 function per port */
-		if (qp->flags & MLX5_IB_QP_UNDERLAY)
+		if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
 			port_num = 0;
 
 		if (ibqp->counter)
@@ -3739,7 +3739,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 	if (!ibqp->uobject && cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT)
 		context->sq_crq_size |= cpu_to_be16(1 << 4);
 
-	if (qp->flags & MLX5_IB_QP_SQPN_QP1)
+	if (qp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
 		context->deth_sqpn = cpu_to_be32(1);
 
 	mlx5_cur = to_mlx5_state(cur_state);
@@ -3756,7 +3756,7 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
 	optpar &= opt_mask[mlx5_cur][mlx5_new][mlx5_st];
 
 	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET ||
-	    qp->flags & MLX5_IB_QP_UNDERLAY) {
+	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		struct mlx5_modify_raw_qp_param raw_qp_param = {};
 
 		raw_qp_param.operation = op;
@@ -4052,7 +4052,7 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 		port = attr_mask & IB_QP_PORT ? attr->port_num : qp->port;
 	}
 
-	if (qp->flags & MLX5_IB_QP_UNDERLAY) {
+	if (qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		if (attr_mask & ~(IB_QP_STATE | IB_QP_CUR_STATE)) {
 			mlx5_ib_dbg(dev, "invalid attr_mask 0x%x when underlay QP is used\n",
 				    attr_mask);
@@ -5884,7 +5884,7 @@ int mlx5_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	mutex_lock(&qp->mutex);
 
 	if (qp->ibqp.qp_type == IB_QPT_RAW_PACKET ||
-	    qp->flags & MLX5_IB_QP_UNDERLAY) {
+	    qp->flags & IB_QP_CREATE_SOURCE_QPN) {
 		err = query_raw_packet_qp_state(dev, qp, &raw_packet_qp_state);
 		if (err)
 			goto out;
@@ -5919,16 +5919,16 @@ int mlx5_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	qp_init_attr->cap	     = qp_attr->cap;
 
 	qp_init_attr->create_flags = 0;
-	if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
+	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
 		qp_init_attr->create_flags |= IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK;
 
-	if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
+	if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
 		qp_init_attr->create_flags |= IB_QP_CREATE_CROSS_CHANNEL;
-	if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
+	if (qp->flags & IB_QP_CREATE_MANAGED_SEND)
 		qp_init_attr->create_flags |= IB_QP_CREATE_MANAGED_SEND;
-	if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
+	if (qp->flags & IB_QP_CREATE_MANAGED_RECV)
 		qp_init_attr->create_flags |= IB_QP_CREATE_MANAGED_RECV;
-	if (qp->flags & MLX5_IB_QP_SQPN_QP1)
+	if (qp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
 		qp_init_attr->create_flags |= MLX5_IB_QP_CREATE_SQPN_QP1;
 
 	qp_init_attr->sq_sig_type = qp->sq_signal_bits & MLX5_WQE_CTRL_CQ_UPDATE ?
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (12 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 13/18] RDMA/mlx5: Delete create QP flags obfuscation Leon Romanovsky
@ 2020-04-20 15:11 ` Leon Romanovsky
  2020-04-23 18:53   ` Leon Romanovsky
  2020-04-24 19:51   ` Jason Gunthorpe
  2020-04-20 15:11 ` [PATCH rdma-next 15/18] RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE signature Leon Romanovsky
                   ` (4 subsequent siblings)
  18 siblings, 2 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-20 15:11 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

create_flags is checked in too many places, scattered across all
the code. Consolidate all the checks inside one function, so the
flow is easy to see. As part of this change, delete unreachable
code, because IB/core is responsible for sanitizing the input.
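
The consolidation boils down to a consume-and-clear pattern. A
minimal standalone sketch of it (the helper name and the trimmed-down
signature here are illustrative, not the driver's):

	/* Claim one create flag: record it on the QP when the HW
	 * capability backs it, otherwise leave the bit set so the
	 * caller can detect the unsupported request.
	 */
	static void claim_flag(unsigned int *flags, unsigned int flag,
			       bool supported, unsigned int *qp_flags)
	{
		if (!(*flags & flag))
			return;
		if (supported) {
			*qp_flags |= flag;
			*flags &= ~flag;
		}
	}

Any bit still set after every flag has been offered to the helper is
an unsupported request, and the create flow fails with -EINVAL.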

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 200 ++++++++++++++++----------------
 1 file changed, 101 insertions(+), 99 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 6ae2ef82649e..82152e86c9d6 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1097,17 +1097,9 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 	void *qpc;
 	int err;
 
-	if (init_attr->create_flags & ~(IB_QP_CREATE_INTEGRITY_EN |
-					IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK |
-					IB_QP_CREATE_IPOIB_UD_LSO |
-					IB_QP_CREATE_NETIF_QP |
-					MLX5_IB_QP_CREATE_SQPN_QP1 |
-					MLX5_IB_QP_CREATE_WC_TEST))
-		return -EINVAL;
-
 	if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR)
 		qp->bf.bfreg = &dev->fp_bfreg;
-	else if (init_attr->create_flags & MLX5_IB_QP_CREATE_WC_TEST)
+	else if (qp->flags & MLX5_IB_QP_CREATE_WC_TEST)
 		qp->bf.bfreg = &dev->wc_bfreg;
 	else
 		qp->bf.bfreg = &dev->bfreg;
@@ -1167,10 +1159,8 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 	MLX5_SET(qpc, qpc, fre, 1);
 	MLX5_SET(qpc, qpc, rlky, 1);
 
-	if (init_attr->create_flags & MLX5_IB_QP_CREATE_SQPN_QP1) {
+	if (qp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
 		MLX5_SET(qpc, qpc, deth_sqpn, 1);
-		qp->flags |= MLX5_IB_QP_CREATE_SQPN_QP1;
-	}
 
 	mlx5_fill_page_frag_array(&qp->buf,
 				  (__be64 *)MLX5_ADDR_OF(create_qp_in,
@@ -1657,7 +1647,7 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	size_t required_cmd_sz;
 	u8 lb_flag = 0;
 
-	if (init_attr->create_flags || init_attr->send_cq)
+	if (init_attr->send_cq)
 		return -EINVAL;
 
 	min_resp_len = offsetof(typeof(resp), bfreg_index) + sizeof(resp.bfreg_index);
@@ -1996,62 +1986,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	if (mlx5_st < 0)
 		return -EINVAL;
 
-	if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
-		if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
-			mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
-			return -EINVAL;
-		} else {
-			qp->flags |= IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK;
-		}
-	}
-
-	if (init_attr->create_flags &
-			(IB_QP_CREATE_CROSS_CHANNEL |
-			 IB_QP_CREATE_MANAGED_SEND |
-			 IB_QP_CREATE_MANAGED_RECV)) {
-		if (!MLX5_CAP_GEN(mdev, cd)) {
-			mlx5_ib_dbg(dev, "cross-channel isn't supported\n");
-			return -EINVAL;
-		}
-		if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
-			qp->flags |= IB_QP_CREATE_CROSS_CHANNEL;
-		if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
-			qp->flags |= IB_QP_CREATE_MANAGED_SEND;
-		if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
-			qp->flags |= IB_QP_CREATE_MANAGED_RECV;
-	}
-
-	if (init_attr->qp_type == IB_QPT_UD &&
-	    (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO))
-		if (!MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
-			mlx5_ib_dbg(dev, "ipoib UD lso qp isn't supported\n");
-			return -EOPNOTSUPP;
-		}
-
-	if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
-		if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
-			mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
-			return -EOPNOTSUPP;
-		}
-		if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
-		    !MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
-			mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
-			return -EOPNOTSUPP;
-		}
-		qp->flags |= IB_QP_CREATE_SCATTER_FCS;
-	}
-
 	if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
 		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
 
-	if (init_attr->create_flags & IB_QP_CREATE_CVLAN_STRIPPING) {
-		if (!(MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
-		      MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
-		    (init_attr->qp_type != IB_QPT_RAW_PACKET))
-			return -EOPNOTSUPP;
-		qp->flags |= IB_QP_CREATE_CVLAN_STRIPPING;
-	}
-
 	if (udata) {
 		if (!check_flags_mask(ucmd->flags,
 				      MLX5_QP_FLAG_ALLOW_SCATTER_CQE |
@@ -2108,23 +2045,13 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			}
 			qp->flags_en |= MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE;
 		}
-
-		if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
-			if (init_attr->qp_type != IB_QPT_UD ||
-			    (MLX5_CAP_GEN(dev->mdev, port_type) !=
-			     MLX5_CAP_PORT_TYPE_IB) ||
-			    !mlx5_get_flow_namespace(dev->mdev, MLX5_FLOW_NAMESPACE_BYPASS)) {
-				mlx5_ib_dbg(dev, "Source QP option isn't supported\n");
-				return -EOPNOTSUPP;
-			}
-
-			qp->flags |= IB_QP_CREATE_SOURCE_QPN;
-			qp->underlay_qpn = init_attr->source_qpn;
-		}
 	} else {
 		qp->wq_sig = !!wq_signature;
 	}
 
+	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
+		qp->underlay_qpn = init_attr->source_qpn;
+
 	base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
 		qp->flags & IB_QP_CREATE_SOURCE_QPN) ?
 	       &qp->raw_packet_qp.rq.base :
@@ -2153,11 +2080,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 					    ucmd->sq_wqe_count, max_wqes);
 				return -EINVAL;
 			}
-			if (init_attr->create_flags &
-			    MLX5_IB_QP_CREATE_SQPN_QP1) {
-				mlx5_ib_dbg(dev, "user-space is not allowed to create UD QPs spoofing as QP1\n");
-				return -EINVAL;
-			}
 			err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
 					     &resp, &inlen, base);
 			if (err)
@@ -2273,23 +2195,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, user_index, uidx);
 
 	/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
-	if (init_attr->qp_type == IB_QPT_UD &&
-	    (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
+	if (qp->flags & IB_QP_CREATE_IPOIB_UD_LSO)
 		MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
-		qp->flags |= IB_QP_CREATE_IPOIB_UD_LSO;
-	}
 
-	if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
-		if (!MLX5_CAP_GEN(dev->mdev, end_pad)) {
-			mlx5_ib_dbg(dev, "scatter end padding is not supported\n");
-			err = -EOPNOTSUPP;
-			goto err;
-		} else if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
-			MLX5_SET(qpc, qpc, end_padding_mode,
-				 MLX5_WQ_END_PAD_MODE_ALIGN);
-		} else {
-			qp->flags |= IB_QP_CREATE_PCI_WRITE_END_PADDING;
-		}
+	if (qp->flags & IB_QP_CREATE_PCI_WRITE_END_PADDING &&
+	    init_attr->qp_type != IB_QPT_RAW_PACKET) {
+		MLX5_SET(qpc, qpc, end_padding_mode,
+			 MLX5_WQ_END_PAD_MODE_ALIGN);
+		/* Special case to clean flag */
+		qp->flags &= ~IB_QP_CREATE_PCI_WRITE_END_PADDING;
 	}
 
 	if (inlen < 0) {
@@ -2670,6 +2584,91 @@ static int process_vendor_flags(struct mlx5_ib_qp *qp,
 	return 0;
 }
 
+static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
+				bool cond, struct mlx5_ib_qp *qp)
+{
+	if (!(*flags & flag))
+		return;
+
+	if (cond) {
+		qp->flags |= flag;
+		*flags &= ~flag;
+		return;
+	}
+
+	if (flag == MLX5_IB_QP_CREATE_WC_TEST) {
+		/*
+		 * Special case, if condition didn't meet, it won't be error,
+		 * just different in-kernel flow.
+		 */
+		*flags &= ~MLX5_IB_QP_CREATE_WC_TEST;
+		return;
+	}
+	mlx5_ib_dbg(dev, "Verbs create QP flag 0x%X is not supported\n", flag);
+}
+
+static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+				struct ib_qp_init_attr *attr)
+{
+	enum ib_qp_type qp_type = attr->qp_type;
+	struct mlx5_core_dev *mdev = dev->mdev;
+	int create_flags = attr->create_flags;
+	bool cond;
+
+	if (qp->qp_sub_type == MLX5_IB_QPT_DCT)
+		return (create_flags) ? -EINVAL : 0;
+
+	if (qp_type == IB_QPT_RAW_PACKET)
+		return (attr->rwq_ind_tbl && create_flags) ? -EINVAL : 0;
+
+	process_create_flag(dev, &create_flags,
+			    IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
+			    MLX5_CAP_GEN(mdev, block_lb_mc), qp);
+	process_create_flag(dev, &create_flags, IB_QP_CREATE_CROSS_CHANNEL,
+			    MLX5_CAP_GEN(mdev, cd), qp);
+	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_SEND,
+			    MLX5_CAP_GEN(mdev, cd), qp);
+	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_RECV,
+			    MLX5_CAP_GEN(mdev, cd), qp);
+
+	if (qp_type == IB_QPT_UD) {
+		process_create_flag(dev, &create_flags,
+				    IB_QP_CREATE_IPOIB_UD_LSO,
+				    MLX5_CAP_GEN(mdev, ipoib_basic_offloads),
+				    qp);
+		cond = MLX5_CAP_GEN(mdev, port_type) == MLX5_CAP_PORT_TYPE_IB;
+		process_create_flag(dev, &create_flags, IB_QP_CREATE_SOURCE_QPN,
+				    cond, qp);
+	}
+
+	if (qp_type == IB_QPT_RAW_PACKET) {
+		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
+		       MLX5_CAP_ETH(mdev, scatter_fcs);
+		process_create_flag(dev, &create_flags,
+				    IB_QP_CREATE_SCATTER_FCS, cond, qp);
+
+		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
+		       MLX5_CAP_ETH(mdev, vlan_cap);
+		process_create_flag(dev, &create_flags,
+				    IB_QP_CREATE_CVLAN_STRIPPING, cond, qp);
+	}
+
+	process_create_flag(dev, &create_flags,
+			    IB_QP_CREATE_PCI_WRITE_END_PADDING,
+			    MLX5_CAP_GEN(mdev, end_pad), qp);
+
+	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_WC_TEST,
+			    qp_type != MLX5_IB_QPT_REG_UMR, qp);
+	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_SQPN_QP1,
+			    true, qp);
+
+	if (create_flags)
+		mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n",
+			    create_flags);
+
+	return (create_flags) ? -EINVAL : 0;
+}
+
 static int create_driver_qp(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 			    struct ib_qp_init_attr *attr,
 			    struct mlx5_ib_create_qp *ucmd,
@@ -2769,6 +2768,9 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 		if (err)
 			goto free_qp;
 	}
+	err = process_create_flags(dev, qp, init_attr);
+	if (err)
+		goto free_qp;
 
 	if (init_attr->qp_type == IB_QPT_XRC_TGT)
 		xrcdn = to_mxrcd(init_attr->xrcd)->xrcdn;
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH rdma-next 15/18] RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE signature
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (13 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place Leon Romanovsky
@ 2020-04-20 15:11 ` Leon Romanovsky
  2020-04-20 15:11 ` [PATCH rdma-next 16/18] RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags Leon Romanovsky
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-20 15:11 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

MLX5_QP_FLAG_SIGNATURE is exposed to the users, but in the kernel
the create_qp flow treated it differently from the other
MLX5_QP_FLAG_*s. Fix it by ditching the wq_sig boolean variable and
using the general flags_en mechanism.
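
As a worked example of the (unchanged) RQ sizing math, assuming the
16-byte struct mlx5_wqe_data_seg: a user RQ created with
rq_wqe_shift = 6 has 64-byte WQEs, i.e. room for 64 / 16 = 4 data
segments; with MLX5_QP_FLAG_SIGNATURE set, one segment is reserved
for the signature, so qp->rq.max_gs = 4 - 1 = 3.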

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  1 -
 drivers/infiniband/hw/mlx5/odp.c     |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 36 +++++++++++++++++-----------
 3 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 4492630e7638..251380ff5706 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -446,7 +446,6 @@ struct mlx5_ib_qp {
 	u32			flags;
 	u8			port;
 	u8			state;
-	int			wq_sig;
 	int			scat_cqe;
 	int			max_inline_data;
 	struct mlx5_bf	        bf;
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 16af1105cfcf..e4759310c0e2 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -1190,7 +1190,7 @@ static int mlx5_ib_mr_responder_pfault_handler_rq(struct mlx5_ib_dev *dev,
 	struct mlx5_ib_wq *wq = &qp->rq;
 	int wqe_size = 1 << wq->wqe_shift;
 
-	if (qp->wq_sig) {
+	if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE) {
 		mlx5_ib_err(dev, "ODP fault with WQE signatures is not supported\n");
 		return -EFAULT;
 	}
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 82152e86c9d6..a3c693ce1865 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -41,9 +41,6 @@
 #include "cmd.h"
 #include "qp.h"
 
-/* not supported currently */
-static int wq_signature;
-
 enum {
 	MLX5_IB_ACK_REQ_FREQ	= 8,
 };
@@ -392,17 +389,26 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
 		cap->max_recv_wr = 0;
 		cap->max_recv_sge = 0;
 	} else {
+		int wq_sig = !!(qp->flags_en & MLX5_QP_FLAG_SIGNATURE);
+
 		if (ucmd) {
 			qp->rq.wqe_cnt = ucmd->rq_wqe_count;
 			if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift))
 				return -EINVAL;
 			qp->rq.wqe_shift = ucmd->rq_wqe_shift;
-			if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig)
+			if ((1 << qp->rq.wqe_shift) /
+				    sizeof(struct mlx5_wqe_data_seg) <
+			    wq_sig)
 				return -EINVAL;
-			qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig;
+			qp->rq.max_gs =
+				(1 << qp->rq.wqe_shift) /
+					sizeof(struct mlx5_wqe_data_seg) -
+				wq_sig;
 			qp->rq.max_post = qp->rq.wqe_cnt;
 		} else {
-			wqe_size = qp->wq_sig ? sizeof(struct mlx5_wqe_signature_seg) : 0;
+			wqe_size =
+				wq_sig ? sizeof(struct mlx5_wqe_signature_seg) :
+					 0;
 			wqe_size += cap->max_recv_sge * sizeof(struct mlx5_wqe_data_seg);
 			wqe_size = roundup_pow_of_two(wqe_size);
 			wq_size = roundup_pow_of_two(cap->max_recv_wr) * wqe_size;
@@ -416,7 +422,10 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
 				return -EINVAL;
 			}
 			qp->rq.wqe_shift = ilog2(wqe_size);
-			qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig;
+			qp->rq.max_gs =
+				(1 << qp->rq.wqe_shift) /
+					sizeof(struct mlx5_wqe_data_seg) -
+				wq_sig;
 			qp->rq.max_post = qp->rq.wqe_cnt;
 		}
 	}
@@ -2008,7 +2017,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		if (err)
 			return err;
 
-		qp->wq_sig = !!(ucmd->flags & MLX5_QP_FLAG_SIGNATURE);
+		if (ucmd->flags & MLX5_QP_FLAG_SIGNATURE)
+			qp->flags_en |= MLX5_QP_FLAG_SIGNATURE;
 		if (MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
 			qp->scat_cqe =
 				!!(ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE);
@@ -2045,8 +2055,6 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			}
 			qp->flags_en |= MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE;
 		}
-	} else {
-		qp->wq_sig = !!wq_signature;
 	}
 
 	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
@@ -2115,7 +2123,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, latency_sensitive, 1);
 
 
-	if (qp->wq_sig)
+	if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE)
 		MLX5_SET(qpc, qpc, wq_signature, 1);
 
 	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
@@ -5050,7 +5058,7 @@ static void finish_wqe(struct mlx5_ib_qp *qp,
 					     mlx5_opcode | ((u32)opmod << 24));
 	ctrl->qpn_ds = cpu_to_be32(size | (qp->trans_qp.base.mqp.qpn << 8));
 	ctrl->fm_ce_se |= fence;
-	if (unlikely(qp->wq_sig))
+	if (unlikely(qp->flags_en & MLX5_QP_FLAG_SIGNATURE))
 		ctrl->signature = wq_sig(ctrl);
 
 	qp->sq.wrid[idx] = wr_id;
@@ -5502,7 +5510,7 @@ static int _mlx5_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 		}
 
 		scat = mlx5_frag_buf_get_wqe(&qp->rq.fbc, ind);
-		if (qp->wq_sig)
+		if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE)
 			scat++;
 
 		for (i = 0; i < wr->num_sge; i++)
@@ -5514,7 +5522,7 @@ static int _mlx5_ib_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 			scat[i].addr       = 0;
 		}
 
-		if (qp->wq_sig) {
+		if (qp->flags_en & MLX5_QP_FLAG_SIGNATURE) {
 			sig = (struct mlx5_rwqe_sig *)scat;
 			set_sig_seg(sig, (qp->rq.max_gs + 1) << 2);
 		}
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH rdma-next 16/18] RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (14 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 15/18] RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE signature Leon Romanovsky
@ 2020-04-20 15:11 ` Leon Romanovsky
  2020-04-20 15:11 ` [PATCH rdma-next 17/18] RDMA/mlx5: Return all configured create flags through query QP Leon Romanovsky
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-20 15:11 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

In a similar way to wq_sig, scat_cqe was treated differently from
the other create QP vendor flags. Change it to be handled like the
other flags and use the flags_en mechanism.
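
For example, once the flag is granted, a 128-byte receive CQE selects
MLX5_RES_SCAT_DATA64_CQE and smaller CQEs fall back to
MLX5_RES_SCAT_DATA32_CQE, as the hunk below shows.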

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  1 -
 drivers/infiniband/hw/mlx5/qp.c      | 17 ++++++++++-------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 251380ff5706..9451aa836df0 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -446,7 +446,6 @@ struct mlx5_ib_qp {
 	u32			flags;
 	u8			port;
 	u8			state;
-	int			scat_cqe;
 	int			max_inline_data;
 	struct mlx5_bf	        bf;
 	u8			has_rq:1;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index a3c693ce1865..8facbaa0ce5a 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2019,9 +2019,10 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 		if (ucmd->flags & MLX5_QP_FLAG_SIGNATURE)
 			qp->flags_en |= MLX5_QP_FLAG_SIGNATURE;
-		if (MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
-			qp->scat_cqe =
-				!!(ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE);
+		if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE &&
+		    MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
+			qp->flags_en |= MLX5_QP_FLAG_SCATTER_CQE;
+
 		if (ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
 			    !tunnel_offload_supported(mdev)) {
@@ -2137,8 +2138,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
 	if (qp->flags_en & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE)
 		MLX5_SET(qpc, qpc, req_e2e_credit_mode, 1);
-	if (qp->scat_cqe && (init_attr->qp_type == IB_QPT_RC ||
-			     init_attr->qp_type == IB_QPT_UC)) {
+	if ((qp->flags_en & MLX5_QP_FLAG_SCATTER_CQE) &&
+	    (init_attr->qp_type == IB_QPT_RC ||
+	     init_attr->qp_type == IB_QPT_UC)) {
+		int rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
 
@@ -2146,8 +2148,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			 rcqe_sz == 128 ? MLX5_RES_SCAT_DATA64_CQE :
 					  MLX5_RES_SCAT_DATA32_CQE);
 	}
-	if (qp->scat_cqe && (qp->qp_sub_type == MLX5_IB_QPT_DCI ||
-			     init_attr->qp_type == IB_QPT_RC))
+	if ((qp->flags_en & MLX5_QP_FLAG_SCATTER_CQE) &&
+	    (qp->qp_sub_type == MLX5_IB_QPT_DCI ||
+	     init_attr->qp_type == IB_QPT_RC))
 		configure_requester_scat_cqe(dev, init_attr, ucmd, qpc);
 
 	if (qp->rq.wqe_cnt) {
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH rdma-next 17/18] RDMA/mlx5: Return all configured create flags through query QP
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (15 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 16/18] RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags Leon Romanovsky
@ 2020-04-20 15:11 ` Leon Romanovsky
  2020-04-20 15:11 ` [PATCH rdma-next 18/18] RDMA/mlx5: Process all vendor flags in one place Leon Romanovsky
  2020-04-24 19:54 ` [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Jason Gunthorpe
  18 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-20 15:11 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

The "flags" field in struct mlx5_ib_qp contains all UAPI flags
configured at the create QP stage. Return all the data as is
without masking.
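
An in-kernel caller would then see the cached bits directly; a
hypothetical sketch (error handling trimmed):

	struct ib_qp_attr attr = {};
	struct ib_qp_init_attr init_attr = {};

	if (!ib_query_qp(ibqp, &attr, 0, &init_attr))
		/* init_attr.create_flags mirrors qp->flags verbatim */
		pr_debug("create_flags 0x%x\n", init_attr.create_flags);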

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  1 +
 drivers/infiniband/hw/mlx5/qp.c      | 13 +------------
 2 files changed, 2 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 9451aa836df0..cb2331b03f7b 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -443,6 +443,7 @@ struct mlx5_ib_qp {
 	/* serialize qp state modifications
 	 */
 	struct mutex		mutex;
+	/* cached variant of create_flags from struct ib_qp_init_attr */
 	u32			flags;
 	u8			port;
 	u8			state;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8facbaa0ce5a..15c476e858c5 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -5931,18 +5931,7 @@ int mlx5_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 
 	qp_init_attr->cap	     = qp_attr->cap;
 
-	qp_init_attr->create_flags = 0;
-	if (qp->flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK)
-		qp_init_attr->create_flags |= IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK;
-
-	if (qp->flags & IB_QP_CREATE_CROSS_CHANNEL)
-		qp_init_attr->create_flags |= IB_QP_CREATE_CROSS_CHANNEL;
-	if (qp->flags & IB_QP_CREATE_MANAGED_SEND)
-		qp_init_attr->create_flags |= IB_QP_CREATE_MANAGED_SEND;
-	if (qp->flags & IB_QP_CREATE_MANAGED_RECV)
-		qp_init_attr->create_flags |= IB_QP_CREATE_MANAGED_RECV;
-	if (qp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
-		qp_init_attr->create_flags |= MLX5_IB_QP_CREATE_SQPN_QP1;
+	qp_init_attr->create_flags = qp->flags;
 
 	qp_init_attr->sq_sig_type = qp->sq_signal_bits & MLX5_WQE_CTRL_CQ_UPDATE ?
 		IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH rdma-next 18/18] RDMA/mlx5: Process all vendor flags in one place
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (16 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 17/18] RDMA/mlx5: Return all configured create flags through query QP Leon Romanovsky
@ 2020-04-20 15:11 ` Leon Romanovsky
  2020-04-24 19:54 ` [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Jason Gunthorpe
  18 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-20 15:11 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Leon Romanovsky, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Check that vendor flags provided through ucmd are valid.
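
The check mirrors the earlier create-flags consolidation: each
MLX5_QP_FLAG_* bit is consumed by process_vendor_flag() when the
matching capability is present, and any bit left set in the copied
ucmd flags fails the create with -EINVAL.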

Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 156 +++++++++++++++-----------------
 1 file changed, 71 insertions(+), 85 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 15c476e858c5..eb9e1944263c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1430,13 +1430,6 @@ static void destroy_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
 	mlx5_core_destroy_rq_tracked(dev, &rq->base.mqp);
 }
 
-static bool tunnel_offload_supported(struct mlx5_core_dev *dev)
-{
-	return  (MLX5_CAP_ETH(dev, tunnel_stateless_vxlan) ||
-		 MLX5_CAP_ETH(dev, tunnel_stateless_gre) ||
-		 MLX5_CAP_ETH(dev, tunnel_stateless_geneve_rx));
-}
-
 static void destroy_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
 				      struct mlx5_ib_rq *rq,
 				      u32 qp_flags_en,
@@ -1693,27 +1686,20 @@ static int create_rss_raw_qp_tir(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS &&
-	    !tunnel_offload_supported(dev->mdev)) {
-		mlx5_ib_dbg(dev, "tunnel offloads isn't supported\n");
-		return -EOPNOTSUPP;
-	}
-
 	if (ucmd.rx_hash_fields_mask & MLX5_RX_HASH_INNER &&
 	    !(ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS)) {
 		mlx5_ib_dbg(dev, "Tunnel offloads must be set for inner RSS\n");
 		return -EOPNOTSUPP;
 	}
 
-	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC || dev->is_rep) {
-		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+	if (dev->is_rep)
 		qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
-	}
 
-	if (ucmd.flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
+	if (qp->flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC)
+		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+
+	if (qp->flags_en & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC)
 		lb_flag |= MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST;
-		qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC;
-	}
 
 	err = ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)));
 	if (err) {
@@ -1959,11 +1945,6 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
 	return atomic_mode;
 }
 
-static inline bool check_flags_mask(uint64_t input, uint64_t supported)
-{
-	return (input & ~supported) == 0;
-}
-
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct mlx5_ib_create_qp *ucmd,
@@ -1999,63 +1980,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
 
 	if (udata) {
-		if (!check_flags_mask(ucmd->flags,
-				      MLX5_QP_FLAG_ALLOW_SCATTER_CQE |
-				      MLX5_QP_FLAG_BFREG_INDEX |
-				      MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE |
-				      MLX5_QP_FLAG_SCATTER_CQE |
-				      MLX5_QP_FLAG_SIGNATURE |
-				      MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC |
-				      MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
-				      MLX5_QP_FLAG_TUNNEL_OFFLOADS |
-				      MLX5_QP_FLAG_UAR_PAGE_INDEX |
-				      MLX5_QP_FLAG_TYPE_DCI |
-				      MLX5_QP_FLAG_TYPE_DCT))
-			return -EINVAL;
-
 		err = get_qp_user_index(ucontext, ucmd, udata->inlen, &uidx);
 		if (err)
 			return err;
-
-		if (ucmd->flags & MLX5_QP_FLAG_SIGNATURE)
-			qp->flags_en |= MLX5_QP_FLAG_SIGNATURE;
-		if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE &&
-		    MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
-			qp->flags_en |= MLX5_QP_FLAG_SCATTER_CQE;
-
-		if (ucmd->flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
-			if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
-			    !tunnel_offload_supported(mdev)) {
-				mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
-				return -EOPNOTSUPP;
-			}
-			qp->flags_en |= MLX5_QP_FLAG_TUNNEL_OFFLOADS;
-		}
-
-		if (ucmd->flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC) {
-			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
-				mlx5_ib_dbg(dev, "Self-LB UC isn't supported\n");
-				return -EOPNOTSUPP;
-			}
-			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC;
-		}
-
-		if (ucmd->flags & MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC) {
-			if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
-				mlx5_ib_dbg(dev, "Self-LB UM isn't supported\n");
-				return -EOPNOTSUPP;
-			}
-			qp->flags_en |= MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC;
-		}
-
-		if (ucmd->flags & MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE) {
-			if (init_attr->qp_type != IB_QPT_RC ||
-				!MLX5_CAP_GEN(dev->mdev, qp_packet_based)) {
-				mlx5_ib_dbg(dev, "packet based credit mode isn't supported\n");
-				return -EOPNOTSUPP;
-			}
-			qp->flags_en |= MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE;
-		}
 	}
 
 	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
@@ -2474,7 +2401,7 @@ static int create_dct(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	MLX5_SET64(dctc, dctc, dc_access_key, ucmd->access_key);
 	MLX5_SET(dctc, dctc, user_index, uidx);
 
-	if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE) {
+	if (qp->flags_en & MLX5_QP_FLAG_SCATTER_CQE) {
 		int rcqe_sz = mlx5_ib_get_cqe_size(attr->recv_cq);
 
 		if (rcqe_sz == 128)
@@ -2577,22 +2504,81 @@ static int check_valid_flow(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	return 0;
 }
 
-static int process_vendor_flags(struct mlx5_ib_qp *qp,
+static void process_vendor_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
+				bool cond, struct mlx5_ib_qp *qp)
+{
+	if (!(*flags & flag))
+		return;
+
+	if (cond) {
+		qp->flags_en |= flag;
+		*flags &= ~flag;
+		return;
+	}
+
+	if (flag == MLX5_QP_FLAG_SCATTER_CQE) {
+		/*
+		 * We don't return error if this flag was provided,
+		 * and mlx5 doesn't have right capability.
+		 */
+		*flags &= ~MLX5_QP_FLAG_SCATTER_CQE;
+		return;
+	}
+	mlx5_ib_dbg(dev, "Vendor create QP flag 0x%X is not supported\n", flag);
+}
+
+static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 				struct ib_qp_init_attr *attr,
 				struct mlx5_ib_create_qp *ucmd)
 {
-	switch (ucmd->flags & (MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI)) {
+	struct mlx5_core_dev *mdev = dev->mdev;
+	int flags = ucmd->flags;
+	bool cond;
+
+	switch (flags & (MLX5_QP_FLAG_TYPE_DCT | MLX5_QP_FLAG_TYPE_DCI)) {
 	case MLX5_QP_FLAG_TYPE_DCI:
 		qp->qp_sub_type = MLX5_IB_QPT_DCI;
 		break;
 	case MLX5_QP_FLAG_TYPE_DCT:
 		qp->qp_sub_type = MLX5_IB_QPT_DCT;
-		break;
+		fallthrough;
 	default:
+		break;
+	}
+
+	if (attr->qp_type == IB_QPT_DRIVER && !qp->qp_sub_type)
 		return -EINVAL;
+
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCI, true, qp);
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TYPE_DCT, true, qp);
+
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_SIGNATURE, true, qp);
+	process_vendor_flag(dev, &flags, MLX5_QP_FLAG_SCATTER_CQE,
+			    MLX5_CAP_GEN(mdev, sctr_data_cqe), qp);
+
+	if (attr->qp_type == IB_QPT_RAW_PACKET) {
+		cond = MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) ||
+		       MLX5_CAP_ETH(mdev, tunnel_stateless_gre) ||
+		       MLX5_CAP_ETH(mdev, tunnel_stateless_geneve_rx);
+		process_vendor_flag(dev, &flags, MLX5_QP_FLAG_TUNNEL_OFFLOADS,
+				    cond, qp);
+		process_vendor_flag(dev, &flags,
+				    MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC, true,
+				    qp);
+		process_vendor_flag(dev, &flags,
+				    MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC, true,
+				    qp);
 	}
 
-	return 0;
+	if (attr->qp_type == IB_QPT_RC)
+		process_vendor_flag(dev, &flags,
+				    MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE,
+				    MLX5_CAP_GEN(mdev, qp_packet_based), qp);
+
+	if (flags)
+		mlx5_ib_dbg(dev, "udata has unsupported flags 0x%X\n", flags);
+
+	return (flags) ? -EINVAL : 0;
 }
 
 static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag,
@@ -2774,8 +2760,8 @@ struct ib_qp *mlx5_ib_create_qp(struct ib_pd *pd,
 	if (!qp)
 		return ERR_PTR(-ENOMEM);
 
-	if (init_attr->qp_type == IB_QPT_DRIVER) {
-		err = process_vendor_flags(qp, init_attr, &ucmd);
+	if (udata) {
+		err = process_vendor_flags(dev, qp, init_attr, &ucmd);
 		if (err)
 			goto free_qp;
 	}
-- 
2.25.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place
  2020-04-20 15:11 ` [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place Leon Romanovsky
@ 2020-04-23 18:53   ` Leon Romanovsky
  2020-04-24 19:51   ` Jason Gunthorpe
  1 sibling, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-23 18:53 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma, Maor Gottlieb

On Mon, Apr 20, 2020 at 06:11:01PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
>
> create_flags is checked in too many places, scattered across all
> the code. Consolidate all the checks inside one function, so the
> flow is easy to see. As part of this change, delete unreachable
> code, because IB/core is responsible for sanitizing the input.
>
> Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/hw/mlx5/qp.c | 200 ++++++++++++++++----------------
>  1 file changed, 101 insertions(+), 99 deletions(-)

<...>

> +static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
> +				struct ib_qp_init_attr *attr)
> +{
> +	enum ib_qp_type qp_type = attr->qp_type;
> +	struct mlx5_core_dev *mdev = dev->mdev;
> +	int create_flags = attr->create_flags;
> +	bool cond;
> +
> +	if (qp->qp_sub_type == MLX5_IB_QPT_DCT)
> +		return (create_flags) ? -EINVAL : 0;
> +
> +	if (qp_type == IB_QPT_RAW_PACKET)
> +		return (attr->rwq_ind_tbl && create_flags) ? -EINVAL : 0;

This line needs to be:
+       if (qp_type == IB_QPT_RAW_PACKET && attr->rwq_ind_tbl)
+               return (create_flags) ? -EINVAL : 0;

Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place
  2020-04-20 15:11 ` [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place Leon Romanovsky
  2020-04-23 18:53   ` Leon Romanovsky
@ 2020-04-24 19:51   ` Jason Gunthorpe
  2020-04-24 20:23     ` Leon Romanovsky
  1 sibling, 1 reply; 24+ messages in thread
From: Jason Gunthorpe @ 2020-04-24 19:51 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Leon Romanovsky, linux-rdma, Maor Gottlieb

On Mon, Apr 20, 2020 at 06:11:01PM +0300, Leon Romanovsky wrote:
> +	process_create_flag(dev, &create_flags,
> +			    IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
> +			    MLX5_CAP_GEN(mdev, block_lb_mc), qp);

This only applies to datagram QP types

> +	process_create_flag(dev, &create_flags, IB_QP_CREATE_CROSS_CHANNEL,
> +			    MLX5_CAP_GEN(mdev, cd), qp);
> +	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_SEND,
> +			    MLX5_CAP_GEN(mdev, cd), qp);
> +	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_RECV,
> +			    MLX5_CAP_GEN(mdev, cd), qp);
> +
> +	if (qp_type == IB_QPT_UD) {
> +		process_create_flag(dev, &create_flags,
> +				    IB_QP_CREATE_IPOIB_UD_LSO,
> +				    MLX5_CAP_GEN(mdev, ipoib_basic_offloads),
> +				    qp);
> +		cond = MLX5_CAP_GEN(mdev, port_type) == MLX5_CAP_PORT_TYPE_IB;
> +		process_create_flag(dev, &create_flags, IB_QP_CREATE_SOURCE_QPN,
> +				    cond, qp);
> +	}
> +
> +	if (qp_type == IB_QPT_RAW_PACKET) {
> +		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
> +		       MLX5_CAP_ETH(mdev, scatter_fcs);
> +		process_create_flag(dev, &create_flags,
> +				    IB_QP_CREATE_SCATTER_FCS, cond, qp);
> +
> +		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
> +		       MLX5_CAP_ETH(mdev, vlan_cap);
> +		process_create_flag(dev, &create_flags,
> +				    IB_QP_CREATE_CVLAN_STRIPPING, cond, qp);
> +	}
> +
> +	process_create_flag(dev, &create_flags,
> +			    IB_QP_CREATE_PCI_WRITE_END_PADDING,
> +			    MLX5_CAP_GEN(mdev, end_pad), qp);

This one is datagram only too

> +
> +	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_WC_TEST,
> +			    qp_type != MLX5_IB_QPT_REG_UMR, qp);
> +	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_SQPN_QP1,
> +			    true, qp);

I wonder if these are excluded from userspace someplace, seems like it
is worth a udata test here just to be clear

> +
> +	if (create_flags)
> +		mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n",
> +			    create_flags);
> +
> +	return (create_flags) ? -EINVAL : 0;

Since there is already an if, avoid ternary expression
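
i.e., fold the return into the existing branch, roughly:

	if (create_flags) {
		mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n",
			    create_flags);
		return -EINVAL;
	}

	return 0;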

Jason

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I)
  2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
                   ` (17 preceding siblings ...)
  2020-04-20 15:11 ` [PATCH rdma-next 18/18] RDMA/mlx5: Process all vendor flags in one place Leon Romanovsky
@ 2020-04-24 19:54 ` Jason Gunthorpe
  2020-04-24 20:26   ` Leon Romanovsky
  18 siblings, 1 reply; 24+ messages in thread
From: Jason Gunthorpe @ 2020-04-24 19:54 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, linux-kernel, linux-rdma, Maor Gottlieb

On Mon, Apr 20, 2020 at 06:10:47PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
> 
> Hi,
> 
> This is first part of series which tries to return some sanity
> to mlx5_ib_create_qp() function. Such refactoring is required
> to make extension of that function with less worries of breaking
> driver.
> 
> Extra goal of such refactoring is to ensure that QP is allocated
> at the beginning of function and released at the end. It will allow
> us to move QP allocation to be under IB/core responsibility.
> 
> It is based on previously sent [1] "[PATCH mlx5-next 00/24] Mass
> conversion to light mlx5 command interface"
> 
> Thanks
> 
> [1] https://lore.kernel.org/linux-rdma/20200420114136.264924-1-leon@kernel.org
> 
> Leon Romanovsky (18):
>   RDMA/mlx5: Organize QP types checks in one place
>   RDMA/mlx5: Delete impossible GSI port check
>   RDMA/mlx5: Perform check if QP creation flow is valid
>   RDMA/mlx5: Prepare QP allocation for future removal
>   RDMA/mlx5: Avoid setting redundant NULL for XRC QPs
>   RDMA/mlx5: Set QP subtype immediately when it is known
>   RDMA/mlx5: Separate create QP flows to be based on type
>   RDMA/mlx5: Split scatter CQE configuration for DCT QP
>   RDMA/mlx5: Update all DRIVER QP places to use QP subtype
>   RDMA/mlx5: Move DRIVER QP flags check into separate function
>   RDMA/mlx5: Remove second copy from user for non RSS RAW QPs
>   RDMA/mlx5: Initial separation of RAW_PACKET QP from common flow
>   RDMA/mlx5: Delete create QP flags obfuscation
>   RDMA/mlx5: Process create QP flags in one place
>   RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE
>     signature
>   RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags
>   RDMA/mlx5: Return all configured create flags through query QP
>   RDMA/mlx5: Process all vendor flags in one place

This seems reasonable, can you send it so it applies without other
series?

Jason

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place
  2020-04-24 19:51   ` Jason Gunthorpe
@ 2020-04-24 20:23     ` Leon Romanovsky
  0 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-24 20:23 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Doug Ledford, linux-rdma, Maor Gottlieb

On Fri, Apr 24, 2020 at 04:51:27PM -0300, Jason Gunthorpe wrote:
> On Mon, Apr 20, 2020 at 06:11:01PM +0300, Leon Romanovsky wrote:
> > +	process_create_flag(dev, &create_flags,
> > +			    IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
> > +			    MLX5_CAP_GEN(mdev, block_lb_mc), qp);
>
> This only applies to datagram QP types

We didn't really check it before, should I check it now?

>
> > +	process_create_flag(dev, &create_flags, IB_QP_CREATE_CROSS_CHANNEL,
> > +			    MLX5_CAP_GEN(mdev, cd), qp);
> > +	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_SEND,
> > +			    MLX5_CAP_GEN(mdev, cd), qp);
> > +	process_create_flag(dev, &create_flags, IB_QP_CREATE_MANAGED_RECV,
> > +			    MLX5_CAP_GEN(mdev, cd), qp);
> > +
> > +	if (qp_type == IB_QPT_UD) {
> > +		process_create_flag(dev, &create_flags,
> > +				    IB_QP_CREATE_IPOIB_UD_LSO,
> > +				    MLX5_CAP_GEN(mdev, ipoib_basic_offloads),
> > +				    qp);
> > +		cond = MLX5_CAP_GEN(mdev, port_type) == MLX5_CAP_PORT_TYPE_IB;
> > +		process_create_flag(dev, &create_flags, IB_QP_CREATE_SOURCE_QPN,
> > +				    cond, qp);
> > +	}
> > +
> > +	if (qp_type == IB_QPT_RAW_PACKET) {
> > +		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
> > +		       MLX5_CAP_ETH(mdev, scatter_fcs);
> > +		process_create_flag(dev, &create_flags,
> > +				    IB_QP_CREATE_SCATTER_FCS, cond, qp);
> > +
> > +		cond = MLX5_CAP_GEN(mdev, eth_net_offloads) &&
> > +		       MLX5_CAP_ETH(mdev, vlan_cap);
> > +		process_create_flag(dev, &create_flags,
> > +				    IB_QP_CREATE_CVLAN_STRIPPING, cond, qp);
> > +	}
> > +
> > +	process_create_flag(dev, &create_flags,
> > +			    IB_QP_CREATE_PCI_WRITE_END_PADDING,
> > +			    MLX5_CAP_GEN(mdev, end_pad), qp);
>
> This one is datagram only too

Same

>
> > +
> > +	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_WC_TEST,
> > +			    qp_type != MLX5_IB_QPT_REG_UMR, qp);
> > +	process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_SQPN_QP1,
> > +			    true, qp);
>
> I wonder if these are excluded from userspace someplace, seems like it
> is worth a udata test here just to be clear

We are excluding them in create_qp():drivers/infiniband/core/uverbs_cmd.c

1411         if (attr.create_flags & ~(IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK |
1412                                 IB_QP_CREATE_CROSS_CHANNEL |
1413                                 IB_QP_CREATE_MANAGED_SEND |
1414                                 IB_QP_CREATE_MANAGED_RECV |
1415                                 IB_QP_CREATE_SCATTER_FCS |
1416                                 IB_QP_CREATE_CVLAN_STRIPPING |
1417                                 IB_QP_CREATE_SOURCE_QPN |
1418                                 IB_QP_CREATE_PCI_WRITE_END_PADDING)) {
1419                 ret = -EINVAL;
1420                 goto err_put;
1421         }

>
> > +
> > +	if (create_flags)
> > +		mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n",
> > +			    create_flags);
> > +
> > +	return (create_flags) ? -EINVAL : 0;
>
> Since there is already an if, avoid ternary expression

No problem
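
Something along these lines (untested), folding it into the existing if:

	if (create_flags) {
		mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n",
			    create_flags);
		return -EINVAL;
	}

	return 0;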

>
> Jason


* Re: [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I)
  2020-04-24 19:54 ` [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Jason Gunthorpe
@ 2020-04-24 20:26   ` Leon Romanovsky
  0 siblings, 0 replies; 24+ messages in thread
From: Leon Romanovsky @ 2020-04-24 20:26 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Doug Ledford, linux-kernel, linux-rdma, Maor Gottlieb

On Fri, Apr 24, 2020 at 04:54:26PM -0300, Jason Gunthorpe wrote:
> On Mon, Apr 20, 2020 at 06:10:47PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@mellanox.com>
> >
> > Hi,
> >
> > This is first part of series which tries to return some sanity
> > to mlx5_ib_create_qp() function. Such refactoring is required
> > to make extension of that function with less worries of breaking
> > driver.
> >
> > Extra goal of such refactoring is to ensure that QP is allocated
> > at the beginning of function and released at the end. It will allow
> > us to move QP allocation to be under IB/core responsibility.
> >
> > It is based on previously sent [1] "[PATCH mlx5-next 00/24] Mass
> > conversion to light mlx5 command interface"
> >
> > Thanks
> >
> > [1] https://lore.kernel.org/linux-rdma/20200420114136.264924-1-leon@kernel.org
> >
> > Leon Romanovsky (18):
> >   RDMA/mlx5: Organize QP types checks in one place
> >   RDMA/mlx5: Delete impossible GSI port check
> >   RDMA/mlx5: Perform check if QP creation flow is valid
> >   RDMA/mlx5: Prepare QP allocation for future removal
> >   RDMA/mlx5: Avoid setting redundant NULL for XRC QPs
> >   RDMA/mlx5: Set QP subtype immediately when it is known
> >   RDMA/mlx5: Separate create QP flows to be based on type
> >   RDMA/mlx5: Split scatter CQE configuration for DCT QP
> >   RDMA/mlx5: Update all DRIVER QP places to use QP subtype
> >   RDMA/mlx5: Move DRIVER QP flags check into separate function
> >   RDMA/mlx5: Remove second copy from user for non RSS RAW QPs
> >   RDMA/mlx5: Initial separation of RAW_PACKET QP from common flow
> >   RDMA/mlx5: Delete create QP flags obfuscation
> >   RDMA/mlx5: Process create QP flags in one place
> >   RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE
> >     signature
> >   RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags
> >   RDMA/mlx5: Return all configured create flags through query QP
> >   RDMA/mlx5: Process all vendor flags in one place
>
> This seems reasonable; can you send it so it applies without the
> other series?

Maybe it is doable, but part II needs [1] as a prerequisite.
Do you still prefer that I do it?

Thanks

>
> Jason


end of thread

Thread overview: 24+ messages
2020-04-20 15:10 [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 01/18] RDMA/mlx5: Organize QP types checks in one place Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 02/18] RDMA/mlx5: Delete impossible GSI port check Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 03/18] RDMA/mlx5: Perform check if QP creation flow is valid Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 04/18] RDMA/mlx5: Prepare QP allocation for future removal Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 05/18] RDMA/mlx5: Avoid setting redundant NULL for XRC QPs Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 06/18] RDMA/mlx5: Set QP subtype immediately when it is known Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 07/18] RDMA/mlx5: Separate create QP flows to be based on type Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 08/18] RDMA/mlx5: Split scatter CQE configuration for DCT QP Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 09/18] RDMA/mlx5: Update all DRIVER QP places to use QP subtype Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 10/18] RDMA/mlx5: Move DRIVER QP flags check into separate function Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 11/18] RDMA/mlx5: Remove second copy from user for non RSS RAW QPs Leon Romanovsky
2020-04-20 15:10 ` [PATCH rdma-next 12/18] RDMA/mlx5: Initial separation of RAW_PACKET QP from common flow Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 13/18] RDMA/mlx5: Delete create QP flags obfuscation Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 14/18] RDMA/mlx5: Process create QP flags in one place Leon Romanovsky
2020-04-23 18:53   ` Leon Romanovsky
2020-04-24 19:51   ` Jason Gunthorpe
2020-04-24 20:23     ` Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 15/18] RDMA/mlx5: Use flags_en mechanism to mark QP created with WQE signature Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 16/18] RDMA/mlx5: Change scatter CQE flag to be set like other vendor flags Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 17/18] RDMA/mlx5: Return all configured create flags through query QP Leon Romanovsky
2020-04-20 15:11 ` [PATCH rdma-next 18/18] RDMA/mlx5: Process all vendor flags in one place Leon Romanovsky
2020-04-24 19:54 ` [PATCH rdma-next 00/18] Refactor mlx5_ib_create_qp (Part I) Jason Gunthorpe
2020-04-24 20:26   ` Leon Romanovsky
