linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next v1 0/2] Convert XRC to use xarray
@ 2020-06-23 11:15 Leon Romanovsky
  2020-06-23 11:15 ` [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain Leon Romanovsky
  2020-06-23 11:15 ` [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup Leon Romanovsky
  0 siblings, 2 replies; 13+ messages in thread
From: Leon Romanovsky @ 2020-06-23 11:15 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, linux-kernel, linux-rdma, Maor Gottlieb

From: Leon Romanovsky <leonro@mellanox.com>

Changelog:
v1: Changed ib_dealloc_xrcd_user() to not iterate over the tgt list, because
it is expected to be empty.
v0: https://lore.kernel.org/lkml/20200621104110.53509-1-leon@kernel.org

Two small patches to simplify and improve XRC logic.

Thanks

Maor Gottlieb (2):
  RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain
  RDMA/core: Optimize XRC target lookup

 drivers/infiniband/core/uverbs_cmd.c | 12 ++---
 drivers/infiniband/core/verbs.c      | 76 +++++++++++++---------------
 drivers/infiniband/hw/mlx5/main.c    | 24 +++------
 include/rdma/ib_verbs.h              | 27 +++++-----
 4 files changed, 59 insertions(+), 80 deletions(-)

--
2.26.2



* [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain
  2020-06-23 11:15 [PATCH rdma-next v1 0/2] Convert XRC to use xarray Leon Romanovsky
@ 2020-06-23 11:15 ` Leon Romanovsky
  2020-07-02 18:27   ` Jason Gunthorpe
  2020-06-23 11:15 ` [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup Leon Romanovsky
  1 sibling, 1 reply; 13+ messages in thread
From: Leon Romanovsky @ 2020-06-23 11:15 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Maor Gottlieb, linux-rdma

From: Maor Gottlieb <maorg@mellanox.com>

ib_alloc_xrcd() already does the required initialization, so move
the mlx5 driver and uverbs to call it and save some code duplication,
while cleaning up that function's argument list.

Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
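(Illustration only, not part of the patch: after this change an in-kernel
consumer uses the wrappers added to ib_verbs.h instead of open-coding the
driver op and the field initialization. "ibdev" below is a placeholder for
whatever struct ib_device pointer the caller already holds.)

	struct ib_xrcd *xrcd;

	/* kernel object: the wrapper passes inode == NULL and udata == NULL */
	xrcd = ib_alloc_xrcd(ibdev);
	if (IS_ERR(xrcd))
		return PTR_ERR(xrcd);

	/* ... use xrcd, e.g. when creating an XRC SRQ or XRC_TGT QP ... */

	ib_dealloc_xrcd(xrcd);
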
 drivers/infiniband/core/uverbs_cmd.c | 12 +++---------
 drivers/infiniband/core/verbs.c      | 19 +++++++++++++------
 drivers/infiniband/hw/mlx5/main.c    | 24 ++++++++----------------
 include/rdma/ib_verbs.h              | 22 ++++++++++++----------
 4 files changed, 36 insertions(+), 41 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 557644dcc923..68c9a0210220 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -614,17 +614,11 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
 	}
 
 	if (!xrcd) {
-		xrcd = ib_dev->ops.alloc_xrcd(ib_dev, &attrs->driver_udata);
+		xrcd = ib_alloc_xrcd_user(ib_dev, inode, &attrs->driver_udata);
 		if (IS_ERR(xrcd)) {
 			ret = PTR_ERR(xrcd);
 			goto err;
 		}
-
-		xrcd->inode   = inode;
-		xrcd->device  = ib_dev;
-		atomic_set(&xrcd->usecnt, 0);
-		mutex_init(&xrcd->tgt_qp_mutex);
-		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
 		new_xrcd = 1;
 	}
 
@@ -663,7 +657,7 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
 	}
 
 err_dealloc_xrcd:
-	ib_dealloc_xrcd(xrcd, uverbs_get_cleared_udata(attrs));
+	ib_dealloc_xrcd_user(xrcd, uverbs_get_cleared_udata(attrs));
 
 err:
 	uobj_alloc_abort(&obj->uobject, attrs);
@@ -701,7 +695,7 @@ int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, struct ib_xrcd *xrcd,
 	if (inode && !atomic_dec_and_test(&xrcd->usecnt))
 		return 0;
 
-	ret = ib_dealloc_xrcd(xrcd, &attrs->driver_udata);
+	ret = ib_dealloc_xrcd_user(xrcd, &attrs->driver_udata);
 
 	if (ib_is_destroy_retryable(ret, why, uobject)) {
 		atomic_inc(&xrcd->usecnt);
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index d70771caf534..d66a0ad62077 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -2289,17 +2289,24 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
 }
 EXPORT_SYMBOL(ib_detach_mcast);
 
-struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
+/**
+ * ib_alloc_xrcd_user - Allocates an XRC domain.
+ * @device: The device on which to allocate the XRC domain.
+ * @inode: inode to connect XRCD
+ * @udata: Valid user data or NULL for kernel object
+ */
+struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
+				   struct inode *inode, struct ib_udata *udata)
 {
 	struct ib_xrcd *xrcd;
 
 	if (!device->ops.alloc_xrcd)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	xrcd = device->ops.alloc_xrcd(device, NULL);
+	xrcd = device->ops.alloc_xrcd(device, udata);
 	if (!IS_ERR(xrcd)) {
 		xrcd->device = device;
-		xrcd->inode = NULL;
+		xrcd->inode = inode;
 		atomic_set(&xrcd->usecnt, 0);
 		mutex_init(&xrcd->tgt_qp_mutex);
 		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
@@ -2307,9 +2314,9 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
 
 	return xrcd;
 }
-EXPORT_SYMBOL(__ib_alloc_xrcd);
+EXPORT_SYMBOL(ib_alloc_xrcd_user);
 
-int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
+int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata)
 {
 	struct ib_qp *qp;
 	int ret;
@@ -2327,7 +2334,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
 
 	return xrcd->device->ops.dealloc_xrcd(xrcd, udata);
 }
-EXPORT_SYMBOL(ib_dealloc_xrcd);
+EXPORT_SYMBOL(ib_dealloc_xrcd_user);
 
 /**
  * ib_create_wq - Creates a WQ associated with the specified protection
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 47a0c091eea5..46c596a855e7 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -5043,27 +5043,17 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 	if (ret)
 		goto err_create_cq;
 
-	devr->x0 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
+	devr->x0 = ib_alloc_xrcd(&dev->ib_dev);
 	if (IS_ERR(devr->x0)) {
 		ret = PTR_ERR(devr->x0);
 		goto error2;
 	}
-	devr->x0->device = &dev->ib_dev;
-	devr->x0->inode = NULL;
-	atomic_set(&devr->x0->usecnt, 0);
-	mutex_init(&devr->x0->tgt_qp_mutex);
-	INIT_LIST_HEAD(&devr->x0->tgt_qp_list);
 
-	devr->x1 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
+	devr->x1 = ib_alloc_xrcd(&dev->ib_dev);
 	if (IS_ERR(devr->x1)) {
 		ret = PTR_ERR(devr->x1);
 		goto error3;
 	}
-	devr->x1->device = &dev->ib_dev;
-	devr->x1->inode = NULL;
-	atomic_set(&devr->x1->usecnt, 0);
-	mutex_init(&devr->x1->tgt_qp_mutex);
-	INIT_LIST_HEAD(&devr->x1->tgt_qp_list);
 
 	memset(&attr, 0, sizeof(attr));
 	attr.attr.max_sge = 1;
@@ -5125,13 +5115,14 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 error6:
 	kfree(devr->s1);
 error5:
+	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
 	mlx5_ib_destroy_srq(devr->s0, NULL);
 err_create:
 	kfree(devr->s0);
 error4:
-	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
+	ib_dealloc_xrcd(devr->x1);
 error3:
-	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
+	ib_dealloc_xrcd(devr->x0);
 error2:
 	mlx5_ib_destroy_cq(devr->c0, NULL);
 err_create_cq:
@@ -5149,10 +5140,11 @@ static void destroy_dev_resources(struct mlx5_ib_resources *devr)
 
 	mlx5_ib_destroy_srq(devr->s1, NULL);
 	kfree(devr->s1);
+	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
 	mlx5_ib_destroy_srq(devr->s0, NULL);
 	kfree(devr->s0);
-	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
-	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
+	ib_dealloc_xrcd(devr->x0);
+	ib_dealloc_xrcd(devr->x1);
 	mlx5_ib_destroy_cq(devr->c0, NULL);
 	kfree(devr->c0);
 	mlx5_ib_dealloc_pd(devr->p0, NULL);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index f1e8afe1dd75..f785a4f1e58b 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4331,21 +4331,23 @@ int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
  */
 int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
 
-/**
- * ib_alloc_xrcd - Allocates an XRC domain.
- * @device: The device on which to allocate the XRC domain.
- * @caller: Module name for kernel consumers
- */
-struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller);
-#define ib_alloc_xrcd(device) \
-	__ib_alloc_xrcd((device), KBUILD_MODNAME)
+struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
+				   struct inode *inode, struct ib_udata *udata);
+static inline struct ib_xrcd *ib_alloc_xrcd(struct ib_device *device)
+{
+	return ib_alloc_xrcd_user(device, NULL, NULL);
+}
 
 /**
- * ib_dealloc_xrcd - Deallocates an XRC domain.
+ * ib_dealloc_xrcd_user - Deallocates an XRC domain.
  * @xrcd: The XRC domain to deallocate.
  * @udata: Valid user data or NULL for kernel object
  */
-int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata);
+int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata);
+static inline int ib_dealloc_xrcd(struct ib_xrcd *xrcd)
+{
+	return ib_dealloc_xrcd_user(xrcd, NULL);
+}
 
 static inline int ib_check_mr_access(int flags)
 {
-- 
2.26.2



* [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-23 11:15 [PATCH rdma-next v1 0/2] Convert XRC to use xarray Leon Romanovsky
  2020-06-23 11:15 ` [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain Leon Romanovsky
@ 2020-06-23 11:15 ` Leon Romanovsky
  2020-06-23 17:52   ` Jason Gunthorpe
  1 sibling, 1 reply; 13+ messages in thread
From: Leon Romanovsky @ 2020-06-23 11:15 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Maor Gottlieb, linux-rdma

From: Maor Gottlieb <maorg@mellanox.com>

Replace the mutex with a read-write semaphore and use an xarray instead
of a linked list for the XRC target QPs. This gives faster XRC target
lookup. In addition, when a QP is closed, don't insert it back into the
xarray if the destroy command failed.

Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
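(Illustration only, not part of the patch: a minimal sketch of the new
lookup path in ib_open_qp(), simplified from the diff below. The rwsem is
kept so that the xa_load()/__ib_open_qp() pair cannot race with the
xa_erase() done under the write lock in __ib_destroy_shared_qp().)

	down_read(&xrcd->tgt_qps_rwsem);
	/* direct index by QP number instead of walking a list */
	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
	if (!real_qp) {
		up_read(&xrcd->tgt_qps_rwsem);
		return ERR_PTR(-EINVAL);
	}
	qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
			  qp_open_attr->qp_context);
	up_read(&xrcd->tgt_qps_rwsem);
	return qp;
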
 drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
 include/rdma/ib_verbs.h         |  5 ++-
 2 files changed, 23 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index d66a0ad62077..1ccbe43e33cd 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
 	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
 }
 
-static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
-{
-	mutex_lock(&xrcd->tgt_qp_mutex);
-	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
-	mutex_unlock(&xrcd->tgt_qp_mutex);
-}
-
 static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
 				  void (*event_handler)(struct ib_event *, void *),
 				  void *qp_context)
@@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
 	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
 		return ERR_PTR(-EINVAL);
 
-	qp = ERR_PTR(-EINVAL);
-	mutex_lock(&xrcd->tgt_qp_mutex);
-	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
-		if (real_qp->qp_num == qp_open_attr->qp_num) {
-			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
-					  qp_open_attr->qp_context);
-			break;
-		}
+	down_read(&xrcd->tgt_qps_rwsem);
+	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
+	if (!real_qp) {
+		up_read(&xrcd->tgt_qps_rwsem);
+		return ERR_PTR(-EINVAL);
 	}
-	mutex_unlock(&xrcd->tgt_qp_mutex);
+	qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
+			  qp_open_attr->qp_context);
+	up_read(&xrcd->tgt_qps_rwsem);
 	return qp;
 }
 EXPORT_SYMBOL(ib_open_qp);
@@ -1157,6 +1149,7 @@ static struct ib_qp *create_xrc_qp_user(struct ib_qp *qp,
 					struct ib_qp_init_attr *qp_init_attr)
 {
 	struct ib_qp *real_qp = qp;
+	int err;
 
 	qp->event_handler = __ib_shared_qp_event_handler;
 	qp->qp_context = qp;
@@ -1172,7 +1165,12 @@ static struct ib_qp *create_xrc_qp_user(struct ib_qp *qp,
 	if (IS_ERR(qp))
 		return qp;
 
-	__ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
+	err = xa_err(xa_store(&qp_init_attr->xrcd->tgt_qps, real_qp->qp_num,
+			      real_qp, GFP_KERNEL));
+	if (err) {
+		ib_close_qp(qp);
+		return ERR_PTR(err);
+	}
 	return qp;
 }
 
@@ -1888,21 +1886,18 @@ static int __ib_destroy_shared_qp(struct ib_qp *qp)
 
 	real_qp = qp->real_qp;
 	xrcd = real_qp->xrcd;
-
-	mutex_lock(&xrcd->tgt_qp_mutex);
+	down_write(&xrcd->tgt_qps_rwsem);
 	ib_close_qp(qp);
 	if (atomic_read(&real_qp->usecnt) == 0)
-		list_del(&real_qp->xrcd_list);
+		xa_erase(&xrcd->tgt_qps, real_qp->qp_num);
 	else
 		real_qp = NULL;
-	mutex_unlock(&xrcd->tgt_qp_mutex);
+	up_write(&xrcd->tgt_qps_rwsem);
 
 	if (real_qp) {
 		ret = ib_destroy_qp(real_qp);
 		if (!ret)
 			atomic_dec(&xrcd->usecnt);
-		else
-			__ib_insert_xrcd_qp(xrcd, real_qp);
 	}
 
 	return 0;
@@ -2308,8 +2303,8 @@ struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
 		xrcd->device = device;
 		xrcd->inode = inode;
 		atomic_set(&xrcd->usecnt, 0);
-		mutex_init(&xrcd->tgt_qp_mutex);
-		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
+		init_rwsem(&xrcd->tgt_qps_rwsem);
+		xa_init(&xrcd->tgt_qps);
 	}
 
 	return xrcd;
@@ -2318,20 +2313,10 @@ EXPORT_SYMBOL(ib_alloc_xrcd_user);
 
 int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata)
 {
-	struct ib_qp *qp;
-	int ret;
-
 	if (atomic_read(&xrcd->usecnt))
 		return -EBUSY;
 
-	while (!list_empty(&xrcd->tgt_qp_list)) {
-		qp = list_entry(xrcd->tgt_qp_list.next, struct ib_qp, xrcd_list);
-		ret = ib_destroy_qp(qp);
-		if (ret)
-			return ret;
-	}
-	mutex_destroy(&xrcd->tgt_qp_mutex);
-
+	WARN_ON(!xa_empty(&xrcd->tgt_qps));
 	return xrcd->device->ops.dealloc_xrcd(xrcd, udata);
 }
 EXPORT_SYMBOL(ib_dealloc_xrcd_user);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index f785a4f1e58b..9b973b3b6f4c 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1568,9 +1568,8 @@ struct ib_xrcd {
 	struct ib_device       *device;
 	atomic_t		usecnt; /* count all exposed resources */
 	struct inode	       *inode;
-
-	struct mutex		tgt_qp_mutex;
-	struct list_head	tgt_qp_list;
+	struct rw_semaphore	tgt_qps_rwsem;
+	struct xarray		tgt_qps;
 };
 
 struct ib_ah {
-- 
2.26.2



* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-23 11:15 ` [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup Leon Romanovsky
@ 2020-06-23 17:52   ` Jason Gunthorpe
  2020-06-23 18:15     ` Leon Romanovsky
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2020-06-23 17:52 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> From: Maor Gottlieb <maorg@mellanox.com>
> 
> Replace the mutex with read write semaphore and use xarray instead
> of linked list for XRC target QPs. This will give faster XRC target
> lookup. In addition, when QP is closed, don't insert it back to the
> xarray if the destroy command failed.
> 
> Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>  drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
>  include/rdma/ib_verbs.h         |  5 ++-
>  2 files changed, 23 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> index d66a0ad62077..1ccbe43e33cd 100644
> +++ b/drivers/infiniband/core/verbs.c
> @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
>  	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
>  }
>  
> -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> -{
> -	mutex_lock(&xrcd->tgt_qp_mutex);
> -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> -	mutex_unlock(&xrcd->tgt_qp_mutex);
> -}
> -
>  static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
>  				  void (*event_handler)(struct ib_event *, void *),
>  				  void *qp_context)
> @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
>  	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
>  		return ERR_PTR(-EINVAL);
>  
> -	qp = ERR_PTR(-EINVAL);
> -	mutex_lock(&xrcd->tgt_qp_mutex);
> -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> -					  qp_open_attr->qp_context);
> -			break;
> -		}
> +	down_read(&xrcd->tgt_qps_rwsem);
> +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> +	if (!real_qp) {

Don't we already have a xarray indexed against qp_num in res_track?
Can we use it somehow?

Jason


* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-23 17:52   ` Jason Gunthorpe
@ 2020-06-23 18:15     ` Leon Romanovsky
  2020-06-23 18:49       ` Jason Gunthorpe
  0 siblings, 1 reply; 13+ messages in thread
From: Leon Romanovsky @ 2020-06-23 18:15 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> > From: Maor Gottlieb <maorg@mellanox.com>
> >
> > Replace the mutex with read write semaphore and use xarray instead
> > of linked list for XRC target QPs. This will give faster XRC target
> > lookup. In addition, when QP is closed, don't insert it back to the
> > xarray if the destroy command failed.
> >
> > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> >  drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
> >  include/rdma/ib_verbs.h         |  5 ++-
> >  2 files changed, 23 insertions(+), 39 deletions(-)
> >
> > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > index d66a0ad62077..1ccbe43e33cd 100644
> > +++ b/drivers/infiniband/core/verbs.c
> > @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
> >  	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
> >  }
> >
> > -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> > -{
> > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> > -	mutex_unlock(&xrcd->tgt_qp_mutex);
> > -}
> > -
> >  static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
> >  				  void (*event_handler)(struct ib_event *, void *),
> >  				  void *qp_context)
> > @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
> >  	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
> >  		return ERR_PTR(-EINVAL);
> >
> > -	qp = ERR_PTR(-EINVAL);
> > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> > -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> > -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> > -					  qp_open_attr->qp_context);
> > -			break;
> > -		}
> > +	down_read(&xrcd->tgt_qps_rwsem);
> > +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> > +	if (!real_qp) {
>
> Don't we already have a xarray indexed against qp_num in res_track?
> Can we use it somehow?

We don't have restrack for XRC; we would need to somehow manage the
QP-to-XRC connection there.

Thanks

>
> Jason


* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-23 18:15     ` Leon Romanovsky
@ 2020-06-23 18:49       ` Jason Gunthorpe
  2020-06-24 10:42         ` Maor Gottlieb
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2020-06-23 18:49 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
> On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
> > On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> > > From: Maor Gottlieb <maorg@mellanox.com>
> > >
> > > Replace the mutex with read write semaphore and use xarray instead
> > > of linked list for XRC target QPs. This will give faster XRC target
> > > lookup. In addition, when QP is closed, don't insert it back to the
> > > xarray if the destroy command failed.
> > >
> > > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > >  drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
> > >  include/rdma/ib_verbs.h         |  5 ++-
> > >  2 files changed, 23 insertions(+), 39 deletions(-)
> > >
> > > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > > index d66a0ad62077..1ccbe43e33cd 100644
> > > +++ b/drivers/infiniband/core/verbs.c
> > > @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
> > >  	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
> > >  }
> > >
> > > -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> > > -{
> > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> > > -	mutex_unlock(&xrcd->tgt_qp_mutex);
> > > -}
> > > -
> > >  static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
> > >  				  void (*event_handler)(struct ib_event *, void *),
> > >  				  void *qp_context)
> > > @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
> > >  	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
> > >  		return ERR_PTR(-EINVAL);
> > >
> > > -	qp = ERR_PTR(-EINVAL);
> > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> > > -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> > > -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> > > -					  qp_open_attr->qp_context);
> > > -			break;
> > > -		}
> > > +	down_read(&xrcd->tgt_qps_rwsem);
> > > +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> > > +	if (!real_qp) {
> >
> > Don't we already have a xarray indexed against qp_num in res_track?
> > Can we use it somehow?
> 
> We don't have restrack for XRC, we will need somehow manage QP-to-XRC
> connection there.

It is not xrc, this is just looking up a qp and checking if it is part
of the xrcd

Jason


* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-23 18:49       ` Jason Gunthorpe
@ 2020-06-24 10:42         ` Maor Gottlieb
  2020-06-24 14:00           ` Jason Gunthorpe
  0 siblings, 1 reply; 13+ messages in thread
From: Maor Gottlieb @ 2020-06-24 10:42 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky; +Cc: Doug Ledford, linux-rdma


On 6/23/2020 9:49 PM, Jason Gunthorpe wrote:
> On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
>> On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
>>> On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
>>>> From: Maor Gottlieb <maorg@mellanox.com>
>>>>
>>>> Replace the mutex with read write semaphore and use xarray instead
>>>> of linked list for XRC target QPs. This will give faster XRC target
>>>> lookup. In addition, when QP is closed, don't insert it back to the
>>>> xarray if the destroy command failed.
>>>>
>>>> Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
>>>> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>>>>   drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
>>>>   include/rdma/ib_verbs.h         |  5 ++-
>>>>   2 files changed, 23 insertions(+), 39 deletions(-)
>>>>
>>>> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
>>>> index d66a0ad62077..1ccbe43e33cd 100644
>>>> +++ b/drivers/infiniband/core/verbs.c
>>>> @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
>>>>   	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
>>>>   }
>>>>
>>>> -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
>>>> -{
>>>> -	mutex_lock(&xrcd->tgt_qp_mutex);
>>>> -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
>>>> -	mutex_unlock(&xrcd->tgt_qp_mutex);
>>>> -}
>>>> -
>>>>   static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
>>>>   				  void (*event_handler)(struct ib_event *, void *),
>>>>   				  void *qp_context)
>>>> @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
>>>>   	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
>>>>   		return ERR_PTR(-EINVAL);
>>>>
>>>> -	qp = ERR_PTR(-EINVAL);
>>>> -	mutex_lock(&xrcd->tgt_qp_mutex);
>>>> -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
>>>> -		if (real_qp->qp_num == qp_open_attr->qp_num) {
>>>> -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
>>>> -					  qp_open_attr->qp_context);
>>>> -			break;
>>>> -		}
>>>> +	down_read(&xrcd->tgt_qps_rwsem);
>>>> +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
>>>> +	if (!real_qp) {
>>> Don't we already have a xarray indexed against qp_num in res_track?
>>> Can we use it somehow?
>> We don't have restrack for XRC, we will need somehow manage QP-to-XRC
>> connection there.
> It is not xrc, this is just looking up a qp and checking if it is part
> of the xrcd
>
> Jason


It's the XRC target QP and it is not tracked.



* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-24 10:42         ` Maor Gottlieb
@ 2020-06-24 14:00           ` Jason Gunthorpe
  2020-06-24 14:48             ` Maor Gottlieb
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2020-06-24 14:00 UTC (permalink / raw)
  To: Maor Gottlieb; +Cc: Leon Romanovsky, Doug Ledford, linux-rdma

On Wed, Jun 24, 2020 at 01:42:49PM +0300, Maor Gottlieb wrote:
> 
> On 6/23/2020 9:49 PM, Jason Gunthorpe wrote:
> > On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
> > > On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
> > > > On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> > > > > From: Maor Gottlieb <maorg@mellanox.com>
> > > > > 
> > > > > Replace the mutex with read write semaphore and use xarray instead
> > > > > of linked list for XRC target QPs. This will give faster XRC target
> > > > > lookup. In addition, when QP is closed, don't insert it back to the
> > > > > xarray if the destroy command failed.
> > > > > 
> > > > > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > > > > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > > > >   drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
> > > > >   include/rdma/ib_verbs.h         |  5 ++-
> > > > >   2 files changed, 23 insertions(+), 39 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > > > > index d66a0ad62077..1ccbe43e33cd 100644
> > > > > +++ b/drivers/infiniband/core/verbs.c
> > > > > @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
> > > > >   	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
> > > > >   }
> > > > > 
> > > > > -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> > > > > -{
> > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> > > > > -	mutex_unlock(&xrcd->tgt_qp_mutex);
> > > > > -}
> > > > > -
> > > > >   static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
> > > > >   				  void (*event_handler)(struct ib_event *, void *),
> > > > >   				  void *qp_context)
> > > > > @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
> > > > >   	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
> > > > >   		return ERR_PTR(-EINVAL);
> > > > > 
> > > > > -	qp = ERR_PTR(-EINVAL);
> > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> > > > > -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> > > > > -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> > > > > -					  qp_open_attr->qp_context);
> > > > > -			break;
> > > > > -		}
> > > > > +	down_read(&xrcd->tgt_qps_rwsem);
> > > > > +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> > > > > +	if (!real_qp) {
> > > > Don't we already have a xarray indexed against qp_num in res_track?
> > > > Can we use it somehow?
> > > We don't have restrack for XRC, we will need somehow manage QP-to-XRC
> > > connection there.
> > It is not xrc, this is just looking up a qp and checking if it is part
> > of the xrcd
> > 
> > Jason
> 
> It's the XRC target  QP and it is not tracked.

Really? Something called 'real_qp' isn't stored in the restrack?
Doesn't that sound like a bug already?

Jason 


* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-24 14:00           ` Jason Gunthorpe
@ 2020-06-24 14:48             ` Maor Gottlieb
  2020-06-25  8:26               ` Leon Romanovsky
  0 siblings, 1 reply; 13+ messages in thread
From: Maor Gottlieb @ 2020-06-24 14:48 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Leon Romanovsky, Doug Ledford, linux-rdma


On 6/24/2020 5:00 PM, Jason Gunthorpe wrote:
> On Wed, Jun 24, 2020 at 01:42:49PM +0300, Maor Gottlieb wrote:
>> On 6/23/2020 9:49 PM, Jason Gunthorpe wrote:
>>> On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
>>>> On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
>>>>> On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
>>>>>> From: Maor Gottlieb <maorg@mellanox.com>
>>>>>>
>>>>>> Replace the mutex with read write semaphore and use xarray instead
>>>>>> of linked list for XRC target QPs. This will give faster XRC target
>>>>>> lookup. In addition, when QP is closed, don't insert it back to the
>>>>>> xarray if the destroy command failed.
>>>>>>
>>>>>> Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
>>>>>> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>>>>>>    drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
>>>>>>    include/rdma/ib_verbs.h         |  5 ++-
>>>>>>    2 files changed, 23 insertions(+), 39 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
>>>>>> index d66a0ad62077..1ccbe43e33cd 100644
>>>>>> +++ b/drivers/infiniband/core/verbs.c
>>>>>> @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
>>>>>>    	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
>>>>>>    }
>>>>>>
>>>>>> -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
>>>>>> -{
>>>>>> -	mutex_lock(&xrcd->tgt_qp_mutex);
>>>>>> -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
>>>>>> -	mutex_unlock(&xrcd->tgt_qp_mutex);
>>>>>> -}
>>>>>> -
>>>>>>    static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
>>>>>>    				  void (*event_handler)(struct ib_event *, void *),
>>>>>>    				  void *qp_context)
>>>>>> @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
>>>>>>    	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
>>>>>>    		return ERR_PTR(-EINVAL);
>>>>>>
>>>>>> -	qp = ERR_PTR(-EINVAL);
>>>>>> -	mutex_lock(&xrcd->tgt_qp_mutex);
>>>>>> -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
>>>>>> -		if (real_qp->qp_num == qp_open_attr->qp_num) {
>>>>>> -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
>>>>>> -					  qp_open_attr->qp_context);
>>>>>> -			break;
>>>>>> -		}
>>>>>> +	down_read(&xrcd->tgt_qps_rwsem);
>>>>>> +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
>>>>>> +	if (!real_qp) {
>>>>> Don't we already have a xarray indexed against qp_num in res_track?
>>>>> Can we use it somehow?
>>>> We don't have restrack for XRC, we will need somehow manage QP-to-XRC
>>>> connection there.
>>> It is not xrc, this is just looking up a qp and checking if it is part
>>> of the xrcd
>>>
>>> Jason
>> It's the XRC target  QP and it is not tracked.
> Really? Something called 'real_qp' isn't stored in the restrack?
> Doesn't that sound like a bug already?
>
> Jason

Bug / limitation. See the below comment from core_priv.h:

         /*
          * We don't track XRC QPs for now, because they don't have PD
          * and more importantly they are created internaly by driver,
          * see mlx5 create_dev_resources() as an example.
          */

Leon, is the PD a real limitation? Regarding the second part (mlx5), you
just sent patches that change it, right?



* Re: [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup
  2020-06-24 14:48             ` Maor Gottlieb
@ 2020-06-25  8:26               ` Leon Romanovsky
  0 siblings, 0 replies; 13+ messages in thread
From: Leon Romanovsky @ 2020-06-25  8:26 UTC (permalink / raw)
  To: Maor Gottlieb; +Cc: Jason Gunthorpe, Doug Ledford, linux-rdma

On Wed, Jun 24, 2020 at 05:48:27PM +0300, Maor Gottlieb wrote:
>
> On 6/24/2020 5:00 PM, Jason Gunthorpe wrote:
> > On Wed, Jun 24, 2020 at 01:42:49PM +0300, Maor Gottlieb wrote:
> > > On 6/23/2020 9:49 PM, Jason Gunthorpe wrote:
> > > > On Tue, Jun 23, 2020 at 09:15:06PM +0300, Leon Romanovsky wrote:
> > > > > On Tue, Jun 23, 2020 at 02:52:00PM -0300, Jason Gunthorpe wrote:
> > > > > > On Tue, Jun 23, 2020 at 02:15:31PM +0300, Leon Romanovsky wrote:
> > > > > > > From: Maor Gottlieb <maorg@mellanox.com>
> > > > > > >
> > > > > > > Replace the mutex with read write semaphore and use xarray instead
> > > > > > > of linked list for XRC target QPs. This will give faster XRC target
> > > > > > > lookup. In addition, when QP is closed, don't insert it back to the
> > > > > > > xarray if the destroy command failed.
> > > > > > >
> > > > > > > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > > > > > > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > > > > > >    drivers/infiniband/core/verbs.c | 57 ++++++++++++---------------------
> > > > > > >    include/rdma/ib_verbs.h         |  5 ++-
> > > > > > >    2 files changed, 23 insertions(+), 39 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > > > > > > index d66a0ad62077..1ccbe43e33cd 100644
> > > > > > > +++ b/drivers/infiniband/core/verbs.c
> > > > > > > @@ -1090,13 +1090,6 @@ static void __ib_shared_qp_event_handler(struct ib_event *event, void *context)
> > > > > > >    	spin_unlock_irqrestore(&qp->device->qp_open_list_lock, flags);
> > > > > > >    }
> > > > > > >
> > > > > > > -static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
> > > > > > > -{
> > > > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > > > -	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
> > > > > > > -	mutex_unlock(&xrcd->tgt_qp_mutex);
> > > > > > > -}
> > > > > > > -
> > > > > > >    static struct ib_qp *__ib_open_qp(struct ib_qp *real_qp,
> > > > > > >    				  void (*event_handler)(struct ib_event *, void *),
> > > > > > >    				  void *qp_context)
> > > > > > > @@ -1139,16 +1132,15 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
> > > > > > >    	if (qp_open_attr->qp_type != IB_QPT_XRC_TGT)
> > > > > > >    		return ERR_PTR(-EINVAL);
> > > > > > >
> > > > > > > -	qp = ERR_PTR(-EINVAL);
> > > > > > > -	mutex_lock(&xrcd->tgt_qp_mutex);
> > > > > > > -	list_for_each_entry(real_qp, &xrcd->tgt_qp_list, xrcd_list) {
> > > > > > > -		if (real_qp->qp_num == qp_open_attr->qp_num) {
> > > > > > > -			qp = __ib_open_qp(real_qp, qp_open_attr->event_handler,
> > > > > > > -					  qp_open_attr->qp_context);
> > > > > > > -			break;
> > > > > > > -		}
> > > > > > > +	down_read(&xrcd->tgt_qps_rwsem);
> > > > > > > +	real_qp = xa_load(&xrcd->tgt_qps, qp_open_attr->qp_num);
> > > > > > > +	if (!real_qp) {
> > > > > > Don't we already have a xarray indexed against qp_num in res_track?
> > > > > > Can we use it somehow?
> > > > > We don't have restrack for XRC, we will need somehow manage QP-to-XRC
> > > > > connection there.
> > > > It is not xrc, this is just looking up a qp and checking if it is part
> > > > of the xrcd
> > > >
> > > > Jason
> > > It's the XRC target  QP and it is not tracked.
> > Really? Something called 'real_qp' isn't stored in the restrack?
> > Doesn't that sound like a bug already?
> >
> > Jason
>
> Bug / limitation. see the below comment from core_priv.h:
>
>         /*
>          * We don't track XRC QPs for now, because they don't have PD
>          * and more importantly they are created internaly by driver,
>          * see mlx5 create_dev_resources() as an example.
>          */
>
> Leon, the PD is a real limitation? regarding the second part (mlx5),  you
> just sent patches that change it,right?

The second part is not relevant now, but the first part is still
relevant, due to the check in restrack.c.

  131         case RDMA_RESTRACK_QP:
  132                 pd = container_of(res, struct ib_qp, res)->pd;
  133                 if (!pd) {
  134                         WARN_ONCE(true, "XRC QPs are not supported\n");
  135                         /* Survive, despite the programmer's error */
  136                         res->kern_name = " ";
  137                 }
  138                 break;


The reason for it is that "regular" QPs have the name of their "creator"
stored inside the PD, which doesn't exist for XRC. It is possible to change
this and make a special case for XRC, but all the places that touch
"kern_name" would need to be audited.

It is on my roadmap for after the allocation work is finished and we
introduce proper reference counting for the QPs.

Thanks

>


* Re: [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain
  2020-06-23 11:15 ` [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain Leon Romanovsky
@ 2020-07-02 18:27   ` Jason Gunthorpe
  2020-07-03  6:25     ` Leon Romanovsky
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2020-07-02 18:27 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Tue, Jun 23, 2020 at 02:15:30PM +0300, Leon Romanovsky wrote:
> From: Maor Gottlieb <maorg@mellanox.com>
> 
> ib_alloc_xrcd already does the required initialization, so move
> the mlx5 driver and uverbs to call it and save some code duplication,
> while cleaning the function argument lists of that function.
> 
> Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>  drivers/infiniband/core/uverbs_cmd.c | 12 +++---------
>  drivers/infiniband/core/verbs.c      | 19 +++++++++++++------
>  drivers/infiniband/hw/mlx5/main.c    | 24 ++++++++----------------
>  include/rdma/ib_verbs.h              | 22 ++++++++++++----------
>  4 files changed, 36 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
> index 557644dcc923..68c9a0210220 100644
> +++ b/drivers/infiniband/core/uverbs_cmd.c
> @@ -614,17 +614,11 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
>  	}
>  
>  	if (!xrcd) {
> -		xrcd = ib_dev->ops.alloc_xrcd(ib_dev, &attrs->driver_udata);
> +		xrcd = ib_alloc_xrcd_user(ib_dev, inode, &attrs->driver_udata);
>  		if (IS_ERR(xrcd)) {
>  			ret = PTR_ERR(xrcd);
>  			goto err;
>  		}
> -
> -		xrcd->inode   = inode;
> -		xrcd->device  = ib_dev;
> -		atomic_set(&xrcd->usecnt, 0);
> -		mutex_init(&xrcd->tgt_qp_mutex);
> -		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
>  		new_xrcd = 1;
>  	}
>  
> @@ -663,7 +657,7 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
>  	}
>  
>  err_dealloc_xrcd:
> -	ib_dealloc_xrcd(xrcd, uverbs_get_cleared_udata(attrs));
> +	ib_dealloc_xrcd_user(xrcd, uverbs_get_cleared_udata(attrs));
>  
>  err:
>  	uobj_alloc_abort(&obj->uobject, attrs);
> @@ -701,7 +695,7 @@ int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, struct ib_xrcd *xrcd,
>  	if (inode && !atomic_dec_and_test(&xrcd->usecnt))
>  		return 0;
>  
> -	ret = ib_dealloc_xrcd(xrcd, &attrs->driver_udata);
> +	ret = ib_dealloc_xrcd_user(xrcd, &attrs->driver_udata);
>  
>  	if (ib_is_destroy_retryable(ret, why, uobject)) {
>  		atomic_inc(&xrcd->usecnt);
> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> index d70771caf534..d66a0ad62077 100644
> +++ b/drivers/infiniband/core/verbs.c
> @@ -2289,17 +2289,24 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
>  }
>  EXPORT_SYMBOL(ib_detach_mcast);
>  
> -struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
> +/**
> + * ib_alloc_xrcd_user - Allocates an XRC domain.
> + * @device: The device on which to allocate the XRC domain.
> + * @inode: inode to connect XRCD
> + * @udata: Valid user data or NULL for kernel object
> + */
> +struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
> +				   struct inode *inode, struct ib_udata *udata)
>  {
>  	struct ib_xrcd *xrcd;
>  
>  	if (!device->ops.alloc_xrcd)
>  		return ERR_PTR(-EOPNOTSUPP);
>  
> -	xrcd = device->ops.alloc_xrcd(device, NULL);
> +	xrcd = device->ops.alloc_xrcd(device, udata);
>  	if (!IS_ERR(xrcd)) {
>  		xrcd->device = device;
> -		xrcd->inode = NULL;
> +		xrcd->inode = inode;
>  		atomic_set(&xrcd->usecnt, 0);
>  		mutex_init(&xrcd->tgt_qp_mutex);
>  		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
> @@ -2307,9 +2314,9 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
>  
>  	return xrcd;
>  }
> -EXPORT_SYMBOL(__ib_alloc_xrcd);
> +EXPORT_SYMBOL(ib_alloc_xrcd_user);
>  
> -int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
> +int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata)
>  {
>  	struct ib_qp *qp;
>  	int ret;
> @@ -2327,7 +2334,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
>  
>  	return xrcd->device->ops.dealloc_xrcd(xrcd, udata);
>  }
> -EXPORT_SYMBOL(ib_dealloc_xrcd);
> +EXPORT_SYMBOL(ib_dealloc_xrcd_user);
>  
>  /**
>   * ib_create_wq - Creates a WQ associated with the specified protection
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 47a0c091eea5..46c596a855e7 100644
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -5043,27 +5043,17 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
>  	if (ret)
>  		goto err_create_cq;
>  
> -	devr->x0 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
> +	devr->x0 = ib_alloc_xrcd(&dev->ib_dev);
>  	if (IS_ERR(devr->x0)) {
>  		ret = PTR_ERR(devr->x0);
>  		goto error2;
>  	}
> -	devr->x0->device = &dev->ib_dev;
> -	devr->x0->inode = NULL;
> -	atomic_set(&devr->x0->usecnt, 0);
> -	mutex_init(&devr->x0->tgt_qp_mutex);
> -	INIT_LIST_HEAD(&devr->x0->tgt_qp_list);
>  
> -	devr->x1 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
> +	devr->x1 = ib_alloc_xrcd(&dev->ib_dev);
>  	if (IS_ERR(devr->x1)) {
>  		ret = PTR_ERR(devr->x1);
>  		goto error3;
>  	}
> -	devr->x1->device = &dev->ib_dev;
> -	devr->x1->inode = NULL;
> -	atomic_set(&devr->x1->usecnt, 0);
> -	mutex_init(&devr->x1->tgt_qp_mutex);
> -	INIT_LIST_HEAD(&devr->x1->tgt_qp_list);
>  
>  	memset(&attr, 0, sizeof(attr));
>  	attr.attr.max_sge = 1;
> @@ -5125,13 +5115,14 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
>  error6:
>  	kfree(devr->s1);
>  error5:
> +	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
>  	mlx5_ib_destroy_srq(devr->s0, NULL);
>  err_create:
>  	kfree(devr->s0);
>  error4:
> -	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
> +	ib_dealloc_xrcd(devr->x1);
>  error3:
> -	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
> +	ib_dealloc_xrcd(devr->x0);
>  error2:
>  	mlx5_ib_destroy_cq(devr->c0, NULL);
>  err_create_cq:
> @@ -5149,10 +5140,11 @@ static void destroy_dev_resources(struct mlx5_ib_resources *devr)
>  
>  	mlx5_ib_destroy_srq(devr->s1, NULL);
>  	kfree(devr->s1);
> +	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
>  	mlx5_ib_destroy_srq(devr->s0, NULL);
>  	kfree(devr->s0);
> -	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
> -	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
> +	ib_dealloc_xrcd(devr->x0);
> +	ib_dealloc_xrcd(devr->x1);

Why is this an improvement? Whatever this internal driver thing is, it
is not a visible XRCD.

In fact, why use an ib_xrcd here at all when this only needs the
xrcdn? Just call the mlx5_cmd_xrcd_* helpers directly.

> +struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
> +				   struct inode *inode, struct ib_udata *udata);
> +static inline struct ib_xrcd *ib_alloc_xrcd(struct ib_device *device)
> +{
> +	return ib_alloc_xrcd_user(device, NULL, NULL);
> +}

Because other than the above there is no in-kernel user of XRCD and
this can all be deleted, the uverbs_cmd can directly create the xrcd
and call the driver like for other non-kernel objects.

Jason


* Re: [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain
  2020-07-02 18:27   ` Jason Gunthorpe
@ 2020-07-03  6:25     ` Leon Romanovsky
  2020-07-03 12:00       ` Jason Gunthorpe
  0 siblings, 1 reply; 13+ messages in thread
From: Leon Romanovsky @ 2020-07-03  6:25 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Thu, Jul 02, 2020 at 03:27:24PM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 23, 2020 at 02:15:30PM +0300, Leon Romanovsky wrote:
> > From: Maor Gottlieb <maorg@mellanox.com>
> >
> > ib_alloc_xrcd already does the required initialization, so move
> > the mlx5 driver and uverbs to call it and save some code duplication,
> > while cleaning the function argument lists of that function.
> >
> > Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
> > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> >  drivers/infiniband/core/uverbs_cmd.c | 12 +++---------
> >  drivers/infiniband/core/verbs.c      | 19 +++++++++++++------
> >  drivers/infiniband/hw/mlx5/main.c    | 24 ++++++++----------------
> >  include/rdma/ib_verbs.h              | 22 ++++++++++++----------
> >  4 files changed, 36 insertions(+), 41 deletions(-)
> >
> > diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
> > index 557644dcc923..68c9a0210220 100644
> > +++ b/drivers/infiniband/core/uverbs_cmd.c
> > @@ -614,17 +614,11 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
> >  	}
> >
> >  	if (!xrcd) {
> > -		xrcd = ib_dev->ops.alloc_xrcd(ib_dev, &attrs->driver_udata);
> > +		xrcd = ib_alloc_xrcd_user(ib_dev, inode, &attrs->driver_udata);
> >  		if (IS_ERR(xrcd)) {
> >  			ret = PTR_ERR(xrcd);
> >  			goto err;
> >  		}
> > -
> > -		xrcd->inode   = inode;
> > -		xrcd->device  = ib_dev;
> > -		atomic_set(&xrcd->usecnt, 0);
> > -		mutex_init(&xrcd->tgt_qp_mutex);
> > -		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
> >  		new_xrcd = 1;
> >  	}
> >
> > @@ -663,7 +657,7 @@ static int ib_uverbs_open_xrcd(struct uverbs_attr_bundle *attrs)
> >  	}
> >
> >  err_dealloc_xrcd:
> > -	ib_dealloc_xrcd(xrcd, uverbs_get_cleared_udata(attrs));
> > +	ib_dealloc_xrcd_user(xrcd, uverbs_get_cleared_udata(attrs));
> >
> >  err:
> >  	uobj_alloc_abort(&obj->uobject, attrs);
> > @@ -701,7 +695,7 @@ int ib_uverbs_dealloc_xrcd(struct ib_uobject *uobject, struct ib_xrcd *xrcd,
> >  	if (inode && !atomic_dec_and_test(&xrcd->usecnt))
> >  		return 0;
> >
> > -	ret = ib_dealloc_xrcd(xrcd, &attrs->driver_udata);
> > +	ret = ib_dealloc_xrcd_user(xrcd, &attrs->driver_udata);
> >
> >  	if (ib_is_destroy_retryable(ret, why, uobject)) {
> >  		atomic_inc(&xrcd->usecnt);
> > diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> > index d70771caf534..d66a0ad62077 100644
> > +++ b/drivers/infiniband/core/verbs.c
> > @@ -2289,17 +2289,24 @@ int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
> >  }
> >  EXPORT_SYMBOL(ib_detach_mcast);
> >
> > -struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
> > +/**
> > + * ib_alloc_xrcd_user - Allocates an XRC domain.
> > + * @device: The device on which to allocate the XRC domain.
> > + * @inode: inode to connect XRCD
> > + * @udata: Valid user data or NULL for kernel object
> > + */
> > +struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
> > +				   struct inode *inode, struct ib_udata *udata)
> >  {
> >  	struct ib_xrcd *xrcd;
> >
> >  	if (!device->ops.alloc_xrcd)
> >  		return ERR_PTR(-EOPNOTSUPP);
> >
> > -	xrcd = device->ops.alloc_xrcd(device, NULL);
> > +	xrcd = device->ops.alloc_xrcd(device, udata);
> >  	if (!IS_ERR(xrcd)) {
> >  		xrcd->device = device;
> > -		xrcd->inode = NULL;
> > +		xrcd->inode = inode;
> >  		atomic_set(&xrcd->usecnt, 0);
> >  		mutex_init(&xrcd->tgt_qp_mutex);
> >  		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
> > @@ -2307,9 +2314,9 @@ struct ib_xrcd *__ib_alloc_xrcd(struct ib_device *device, const char *caller)
> >
> >  	return xrcd;
> >  }
> > -EXPORT_SYMBOL(__ib_alloc_xrcd);
> > +EXPORT_SYMBOL(ib_alloc_xrcd_user);
> >
> > -int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
> > +int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata)
> >  {
> >  	struct ib_qp *qp;
> >  	int ret;
> > @@ -2327,7 +2334,7 @@ int ib_dealloc_xrcd(struct ib_xrcd *xrcd, struct ib_udata *udata)
> >
> >  	return xrcd->device->ops.dealloc_xrcd(xrcd, udata);
> >  }
> > -EXPORT_SYMBOL(ib_dealloc_xrcd);
> > +EXPORT_SYMBOL(ib_dealloc_xrcd_user);
> >
> >  /**
> >   * ib_create_wq - Creates a WQ associated with the specified protection
> > diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> > index 47a0c091eea5..46c596a855e7 100644
> > +++ b/drivers/infiniband/hw/mlx5/main.c
> > @@ -5043,27 +5043,17 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
> >  	if (ret)
> >  		goto err_create_cq;
> >
> > -	devr->x0 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
> > +	devr->x0 = ib_alloc_xrcd(&dev->ib_dev);
> >  	if (IS_ERR(devr->x0)) {
> >  		ret = PTR_ERR(devr->x0);
> >  		goto error2;
> >  	}
> > -	devr->x0->device = &dev->ib_dev;
> > -	devr->x0->inode = NULL;
> > -	atomic_set(&devr->x0->usecnt, 0);
> > -	mutex_init(&devr->x0->tgt_qp_mutex);
> > -	INIT_LIST_HEAD(&devr->x0->tgt_qp_list);
> >
> > -	devr->x1 = mlx5_ib_alloc_xrcd(&dev->ib_dev, NULL);
> > +	devr->x1 = ib_alloc_xrcd(&dev->ib_dev);
> >  	if (IS_ERR(devr->x1)) {
> >  		ret = PTR_ERR(devr->x1);
> >  		goto error3;
> >  	}
> > -	devr->x1->device = &dev->ib_dev;
> > -	devr->x1->inode = NULL;
> > -	atomic_set(&devr->x1->usecnt, 0);
> > -	mutex_init(&devr->x1->tgt_qp_mutex);
> > -	INIT_LIST_HEAD(&devr->x1->tgt_qp_list);
> >
> >  	memset(&attr, 0, sizeof(attr));
> >  	attr.attr.max_sge = 1;
> > @@ -5125,13 +5115,14 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
> >  error6:
> >  	kfree(devr->s1);
> >  error5:
> > +	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
> >  	mlx5_ib_destroy_srq(devr->s0, NULL);
> >  err_create:
> >  	kfree(devr->s0);
> >  error4:
> > -	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
> > +	ib_dealloc_xrcd(devr->x1);
> >  error3:
> > -	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
> > +	ib_dealloc_xrcd(devr->x0);
> >  error2:
> >  	mlx5_ib_destroy_cq(devr->c0, NULL);
> >  err_create_cq:
> > @@ -5149,10 +5140,11 @@ static void destroy_dev_resources(struct mlx5_ib_resources *devr)
> >
> >  	mlx5_ib_destroy_srq(devr->s1, NULL);
> >  	kfree(devr->s1);
> > +	atomic_dec(&devr->s0->ext.xrc.xrcd->usecnt);
> >  	mlx5_ib_destroy_srq(devr->s0, NULL);
> >  	kfree(devr->s0);
> > -	mlx5_ib_dealloc_xrcd(devr->x0, NULL);
> > -	mlx5_ib_dealloc_xrcd(devr->x1, NULL);
> > +	ib_dealloc_xrcd(devr->x0);
> > +	ib_dealloc_xrcd(devr->x1);
>
> Why is this an improvement? Whatever this internal driver thing is, it
> is not a visible XRCD..
>
> In fact why use an ib_xrcd here at all when this only needs the
> xrcdn? Just call the mlx_cmd_xrcd_* directly.

This is a proper IB object and IMHO it should be created with the standard
primitives, so we can account for these objects properly and see the full
picture of HW objects without having to combine pieces from the driver and
ib_core.

The old code hardcoded the same initialization that ib_core does for an
XRCD; using the core helper is the right way to do it, instead of doing
half of the work as you are proposing.

At some point, XRCD will be visible in rdmatool too and we will be able to
RAW-query even internal driver objects, because they are standard ones.

Maybe, one day, we will be able to move mlx5_ib_handle_internal_error() to
generic code.

>
> > +struct ib_xrcd *ib_alloc_xrcd_user(struct ib_device *device,
> > +				   struct inode *inode, struct ib_udata *udata);
> > +static inline struct ib_xrcd *ib_alloc_xrcd(struct ib_device *device)
> > +{
> > +	return ib_alloc_xrcd_user(device, NULL, NULL);
> > +}
>
> Because other than the above there is no in-kernel user of XRCD and
> this can all be deleted, the uverbs_cmd can directly create the xrcd
> and call the driver like for other non-kernel objects.

I can call ib_alloc_xrcd_user() directly from mlx5, but I still prefer
to use the ib_core primitives as much as possible.

Thanks

>
> Jason


* Re: [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain
  2020-07-03  6:25     ` Leon Romanovsky
@ 2020-07-03 12:00       ` Jason Gunthorpe
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Gunthorpe @ 2020-07-03 12:00 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Maor Gottlieb, linux-rdma

On Fri, Jul 03, 2020 at 09:25:12AM +0300, Leon Romanovsky wrote:
> > Why is this an improvement? Whatever this internal driver thing is, it
> > is not a visible XRCD..
> >
> > In fact why use an ib_xrcd here at all when this only needs the
> > xrcdn? Just call the mlx_cmd_xrcd_* directly.
> 
> This is proper IB object and IMHO it should be created with standard primitives,
> so we will be able account them properly and see full HW objects picture without
> need to go and combine pieces from driver and ib_core.

I'm not sure it is a proper IB object; it is some weird driver-internal
thing, and I couldn't guess what it is being used for. Why are user QPs
being associated with a driver-internal XRCD?

The key thing here is that it is never actually used with any other core
API expecting an xrcd; only the driver-specific xrcdn is extracted and
used in a few places.

Further, it doesn't even act like a core xrcd: QPs attached to it are not
recorded in the lists, the refcounts are not incremented, etc.

So even if you did expose it over rdmatool, the whole thing would be an
inconsistent mess that doesn't reflect the expected configuration of a
real xrcd.

Jason


Thread overview: 13+ messages
2020-06-23 11:15 [PATCH rdma-next v1 0/2] Convert XRC to use xarray Leon Romanovsky
2020-06-23 11:15 ` [PATCH rdma-next v1 1/2] RDMA: Clean ib_alloc_xrcd() and reuse it to allocate XRC domain Leon Romanovsky
2020-07-02 18:27   ` Jason Gunthorpe
2020-07-03  6:25     ` Leon Romanovsky
2020-07-03 12:00       ` Jason Gunthorpe
2020-06-23 11:15 ` [PATCH rdma-next v1 2/2] RDMA/core: Optimize XRC target lookup Leon Romanovsky
2020-06-23 17:52   ` Jason Gunthorpe
2020-06-23 18:15     ` Leon Romanovsky
2020-06-23 18:49       ` Jason Gunthorpe
2020-06-24 10:42         ` Maor Gottlieb
2020-06-24 14:00           ` Jason Gunthorpe
2020-06-24 14:48             ` Maor Gottlieb
2020-06-25  8:26               ` Leon Romanovsky
