linux-rdma.vger.kernel.org archive mirror
* [PATCH for-next 0/7] Replace red-black tree by xarray
@ 2021-10-20 22:05 Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks Bob Pearson
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

This series of patches starts with some minor cleanups in the rxe pools
and then replaces the red-black trees used to look up rxe objects by
index with xarrays. The changeover is made in small steps: first adding
xarrays, then converting the red-black trees to xarrays, and finally
deleting the original index code and renaming the xarray code to take
its place.
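
For reference, the xarray calls the final pool code builds on look
roughly like the sketch below. This is an illustrative sketch only,
with placeholder names and index limits, not code from the series:

	struct xarray xa;
	u32 next;

	xa_init_flags(&xa, XA_FLAGS_ALLOC);	/* allocating xarray */

	/* hand out the next free index in [min, max] and store elem there */
	err = xa_alloc_cyclic(&xa, &elem->index, elem, XA_LIMIT(min, max),
			      &next, GFP_KERNEL);

	obj = xa_load(&xa, index);	/* index lookup replaces the rb-tree walk */
	xa_erase(&xa, index);		/* drop the index when the object goes away */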

These patches apply cleanly to the current for-next branch after the
"Correct race conditions in rdma_rxe" patch series
	commit ed0d7445ed4651f4171c38d9446dc43c823ad32b
is applied to
	commit ac0fffa0859b8e1e991939663b3ebdd80bf979e6 (origin/for-next)

Bob Pearson (7):
  RDMA/rxe: Replace irqsave locks with bh locks
  RDMA/rxe: Cleanup rxe_pool_entry
  RDMA/rxe: Add xarray support to rxe_pool.c
  RDMA/rxe: Replace pool_lock by xa_lock
  RDMA/rxe: Convert remaining pools to xarrays
  RDMA/rxe: Remove old index code from rxe_pool.c
  RDMA/rxe: Rename XARRAY as INDEX

 drivers/infiniband/sw/rxe/rxe.c       |  86 ++-------
 drivers/infiniband/sw/rxe/rxe_comp.c  |   8 +-
 drivers/infiniband/sw/rxe/rxe_cq.c    |  23 +--
 drivers/infiniband/sw/rxe/rxe_loc.h   |  10 +-
 drivers/infiniband/sw/rxe/rxe_mcast.c |  10 +-
 drivers/infiniband/sw/rxe/rxe_mr.c    |   6 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  19 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 266 +++++++-------------------
 drivers/infiniband/sw/rxe/rxe_pool.h  |  53 +++--
 drivers/infiniband/sw/rxe/rxe_qp.c    |   6 +-
 drivers/infiniband/sw/rxe/rxe_queue.c |   9 +-
 drivers/infiniband/sw/rxe/rxe_req.c   |  11 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   |   6 +-
 drivers/infiniband/sw/rxe/rxe_task.c  |  18 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c |  29 ++-
 drivers/infiniband/sw/rxe/rxe_verbs.h |  22 +--
 16 files changed, 185 insertions(+), 397 deletions(-)

-- 
2.30.2



* [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 2/7] RDMA/rxe: Cleanup rxe_pool_entry Bob Pearson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Most of the locks in the rxe driver are _irqsave/restore locks, but in
fact no interrupt threads run rxe code or share data with rxe. There
are softirq threads and data sharing, so the appropriate lock type is
_bh. This patch replaces all irqsave-type locks with bh-type locks.
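
The conversion follows the same pattern everywhere, for example on
qp->state_lock (as in the complete_ack() hunk below):

	/* before: saves and restores IRQ state that rxe never contends with */
	unsigned long flags;

	spin_lock_irqsave(&qp->state_lock, flags);
	...
	spin_unlock_irqrestore(&qp->state_lock, flags);

	/* after: only bottom halves are disabled, matching the softirq users */
	spin_lock_bh(&qp->state_lock);
	...
	spin_unlock_bh(&qp->state_lock);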

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |  8 ++---
 drivers/infiniband/sw/rxe/rxe_cq.c    | 19 +++++-------
 drivers/infiniband/sw/rxe/rxe_mcast.c | 10 +++----
 drivers/infiniband/sw/rxe/rxe_mw.c    | 15 ++++------
 drivers/infiniband/sw/rxe/rxe_pool.c  | 42 +++++++++++----------------
 drivers/infiniband/sw/rxe/rxe_queue.c |  9 +++---
 drivers/infiniband/sw/rxe/rxe_req.c   | 11 +++----
 drivers/infiniband/sw/rxe/rxe_task.c  | 18 +++++-------
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++----------
 9 files changed, 65 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index d771ba8449a1..f363fe3fa414 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -458,8 +458,6 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp,
 					   struct rxe_pkt_info *pkt,
 					   struct rxe_send_wqe *wqe)
 {
-	unsigned long flags;
-
 	if (wqe->has_rd_atomic) {
 		wqe->has_rd_atomic = 0;
 		atomic_inc(&qp->req.rd_atomic);
@@ -472,11 +470,11 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp,
 
 	if (unlikely(qp->req.state == QP_STATE_DRAIN)) {
 		/* state_lock used by requester & completer */
-		spin_lock_irqsave(&qp->state_lock, flags);
+		spin_lock_bh(&qp->state_lock);
 		if ((qp->req.state == QP_STATE_DRAIN) &&
 		    (qp->comp.psn == qp->req.psn)) {
 			qp->req.state = QP_STATE_DRAINED;
-			spin_unlock_irqrestore(&qp->state_lock, flags);
+			spin_unlock_bh(&qp->state_lock);
 
 			if (qp->ibqp.event_handler) {
 				struct ib_event ev;
@@ -488,7 +486,7 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp,
 					qp->ibqp.qp_context);
 			}
 		} else {
-			spin_unlock_irqrestore(&qp->state_lock, flags);
+			spin_unlock_bh(&qp->state_lock);
 		}
 	}
 
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index 0c05d612ae63..dda510e4d904 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -42,14 +42,13 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
 static void rxe_send_complete(struct tasklet_struct *t)
 {
 	struct rxe_cq *cq = from_tasklet(cq, t, comp_task);
-	unsigned long flags;
 
-	spin_lock_irqsave(&cq->cq_lock, flags);
+	spin_lock_bh(&cq->cq_lock);
 	if (cq->is_dying) {
-		spin_unlock_irqrestore(&cq->cq_lock, flags);
+		spin_unlock_bh(&cq->cq_lock);
 		return;
 	}
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	spin_unlock_bh(&cq->cq_lock);
 
 	cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
 }
@@ -106,15 +105,14 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe,
 int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 {
 	struct ib_event ev;
-	unsigned long flags;
 	int full;
 	void *addr;
 
-	spin_lock_irqsave(&cq->cq_lock, flags);
+	spin_lock_bh(&cq->cq_lock);
 
 	full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
 	if (unlikely(full)) {
-		spin_unlock_irqrestore(&cq->cq_lock, flags);
+		spin_unlock_bh(&cq->cq_lock);
 		if (cq->ibcq.event_handler) {
 			ev.device = cq->ibcq.device;
 			ev.element.cq = &cq->ibcq;
@@ -130,7 +128,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 
 	queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);
 
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	spin_unlock_bh(&cq->cq_lock);
 
 	if ((cq->notify == IB_CQ_NEXT_COMP) ||
 	    (cq->notify == IB_CQ_SOLICITED && solicited)) {
@@ -144,12 +142,11 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 void rxe_cq_cleanup(struct rxe_pool_entry *arg)
 {
 	struct rxe_cq *cq = container_of(arg, typeof(*cq), pelem);
-	unsigned long flags;
 
 	/* TODO get rid of this */
-	spin_lock_irqsave(&cq->cq_lock, flags);
+	spin_lock_bh(&cq->cq_lock);
 	cq->is_dying = true;
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	spin_unlock_bh(&cq->cq_lock);
 
 	if (cq->queue)
 		rxe_queue_cleanup(cq->queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 685440a20669..a05487ca628e 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -33,14 +33,13 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 {
 	struct rxe_pool *pool = &rxe->mc_grp_pool;
 	struct rxe_mc_grp *grp;
-	unsigned long flags;
 	int err = 0;
 
 	/* Perform this while holding the mc_grp_pool lock
 	 * to prevent races where two coincident calls fail to lookup the
 	 * same group and then both create the same group.
 	 */
-	write_lock_irqsave(&pool->pool_lock, flags);
+	write_lock_bh(&pool->pool_lock);
 	grp = rxe_pool_get_key_locked(pool, mgid);
 	if (grp)
 		goto done;
@@ -66,7 +65,7 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid,
 	rxe_add_ref_locked(grp);
 done:
 	*grp_p = grp;
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 
 	return err;
 }
@@ -75,9 +74,8 @@ static void rxe_mcast_put_grp(struct rxe_mc_grp *grp)
 {
 	struct rxe_dev *rxe = grp->rxe;
 	struct rxe_pool *pool = &rxe->mc_grp_pool;
-	unsigned long flags;
 
-	write_lock_irqsave(&pool->pool_lock, flags);
+	write_lock_bh(&pool->pool_lock);
 
 	rxe_drop_ref_locked(grp);
 
@@ -86,7 +84,7 @@ static void rxe_mcast_put_grp(struct rxe_mc_grp *grp)
 		rxe_fini_ref_locked(grp);
 	}
 
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 }
 
 /**
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 599699f93332..7c264599b3d4 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -55,12 +55,11 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
 	struct rxe_pd *pd = to_rpd(ibmw->pd);
-	unsigned long flags;
 	int err;
 
-	spin_lock_irqsave(&mw->lock, flags);
+	spin_lock_bh(&mw->lock);
 	rxe_do_dealloc_mw(mw);
-	spin_unlock_irqrestore(&mw->lock, flags);
+	spin_unlock_bh(&mw->lock);
 
 	err = rxe_fini_ref(mw);
 	if (err)
@@ -199,7 +198,6 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	u32 mw_rkey = wqe->wr.wr.mw.mw_rkey;
 	u32 mr_lkey = wqe->wr.wr.mw.mr_lkey;
-	unsigned long flags;
 
 	mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8);
 	if (unlikely(!mw)) {
@@ -226,7 +224,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		}
 	}
 
-	spin_lock_irqsave(&mw->lock, flags);
+	spin_lock_bh(&mw->lock);
 	ret = rxe_check_bind_mw(qp, wqe, mw, mr);
 	if (ret) {
 		if (mr)
@@ -236,7 +234,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 	rxe_do_bind_mw(qp, wqe, mw, mr);
 err_unlock:
-	spin_unlock_irqrestore(&mw->lock, flags);
+	spin_unlock_bh(&mw->lock);
 err_drop_mw:
 	rxe_drop_ref(mw);
 err:
@@ -280,7 +278,6 @@ static void rxe_do_invalidate_mw(struct rxe_mw *mw)
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	unsigned long flags;
 	struct rxe_mw *mw;
 	int ret;
 
@@ -295,7 +292,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 		goto err_drop_ref;
 	}
 
-	spin_lock_irqsave(&mw->lock, flags);
+	spin_lock_bh(&mw->lock);
 
 	ret = rxe_check_invalidate_mw(qp, mw);
 	if (ret)
@@ -303,7 +300,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 
 	rxe_do_invalidate_mw(mw);
 err_unlock:
-	spin_unlock_irqrestore(&mw->lock, flags);
+	spin_unlock_bh(&mw->lock);
 err_drop_ref:
 	rxe_drop_ref(mw);
 err:
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 59f1a1919e30..58f826ab3bc6 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -342,34 +342,31 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
 
 void *rxe_alloc(struct rxe_pool *pool)
 {
-	unsigned long flags;
 	void *obj;
 
-	write_lock_irqsave(&pool->pool_lock, flags);
+	write_lock_bh(&pool->pool_lock);
 	obj = rxe_alloc_locked(pool);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 
 	return obj;
 }
 
 void *rxe_alloc_with_key(struct rxe_pool *pool, void *key)
 {
-	unsigned long flags;
 	void *obj;
 
-	write_lock_irqsave(&pool->pool_lock, flags);
+	write_lock_bh(&pool->pool_lock);
 	obj = rxe_alloc_with_key_locked(pool, key);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 
 	return obj;
 }
 
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 {
-	unsigned long flags;
 	int err;
 
-	write_lock_irqsave(&pool->pool_lock, flags);
+	write_lock_bh(&pool->pool_lock);
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto out_cnt;
 
@@ -383,13 +380,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 	}
 
 	refcount_set(&elem->refcnt, 1);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 
 	return 0;
 
 out_cnt:
 	atomic_dec(&pool->num_elem);
-	write_unlock_irqrestore(&pool->pool_lock, flags);
+	write_unlock_bh(&pool->pool_lock);
 	return -EINVAL;
 }
 
@@ -421,11 +418,10 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	void *obj;
-	unsigned long flags;
 
-	read_lock_irqsave(&pool->pool_lock, flags);
+	read_lock_bh(&pool->pool_lock);
 	obj = rxe_pool_get_index_locked(pool, index);
-	read_unlock_irqrestore(&pool->pool_lock, flags);
+	read_unlock_bh(&pool->pool_lock);
 
 	return obj;
 }
@@ -462,11 +458,10 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 void *rxe_pool_get_key(struct rxe_pool *pool, void *key)
 {
 	void *obj;
-	unsigned long flags;
 
-	read_lock_irqsave(&pool->pool_lock, flags);
+	read_lock_bh(&pool->pool_lock);
 	obj = rxe_pool_get_key_locked(pool, key);
-	read_unlock_irqrestore(&pool->pool_lock, flags);
+	read_unlock_bh(&pool->pool_lock);
 
 	return obj;
 }
@@ -485,12 +480,11 @@ int __rxe_add_ref_locked(struct rxe_pool_entry *elem)
 int __rxe_add_ref(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
 	int ret;
 
-	read_lock_irqsave(&pool->pool_lock, flags);
+	read_lock_bh(&pool->pool_lock);
 	ret = __rxe_add_ref_locked(elem);
-	read_unlock_irqrestore(&pool->pool_lock, flags);
+	read_unlock_bh(&pool->pool_lock);
 
 	return ret;
 }
@@ -509,12 +503,11 @@ int __rxe_drop_ref_locked(struct rxe_pool_entry *elem)
 int __rxe_drop_ref(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
 	int ret;
 
-	read_lock_irqsave(&pool->pool_lock, flags);
+	read_lock_bh(&pool->pool_lock);
 	ret = __rxe_drop_ref_locked(elem);
-	read_unlock_irqrestore(&pool->pool_lock, flags);
+	read_unlock_bh(&pool->pool_lock);
 
 	return ret;
 }
@@ -560,12 +553,11 @@ int __rxe_fini_ref_locked(struct rxe_pool_entry *elem)
 int __rxe_fini_ref(struct rxe_pool_entry *elem)
 {
 	struct rxe_pool *pool = elem->pool;
-	unsigned long flags;
 	int ret;
 
-	read_lock_irqsave(&pool->pool_lock, flags);
+	read_lock_bh(&pool->pool_lock);
 	ret = __rxe_fini(elem);
-	read_unlock_irqrestore(&pool->pool_lock, flags);
+	read_unlock_bh(&pool->pool_lock);
 
 	if (!ret) {
 		if (pool->cleanup)
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
index 6e6e023c1b45..a1b283dd2d4c 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.c
+++ b/drivers/infiniband/sw/rxe/rxe_queue.c
@@ -151,7 +151,6 @@ int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p,
 	struct rxe_queue *new_q;
 	unsigned int num_elem = *num_elem_p;
 	int err;
-	unsigned long flags = 0, flags1;
 
 	new_q = rxe_queue_init(q->rxe, &num_elem, elem_size, q->type);
 	if (!new_q)
@@ -165,17 +164,17 @@ int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p,
 		goto err1;
 	}
 
-	spin_lock_irqsave(consumer_lock, flags1);
+	spin_lock_bh(consumer_lock);
 
 	if (producer_lock) {
-		spin_lock_irqsave(producer_lock, flags);
+		spin_lock_bh(producer_lock);
 		err = resize_finish(q, new_q, num_elem);
-		spin_unlock_irqrestore(producer_lock, flags);
+		spin_unlock_bh(producer_lock);
 	} else {
 		err = resize_finish(q, new_q, num_elem);
 	}
 
-	spin_unlock_irqrestore(consumer_lock, flags1);
+	spin_unlock_bh(consumer_lock);
 
 	rxe_queue_cleanup(new_q);	/* new/old dep on err */
 	if (err)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 891cf98c74a0..7bc1ec8a5aa6 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -110,7 +110,6 @@ void rnr_nak_timer(struct timer_list *t)
 static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp)
 {
 	struct rxe_send_wqe *wqe;
-	unsigned long flags;
 	struct rxe_queue *q = qp->sq.queue;
 	unsigned int index = qp->req.wqe_index;
 	unsigned int cons;
@@ -124,25 +123,23 @@ static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp)
 		/* check to see if we are drained;
 		 * state_lock used by requester and completer
 		 */
-		spin_lock_irqsave(&qp->state_lock, flags);
+		spin_lock_bh(&qp->state_lock);
 		do {
 			if (qp->req.state != QP_STATE_DRAIN) {
 				/* comp just finished */
-				spin_unlock_irqrestore(&qp->state_lock,
-						       flags);
+				spin_unlock_bh(&qp->state_lock);
 				break;
 			}
 
 			if (wqe && ((index != cons) ||
 				(wqe->state != wqe_state_posted))) {
 				/* comp not done yet */
-				spin_unlock_irqrestore(&qp->state_lock,
-						       flags);
+				spin_unlock_bh(&qp->state_lock);
 				break;
 			}
 
 			qp->req.state = QP_STATE_DRAINED;
-			spin_unlock_irqrestore(&qp->state_lock, flags);
+			spin_unlock_bh(&qp->state_lock);
 
 			if (qp->ibqp.event_handler) {
 				struct ib_event ev;
diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
index 6951fdcb31bf..0c4db5bb17d7 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.c
+++ b/drivers/infiniband/sw/rxe/rxe_task.c
@@ -32,25 +32,24 @@ void rxe_do_task(struct tasklet_struct *t)
 {
 	int cont;
 	int ret;
-	unsigned long flags;
 	struct rxe_task *task = from_tasklet(task, t, tasklet);
 
-	spin_lock_irqsave(&task->state_lock, flags);
+	spin_lock_bh(&task->state_lock);
 	switch (task->state) {
 	case TASK_STATE_START:
 		task->state = TASK_STATE_BUSY;
-		spin_unlock_irqrestore(&task->state_lock, flags);
+		spin_unlock_bh(&task->state_lock);
 		break;
 
 	case TASK_STATE_BUSY:
 		task->state = TASK_STATE_ARMED;
 		fallthrough;
 	case TASK_STATE_ARMED:
-		spin_unlock_irqrestore(&task->state_lock, flags);
+		spin_unlock_bh(&task->state_lock);
 		return;
 
 	default:
-		spin_unlock_irqrestore(&task->state_lock, flags);
+		spin_unlock_bh(&task->state_lock);
 		pr_warn("%s failed with bad state %d\n", __func__, task->state);
 		return;
 	}
@@ -59,7 +58,7 @@ void rxe_do_task(struct tasklet_struct *t)
 		cont = 0;
 		ret = task->func(task->arg);
 
-		spin_lock_irqsave(&task->state_lock, flags);
+		spin_lock_bh(&task->state_lock);
 		switch (task->state) {
 		case TASK_STATE_BUSY:
 			if (ret)
@@ -81,7 +80,7 @@ void rxe_do_task(struct tasklet_struct *t)
 			pr_warn("%s failed with bad state %d\n", __func__,
 				task->state);
 		}
-		spin_unlock_irqrestore(&task->state_lock, flags);
+		spin_unlock_bh(&task->state_lock);
 	} while (cont);
 
 	task->ret = ret;
@@ -106,7 +105,6 @@ int rxe_init_task(void *obj, struct rxe_task *task,
 
 void rxe_cleanup_task(struct rxe_task *task)
 {
-	unsigned long flags;
 	bool idle;
 
 	/*
@@ -116,9 +114,9 @@ void rxe_cleanup_task(struct rxe_task *task)
 	task->destroyed = true;
 
 	do {
-		spin_lock_irqsave(&task->state_lock, flags);
+		spin_lock_bh(&task->state_lock);
 		idle = (task->state == TASK_STATE_START);
-		spin_unlock_irqrestore(&task->state_lock, flags);
+		spin_unlock_bh(&task->state_lock);
 	} while (!idle);
 
 	tasklet_kill(&task->tasklet);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 1b5084fd10ab..2b0ba33cff31 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -381,10 +381,9 @@ static int rxe_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
 			     const struct ib_recv_wr **bad_wr)
 {
 	int err = 0;
-	unsigned long flags;
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	spin_lock_irqsave(&srq->rq.producer_lock, flags);
+	spin_lock_bh(&srq->rq.producer_lock);
 
 	while (wr) {
 		err = post_one_recv(&srq->rq, wr);
@@ -393,7 +392,7 @@ static int rxe_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
 		wr = wr->next;
 	}
 
-	spin_unlock_irqrestore(&srq->rq.producer_lock, flags);
+	spin_unlock_bh(&srq->rq.producer_lock);
 
 	if (err)
 		*bad_wr = wr;
@@ -627,19 +626,18 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 	int err;
 	struct rxe_sq *sq = &qp->sq;
 	struct rxe_send_wqe *send_wqe;
-	unsigned long flags;
 	int full;
 
 	err = validate_send_wr(qp, ibwr, mask, length);
 	if (err)
 		return err;
 
-	spin_lock_irqsave(&qp->sq.sq_lock, flags);
+	spin_lock_bh(&qp->sq.sq_lock);
 
 	full = queue_full(sq->queue, QUEUE_TYPE_TO_DRIVER);
 
 	if (unlikely(full)) {
-		spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
+		spin_unlock_bh(&qp->sq.sq_lock);
 		return -ENOMEM;
 	}
 
@@ -648,7 +646,7 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 
 	queue_advance_producer(sq->queue, QUEUE_TYPE_TO_DRIVER);
 
-	spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
+	spin_unlock_bh(&qp->sq.sq_lock);
 
 	return 0;
 }
@@ -728,7 +726,6 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 	int err = 0;
 	struct rxe_qp *qp = to_rqp(ibqp);
 	struct rxe_rq *rq = &qp->rq;
-	unsigned long flags;
 
 	if (unlikely((qp_state(qp) < IB_QPS_INIT) || !qp->valid)) {
 		*bad_wr = wr;
@@ -742,7 +739,7 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 		goto err1;
 	}
 
-	spin_lock_irqsave(&rq->producer_lock, flags);
+	spin_lock_bh(&rq->producer_lock);
 
 	while (wr) {
 		err = post_one_recv(rq, wr);
@@ -753,7 +750,7 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 		wr = wr->next;
 	}
 
-	spin_unlock_irqrestore(&rq->producer_lock, flags);
+	spin_unlock_bh(&rq->producer_lock);
 
 	if (qp->resp.state == QP_STATE_ERROR)
 		rxe_run_task(&qp->resp.task, 1);
@@ -831,9 +828,8 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	int i;
 	struct rxe_cq *cq = to_rcq(ibcq);
 	struct rxe_cqe *cqe;
-	unsigned long flags;
 
-	spin_lock_irqsave(&cq->cq_lock, flags);
+	spin_lock_bh(&cq->cq_lock);
 	for (i = 0; i < num_entries; i++) {
 		cqe = queue_head(cq->queue, QUEUE_TYPE_FROM_DRIVER);
 		if (!cqe)
@@ -842,7 +838,7 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 		memcpy(wc++, &cqe->ibwc, sizeof(*wc));
 		queue_advance_consumer(cq->queue, QUEUE_TYPE_FROM_DRIVER);
 	}
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	spin_unlock_bh(&cq->cq_lock);
 
 	return i;
 }
@@ -860,11 +856,10 @@ static int rxe_peek_cq(struct ib_cq *ibcq, int wc_cnt)
 static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
-	unsigned long irq_flags;
 	int ret = 0;
 	int empty;
 
-	spin_lock_irqsave(&cq->cq_lock, irq_flags);
+	spin_lock_bh(&cq->cq_lock);
 	if (cq->notify != IB_CQ_NEXT_COMP)
 		cq->notify = flags & IB_CQ_SOLICITED_MASK;
 
@@ -873,7 +868,7 @@ static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
 	if ((flags & IB_CQ_REPORT_MISSED_EVENTS) && !empty)
 		ret = 1;
 
-	spin_unlock_irqrestore(&cq->cq_lock, irq_flags);
+	spin_unlock_bh(&cq->cq_lock);
 
 	return ret;
 }
-- 
2.30.2



* [PATCH for-next 2/7] RDMA/rxe: Cleanup rxe_pool_entry
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c Bob Pearson
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently three different names are used to describe rxe pool elements:
they are referred to as entries, elems or pelems. This patch settles on
'elem' and converts the remaining uses.
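
The pool element stays embedded in each object and the object is
recovered with container_of(), now under the single name 'elem',
e.g. (as in the rxe_cq hunks below):

	struct rxe_cq {
		struct ib_cq		ibcq;
		struct rxe_pool_elem	elem;	/* was 'pelem' */
		...
	};

	struct rxe_cq *cq = container_of(arg, typeof(*cq), elem);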

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_cq.c    |  4 +-
 drivers/infiniband/sw/rxe/rxe_loc.h   | 10 ++--
 drivers/infiniband/sw/rxe/rxe_mr.c    |  6 +--
 drivers/infiniband/sw/rxe/rxe_mw.c    |  4 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 70 +++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.h  | 38 +++++++--------
 drivers/infiniband/sw/rxe/rxe_qp.c    |  6 +--
 drivers/infiniband/sw/rxe/rxe_srq.c   |  6 +--
 drivers/infiniband/sw/rxe/rxe_verbs.c |  2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.h | 22 ++++-----
 10 files changed, 84 insertions(+), 84 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index dda510e4d904..79682a3b1357 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -139,9 +139,9 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 	return 0;
 }
 
-void rxe_cq_cleanup(struct rxe_pool_entry *arg)
+void rxe_cq_cleanup(struct rxe_pool_elem *arg)
 {
-	struct rxe_cq *cq = container_of(arg, typeof(*cq), pelem);
+	struct rxe_cq *cq = container_of(arg, typeof(*cq), elem);
 
 	/* TODO get rid of this */
 	spin_lock_bh(&cq->cq_lock);
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index a689ee8386b8..2d073dfd99a1 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -35,7 +35,7 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int new_cqe,
 
 int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
-void rxe_cq_cleanup(struct rxe_pool_entry *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mcast.c */
 int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
@@ -80,7 +80,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_entry *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -88,7 +88,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_entry *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *arg);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -121,7 +121,7 @@ void rxe_qp_error(struct rxe_qp *qp);
 
 void rxe_qp_destroy(struct rxe_qp *qp);
 
-void rxe_qp_cleanup(struct rxe_pool_entry *arg);
+void rxe_qp_cleanup(struct rxe_pool_elem *arg);
 
 static inline int qp_num(struct rxe_qp *qp)
 {
@@ -177,7 +177,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
-void rxe_srq_cleanup(struct rxe_pool_entry *arg);
+void rxe_srq_cleanup(struct rxe_pool_elem *arg);
 
 void rxe_dealloc(struct ib_device *ib_dev);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 6c50c8562fd8..63a36b7f2aa5 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -50,7 +50,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
-	u32 lkey = mr->pelem.index << 8 | rxe_get_next_key(-1);
+	u32 lkey = mr->elem.index << 8 | rxe_get_next_key(-1);
 	u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
 
 	/* set ibmr->l/rkey and also copy into private l/rkey
@@ -704,9 +704,9 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	return 0;
 }
 
-void rxe_mr_cleanup(struct rxe_pool_entry *arg)
+void rxe_mr_cleanup(struct rxe_pool_elem *arg)
 {
-	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
+	struct rxe_mr *mr = container_of(arg, typeof(*mr), elem);
 
 	ib_umem_release(mr->umem);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 7c264599b3d4..cd690dad9d39 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -20,7 +20,7 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 		return ret;
 	}
 
-	mw->rkey = ibmw->rkey = (mw->pelem.index << 8) | rxe_get_next_key(-1);
+	mw->rkey = ibmw->rkey = (mw->elem.index << 8) | rxe_get_next_key(-1);
 	mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ?
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);
@@ -330,7 +330,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 	return mw;
 }
 
-void rxe_mw_cleanup(struct rxe_pool_entry *elem)
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
 {
 	/* nothing to do currently */
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 58f826ab3bc6..24ebd1b663c3 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -12,19 +12,19 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_UC] = {
 		.name		= "rxe-uc",
 		.size		= sizeof(struct rxe_ucontext),
-		.elem_offset	= offsetof(struct rxe_ucontext, pelem),
+		.elem_offset	= offsetof(struct rxe_ucontext, elem),
 		.flags          = RXE_POOL_NO_ALLOC,
 	},
 	[RXE_TYPE_PD] = {
 		.name		= "rxe-pd",
 		.size		= sizeof(struct rxe_pd),
-		.elem_offset	= offsetof(struct rxe_pd, pelem),
+		.elem_offset	= offsetof(struct rxe_pd, elem),
 		.flags		= RXE_POOL_NO_ALLOC,
 	},
 	[RXE_TYPE_AH] = {
 		.name		= "rxe-ah",
 		.size		= sizeof(struct rxe_ah),
-		.elem_offset	= offsetof(struct rxe_ah, pelem),
+		.elem_offset	= offsetof(struct rxe_ah, elem),
 		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_AH_INDEX,
 		.max_index	= RXE_MAX_AH_INDEX,
@@ -32,7 +32,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_SRQ] = {
 		.name		= "rxe-srq",
 		.size		= sizeof(struct rxe_srq),
-		.elem_offset	= offsetof(struct rxe_srq, pelem),
+		.elem_offset	= offsetof(struct rxe_srq, elem),
 		.cleanup	= rxe_srq_cleanup,
 		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_SRQ_INDEX,
@@ -41,7 +41,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_QP] = {
 		.name		= "rxe-qp",
 		.size		= sizeof(struct rxe_qp),
-		.elem_offset	= offsetof(struct rxe_qp, pelem),
+		.elem_offset	= offsetof(struct rxe_qp, elem),
 		.cleanup	= rxe_qp_cleanup,
 		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_QP_INDEX,
@@ -50,14 +50,14 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_CQ] = {
 		.name		= "rxe-cq",
 		.size		= sizeof(struct rxe_cq),
-		.elem_offset	= offsetof(struct rxe_cq, pelem),
+		.elem_offset	= offsetof(struct rxe_cq, elem),
 		.flags          = RXE_POOL_NO_ALLOC,
 		.cleanup	= rxe_cq_cleanup,
 	},
 	[RXE_TYPE_MR] = {
 		.name		= "rxe-mr",
 		.size		= sizeof(struct rxe_mr),
-		.elem_offset	= offsetof(struct rxe_mr, pelem),
+		.elem_offset	= offsetof(struct rxe_mr, elem),
 		.cleanup	= rxe_mr_cleanup,
 		.flags		= RXE_POOL_INDEX,
 		.max_index	= RXE_MAX_MR_INDEX,
@@ -66,7 +66,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_MW] = {
 		.name		= "rxe-mw",
 		.size		= sizeof(struct rxe_mw),
-		.elem_offset	= offsetof(struct rxe_mw, pelem),
+		.elem_offset	= offsetof(struct rxe_mw, elem),
 		.cleanup	= rxe_mw_cleanup,
 		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.max_index	= RXE_MAX_MW_INDEX,
@@ -75,7 +75,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_MC_GRP] = {
 		.name		= "rxe-mc_grp",
 		.size		= sizeof(struct rxe_mc_grp),
-		.elem_offset	= offsetof(struct rxe_mc_grp, pelem),
+		.elem_offset	= offsetof(struct rxe_mc_grp, elem),
 		.flags		= RXE_POOL_KEY,
 		.key_offset	= offsetof(struct rxe_mc_grp, mgid),
 		.key_size	= sizeof(union ib_gid),
@@ -180,15 +180,15 @@ static u32 alloc_index(struct rxe_pool *pool)
 	return index + pool->index.min_index;
 }
 
-static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new)
 {
 	struct rb_node **link = &pool->index.tree.rb_node;
 	struct rb_node *parent = NULL;
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 
 	while (*link) {
 		parent = *link;
-		elem = rb_entry(parent, struct rxe_pool_entry, index_node);
+		elem = rb_entry(parent, struct rxe_pool_elem, index_node);
 
 		/* this can happen if memory was recycled and/or the
 		 * old object was not deleted from the pool index
@@ -211,16 +211,16 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 	return 0;
 }
 
-static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new)
 {
 	struct rb_node **link = &pool->key.tree.rb_node;
 	struct rb_node *parent = NULL;
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	int cmp;
 
 	while (*link) {
 		parent = *link;
-		elem = rb_entry(parent, struct rxe_pool_entry, key_node);
+		elem = rb_entry(parent, struct rxe_pool_elem, key_node);
 
 		cmp = memcmp((u8 *)elem + pool->key.key_offset,
 			     (u8 *)new + pool->key.key_offset,
@@ -243,7 +243,7 @@ static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 	return 0;
 }
 
-static int rxe_add_index(struct rxe_pool_entry *elem)
+static int rxe_add_index(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int err;
@@ -257,7 +257,7 @@ static int rxe_add_index(struct rxe_pool_entry *elem)
 	return err;
 }
 
-static void rxe_drop_index(struct rxe_pool_entry *elem)
+static void rxe_drop_index(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 
@@ -267,7 +267,7 @@ static void rxe_drop_index(struct rxe_pool_entry *elem)
 
 static void *__rxe_alloc_locked(struct rxe_pool *pool)
 {
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	void *obj;
 	int err;
 
@@ -278,7 +278,7 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 	if (!obj)
 		goto out_cnt;
 
-	elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset);
+	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
 
 	elem->pool = pool;
 	elem->obj = obj;
@@ -300,14 +300,14 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 
 void *rxe_alloc_locked(struct rxe_pool *pool)
 {
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	void *obj;
 
 	obj = __rxe_alloc_locked(pool);
 	if (!obj)
 		return NULL;
 
-	elem = (struct rxe_pool_entry *)(obj + pool->elem_offset);
+	elem = (struct rxe_pool_elem *)(obj + pool->elem_offset);
 	refcount_set(&elem->refcnt, 1);
 
 	return obj;
@@ -315,7 +315,7 @@ void *rxe_alloc_locked(struct rxe_pool *pool)
 
 void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
 {
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	void *obj;
 	int err;
 
@@ -323,7 +323,7 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key)
 	if (!obj)
 		return NULL;
 
-	elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset);
+	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
 	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
 	err = rxe_insert_key(pool, elem);
 	if (err) {
@@ -362,7 +362,7 @@ void *rxe_alloc_with_key(struct rxe_pool *pool, void *key)
 	return obj;
 }
 
-int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
+int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
 	int err;
 
@@ -393,13 +393,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem)
 void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rb_node *node;
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	void *obj = NULL;
 
 	node = pool->index.tree.rb_node;
 
 	while (node) {
-		elem = rb_entry(node, struct rxe_pool_entry, index_node);
+		elem = rb_entry(node, struct rxe_pool_elem, index_node);
 
 		if (elem->index > index)
 			node = node->rb_left;
@@ -429,14 +429,14 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rb_node *node;
-	struct rxe_pool_entry *elem;
+	struct rxe_pool_elem *elem;
 	void *obj = NULL;
 	int cmp;
 
 	node = pool->key.tree.rb_node;
 
 	while (node) {
-		elem = rb_entry(node, struct rxe_pool_entry, key_node);
+		elem = rb_entry(node, struct rxe_pool_elem, key_node);
 
 		cmp = memcmp((u8 *)elem + pool->key.key_offset,
 			     key, pool->key.key_size);
@@ -466,7 +466,7 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key)
 	return obj;
 }
 
-int __rxe_add_ref_locked(struct rxe_pool_entry *elem)
+int __rxe_add_ref_locked(struct rxe_pool_elem *elem)
 {
 	int done;
 
@@ -477,7 +477,7 @@ int __rxe_add_ref_locked(struct rxe_pool_entry *elem)
 		return -EINVAL;
 }
 
-int __rxe_add_ref(struct rxe_pool_entry *elem)
+int __rxe_add_ref(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int ret;
@@ -489,7 +489,7 @@ int __rxe_add_ref(struct rxe_pool_entry *elem)
 	return ret;
 }
 
-int __rxe_drop_ref_locked(struct rxe_pool_entry *elem)
+int __rxe_drop_ref_locked(struct rxe_pool_elem *elem)
 {
 	int done;
 
@@ -500,7 +500,7 @@ int __rxe_drop_ref_locked(struct rxe_pool_entry *elem)
 		return -EINVAL;
 }
 
-int __rxe_drop_ref(struct rxe_pool_entry *elem)
+int __rxe_drop_ref(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int ret;
@@ -512,7 +512,7 @@ int __rxe_drop_ref(struct rxe_pool_entry *elem)
 	return ret;
 }
 
-static int __rxe_fini(struct rxe_pool_entry *elem)
+static int __rxe_fini(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int done;
@@ -533,7 +533,7 @@ static int __rxe_fini(struct rxe_pool_entry *elem)
 /* can only be used by pools that have a cleanup
  * routine that can run while holding a spinlock
  */
-int __rxe_fini_ref_locked(struct rxe_pool_entry *elem)
+int __rxe_fini_ref_locked(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int ret;
@@ -550,7 +550,7 @@ int __rxe_fini_ref_locked(struct rxe_pool_entry *elem)
 	return ret;
 }
 
-int __rxe_fini_ref(struct rxe_pool_entry *elem)
+int __rxe_fini_ref(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
 	int ret;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index f04df69c52ba..3e78c275c7c5 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -31,13 +31,13 @@ enum rxe_elem_type {
 	RXE_NUM_TYPES,		/* keep me last */
 };
 
-struct rxe_pool_entry;
+struct rxe_pool_elem;
 
 struct rxe_type_info {
 	const char		*name;
 	size_t			size;
 	size_t			elem_offset;
-	void			(*cleanup)(struct rxe_pool_entry *obj);
+	void			(*cleanup)(struct rxe_pool_elem *obj);
 	enum rxe_pool_flags	flags;
 	u32			max_index;
 	u32			min_index;
@@ -45,7 +45,7 @@ struct rxe_type_info {
 	size_t			key_size;
 };
 
-struct rxe_pool_entry {
+struct rxe_pool_elem {
 	struct rxe_pool		*pool;
 	void			*obj;
 	refcount_t		refcnt;
@@ -63,7 +63,7 @@ struct rxe_pool {
 	struct rxe_dev		*rxe;
 	const char		*name;
 	rwlock_t		pool_lock; /* protects pool add/del/search */
-	void			(*cleanup)(struct rxe_pool_entry *obj);
+	void			(*cleanup)(struct rxe_pool_elem *obj);
 	enum rxe_pool_flags	flags;
 	enum rxe_elem_type	type;
 
@@ -110,9 +110,9 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key);
 void *rxe_alloc_with_key(struct rxe_pool *pool, void *key);
 
 /* connect already allocated object to pool */
-int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem);
+int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
 
-#define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->pelem)
+#define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)
 
 /* lookup an indexed object from index holding and not holding the pool_lock.
  * takes a reference on object
@@ -129,32 +129,32 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key);
 void *rxe_pool_get_key(struct rxe_pool *pool, void *key);
 
 /* take a reference on an object */
-int __rxe_add_ref_locked(struct rxe_pool_entry *elem);
+int __rxe_add_ref_locked(struct rxe_pool_elem *elem);
 
-#define rxe_add_ref_locked(obj) __rxe_add_ref_locked(&(obj)->pelem)
+#define rxe_add_ref_locked(obj) __rxe_add_ref_locked(&(obj)->elem)
 
-int __rxe_add_ref(struct rxe_pool_entry *elem);
+int __rxe_add_ref(struct rxe_pool_elem *elem);
 
-#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->pelem)
+#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem)
 
 /* drop a reference on an object */
-int __rxe_drop_ref_locked(struct rxe_pool_entry *elem);
+int __rxe_drop_ref_locked(struct rxe_pool_elem *elem);
 
-#define rxe_drop_ref_locked(obj) __rxe_drop_ref_locked(&(obj)->pelem)
+#define rxe_drop_ref_locked(obj) __rxe_drop_ref_locked(&(obj)->elem)
 
-int __rxe_drop_ref(struct rxe_pool_entry *elem);
+int __rxe_drop_ref(struct rxe_pool_elem *elem);
 
-#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->pelem)
+#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem)
 
 /* drop last reference on an object */
-int __rxe_fini_ref_locked(struct rxe_pool_entry *elem);
+int __rxe_fini_ref_locked(struct rxe_pool_elem *elem);
 
-#define rxe_fini_ref_locked(obj) __rxe_fini_ref_locked(&(obj)->pelem)
+#define rxe_fini_ref_locked(obj) __rxe_fini_ref_locked(&(obj)->elem)
 
-int __rxe_fini_ref(struct rxe_pool_entry *elem);
+int __rxe_fini_ref(struct rxe_pool_elem *elem);
 
-#define rxe_fini_ref(obj) __rxe_fini_ref(&(obj)->pelem)
+#define rxe_fini_ref(obj) __rxe_fini_ref(&(obj)->elem)
 
-#define rxe_read_ref(obj) refcount_read(&(obj)->pelem.refcnt)
+#define rxe_read_ref(obj) refcount_read(&(obj)->elem.refcnt)
 
 #endif /* RXE_POOL_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 23b4ffe23c4f..f1b89585d6e0 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -163,7 +163,7 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->attr.path_mtu	= 1;
 	qp->mtu			= ib_mtu_enum_to_int(qp->attr.path_mtu);
 
-	qpn			= qp->pelem.index;
+	qpn			= qp->elem.index;
 	port			= &rxe->port;
 
 	switch (init->qp_type) {
@@ -825,9 +825,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 }
 
 /* called when the last reference to the qp is dropped */
-void rxe_qp_cleanup(struct rxe_pool_entry *arg)
+void rxe_qp_cleanup(struct rxe_pool_elem *arg)
 {
-	struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
+	struct rxe_qp *qp = container_of(arg, typeof(*qp), elem);
 
 	rxe_qp_destroy(qp);
 	execute_in_process_context(rxe_qp_do_cleanup, &qp->cleanup_work);
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index bb00643a2929..66273342da9f 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -83,7 +83,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->ibsrq.event_handler	= init->event_handler;
 	srq->ibsrq.srq_context		= init->srq_context;
 	srq->limit		= init->attr.srq_limit;
-	srq->srq_num		= srq->pelem.index;
+	srq->srq_num		= srq->elem.index;
 	srq->rq.max_wr		= init->attr.max_wr;
 	srq->rq.max_sge		= init->attr.max_sge;
 
@@ -155,9 +155,9 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	return err;
 }
 
-void rxe_srq_cleanup(struct rxe_pool_entry *arg)
+void rxe_srq_cleanup(struct rxe_pool_elem *arg)
 {
-	struct rxe_srq *srq = container_of(arg, typeof(*srq), pelem);
+	struct rxe_srq *srq = container_of(arg, typeof(*srq), elem);
 
 	if (srq->rq.queue)
 		rxe_queue_cleanup(srq->rq.queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 2b0ba33cff31..eea89873215d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -180,7 +180,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		return err;
 
 	/* create index > 0 */
-	ah->ah_num = ah->pelem.index;
+	ah->ah_num = ah->elem.index;
 
 	if (uresp) {
 		/* only if new user provider */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 0cfbef7a36c9..52e8752d2983 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -35,17 +35,17 @@ static inline int psn_compare(u32 psn_a, u32 psn_b)
 
 struct rxe_ucontext {
 	struct ib_ucontext ibuc;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 };
 
 struct rxe_pd {
 	struct ib_pd            ibpd;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 };
 
 struct rxe_ah {
 	struct ib_ah		ibah;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct rxe_av		av;
 	bool			is_user;
 	int			ah_num;
@@ -60,7 +60,7 @@ struct rxe_cqe {
 
 struct rxe_cq {
 	struct ib_cq		ibcq;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct rxe_queue	*queue;
 	spinlock_t		cq_lock;
 	u8			notify;
@@ -95,7 +95,7 @@ struct rxe_rq {
 
 struct rxe_srq {
 	struct ib_srq		ibsrq;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct rxe_pd		*pd;
 	struct rxe_rq		rq;
 	u32			srq_num;
@@ -208,7 +208,7 @@ struct rxe_resp_info {
 
 struct rxe_qp {
 	struct ib_qp		ibqp;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct ib_qp_attr	attr;
 	unsigned int		valid;
 	unsigned int		mtu;
@@ -308,7 +308,7 @@ static inline int rkey_is_mw(u32 rkey)
 }
 
 struct rxe_mr {
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct ib_mr		ibmr;
 
 	struct ib_umem		*umem;
@@ -341,7 +341,7 @@ enum rxe_mw_state {
 
 struct rxe_mw {
 	struct ib_mw		ibmw;
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	spinlock_t		lock;
 	enum rxe_mw_state	state;
 	struct rxe_qp		*qp; /* Type 2 only */
@@ -353,7 +353,7 @@ struct rxe_mw {
 };
 
 struct rxe_mc_grp {
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	spinlock_t		mcg_lock; /* guard group */
 	struct rxe_dev		*rxe;
 	struct list_head	qp_list;
@@ -364,7 +364,7 @@ struct rxe_mc_grp {
 };
 
 struct rxe_mc_elem {
-	struct rxe_pool_entry	pelem;
+	struct rxe_pool_elem	elem;
 	struct list_head	qp_list;
 	struct list_head	grp_list;
 	struct rxe_qp		*qp;
@@ -484,6 +484,6 @@ static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw)
 
 int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
 
-void rxe_mc_cleanup(struct rxe_pool_entry *arg);
+void rxe_mc_cleanup(struct rxe_pool_elem *arg);
 
 #endif /* RXE_VERBS_H */
-- 
2.30.2



* [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 2/7] RDMA/rxe: Cleanup rxe_pool_entry Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-21 11:53   ` Dennis Dalessandro
  2021-10-20 22:05 ` [PATCH for-next 4/7] RDMA/rxe: Replace pool_lock by xa_lock Bob Pearson
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the rxe driver uses red-black trees to add indices and keys
to the rxe object pool. Linux xarrays provide a better way to implement
the same functionality for indices, but not for keys. This patch adds a
second alternative for allocating indices, based on cyclically
allocating xarrays. The AH pool is modified so that it can use either
xarrays or red-black trees, and the code has been tested with both
options.
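
For reference, the new lookup path reduces to the following (as in
__rxe_get_xarray_locked() below); taking the reference with
refcount_inc_not_zero() keeps lookups safe against concurrent teardown:

	elem = xa_load(&pool->xarray.xa, index);
	if (elem && refcount_inc_not_zero(&elem->refcnt))
		obj = elem->obj;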

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 100 ++++++++++++++++++++++-----
 drivers/infiniband/sw/rxe/rxe_pool.h |   9 +++
 2 files changed, 92 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 24ebd1b663c3..ba5c600fa9e8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -25,7 +25,8 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.name		= "rxe-ah",
 		.size		= sizeof(struct rxe_ah),
 		.elem_offset	= offsetof(struct rxe_ah, elem),
-		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+		//.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_AH_INDEX,
 		.max_index	= RXE_MAX_AH_INDEX,
 	},
@@ -128,15 +129,20 @@ int rxe_pool_init(
 	pool->elem_size		= ALIGN(info->size, RXE_POOL_ALIGN);
 	pool->elem_offset	= info->elem_offset;
 	pool->flags		= info->flags;
-	pool->index.tree	= RB_ROOT;
-	pool->key.tree		= RB_ROOT;
 	pool->cleanup		= info->cleanup;
 
 	atomic_set(&pool->num_elem, 0);
 
 	rwlock_init(&pool->pool_lock);
 
+	if (info->flags & RXE_POOL_XARRAY) {
+		xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC);
+		pool->xarray.limit.max = info->max_index;
+		pool->xarray.limit.min = info->min_index;
+	}
+
 	if (info->flags & RXE_POOL_INDEX) {
+		pool->index.tree = RB_ROOT;
 		err = rxe_pool_init_index(pool, info->max_index,
 					  info->min_index);
 		if (err)
@@ -144,6 +150,7 @@ int rxe_pool_init(
 	}
 
 	if (info->flags & RXE_POOL_KEY) {
+		pool->key.tree = RB_ROOT;
 		pool->key.key_offset = info->key_offset;
 		pool->key.key_size = info->key_size;
 	}
@@ -158,7 +165,8 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 		pr_warn("%s pool destroyed with unfree'd elem\n",
 			pool->name);
 
-	kfree(pool->index.table);
+	if (pool->flags & RXE_POOL_INDEX)
+		kfree(pool->index.table);
 }
 
 /* should never fail because there are at least as many indices as
@@ -272,28 +280,35 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 	int err;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto out_cnt;
+		goto err;
 
 	obj = kzalloc(pool->elem_size, GFP_ATOMIC);
 	if (!obj)
-		goto out_cnt;
+		goto err;
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
 
 	elem->pool = pool;
 	elem->obj = obj;
 
+	if (pool->flags & RXE_POOL_XARRAY) {
+		err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem,
+					 pool->xarray.limit,
+					 &pool->xarray.next, GFP_KERNEL);
+		if (err)
+			goto err;
+	}
+
 	if (pool->flags & RXE_POOL_INDEX) {
 		err = rxe_add_index(elem);
-		if (err) {
-			kfree(obj);
-			goto out_cnt;
-		}
+		if (err)
+			goto err;
 	}
 
 	return obj;
 
-out_cnt:
+err:
+	kfree(obj);
 	atomic_dec(&pool->num_elem);
 	return NULL;
 }
@@ -368,15 +383,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 
 	write_lock_bh(&pool->pool_lock);
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto out_cnt;
+		goto err;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 
+	if (pool->flags & RXE_POOL_XARRAY) {
+		err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem,
+					 pool->xarray.limit,
+					 &pool->xarray.next, GFP_KERNEL);
+		if (err)
+			goto err;
+	}
+
 	if (pool->flags & RXE_POOL_INDEX) {
 		err = rxe_add_index(elem);
 		if (err)
-			goto out_cnt;
+			goto err;
 	}
 
 	refcount_set(&elem->refcnt, 1);
@@ -384,13 +407,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 
 	return 0;
 
-out_cnt:
+err:
 	atomic_dec(&pool->num_elem);
 	write_unlock_bh(&pool->pool_lock);
 	return -EINVAL;
 }
 
-void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
+static void *__rxe_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rb_node *node;
 	struct rxe_pool_elem *elem;
@@ -415,17 +438,58 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 	return obj;
 }
 
-void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
+static void *__rxe_get_index(struct rxe_pool *pool, u32 index)
+{
+	void *obj;
+
+	read_lock_bh(&pool->pool_lock);
+	obj = __rxe_get_index_locked(pool, index);
+	read_unlock_bh(&pool->pool_lock);
+
+	return obj;
+}
+
+static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index)
+{
+	struct rxe_pool_elem *elem;
+	void *obj = NULL;
+
+	elem = xa_load(&pool->xarray.xa, index);
+	if (elem && refcount_inc_not_zero(&elem->refcnt))
+		obj = elem->obj;
+
+	return obj;
+}
+
+static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index)
 {
 	void *obj;
 
 	read_lock_bh(&pool->pool_lock);
-	obj = rxe_pool_get_index_locked(pool, index);
+	obj = __rxe_get_xarray_locked(pool, index);
 	read_unlock_bh(&pool->pool_lock);
 
 	return obj;
 }
 
+void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
+{
+	if (pool->flags & RXE_POOL_XARRAY)
+		return __rxe_get_xarray_locked(pool, index);
+	if (pool->flags & RXE_POOL_INDEX)
+		return __rxe_get_index_locked(pool, index);
+	return NULL;
+}
+
+void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
+{
+	if (pool->flags & RXE_POOL_XARRAY)
+		return __rxe_get_xarray(pool, index);
+	if (pool->flags & RXE_POOL_INDEX)
+		return __rxe_get_index(pool, index);
+	return NULL;
+}
+
 void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rb_node *node;
@@ -519,6 +583,8 @@ static int __rxe_fini(struct rxe_pool_elem *elem)
 
 	done = refcount_dec_if_one(&elem->refcnt);
 	if (done) {
+		if (pool->flags & RXE_POOL_XARRAY)
+			xa_erase(&pool->xarray.xa, elem->index);
 		if (pool->flags & RXE_POOL_INDEX)
 			rxe_drop_index(elem);
 		if (pool->flags & RXE_POOL_KEY)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 3e78c275c7c5..f9c4f09cdcc9 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -8,6 +8,7 @@
 #define RXE_POOL_H
 
 #include <linux/refcount.h>
+#include <linux/xarray.h>
 
 #define RXE_POOL_ALIGN		(16)
 #define RXE_POOL_CACHE_FLAGS	(0)
@@ -15,6 +16,7 @@
 enum rxe_pool_flags {
 	RXE_POOL_INDEX		= BIT(1),
 	RXE_POOL_KEY		= BIT(2),
+	RXE_POOL_XARRAY		= BIT(3),
 	RXE_POOL_NO_ALLOC	= BIT(4),
 };
 
@@ -72,6 +74,13 @@ struct rxe_pool {
 	size_t			elem_size;
 	size_t			elem_offset;
 
+	/* only used if xarray */
+	struct {
+		struct xarray		xa;
+		struct xa_limit		limit;
+		u32			next;
+	} xarray;
+
 	/* only used if indexed */
 	struct {
 		struct rb_root		tree;
-- 
2.30.2



* [PATCH for-next 4/7] RDMA/rxe: Replace pool_lock by xa_lock
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
                   ` (2 preceding siblings ...)
  2021-10-20 22:05 ` [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 5/7] RDMA/rxe: Convert remaining pools to xarrays Bob Pearson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

In rxe_pool.c, xa_alloc_bh(), xa_erase_bh() and their variants already
include
	spin_lock_bh()
	__xa_alloc()
	spin_unlock_bh()
so calling them while also holding pool_lock double locks. Replacing
pool_lock with xa_lock, taking xa_lock in all the places that were
previously protected by pool_lock, and dropping the redundant inner
locks is a performance improvement.
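
In other words, the allocation path ends up as the following (taken
from the hunks below; the limit and next fields live in the pool's
xarray struct):

	xa_lock_bh(&pool->xarray.xa);		/* single bh-disabling lock */
	err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem,
				pool->xarray.limit, &pool->xarray.next,
				GFP_KERNEL);	/* lock-free variant, no inner lock */
	xa_unlock_bh(&pool->xarray.xa);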

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 54 ++++++++++++++--------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index ba5c600fa9e8..1b7269dd6d9e 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -133,8 +133,6 @@ int rxe_pool_init(
 
 	atomic_set(&pool->num_elem, 0);
 
-	rwlock_init(&pool->pool_lock);
-
 	if (info->flags & RXE_POOL_XARRAY) {
 		xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC);
 		pool->xarray.limit.max = info->max_index;
@@ -292,9 +290,9 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 	elem->obj = obj;
 
 	if (pool->flags & RXE_POOL_XARRAY) {
-		err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem,
-					 pool->xarray.limit,
-					 &pool->xarray.next, GFP_KERNEL);
+		err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem,
+					pool->xarray.limit,
+					&pool->xarray.next, GFP_KERNEL);
 		if (err)
 			goto err;
 	}
@@ -359,9 +357,9 @@ void *rxe_alloc(struct rxe_pool *pool)
 {
 	void *obj;
 
-	write_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	obj = rxe_alloc_locked(pool);
-	write_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
@@ -370,9 +368,9 @@ void *rxe_alloc_with_key(struct rxe_pool *pool, void *key)
 {
 	void *obj;
 
-	write_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	obj = rxe_alloc_with_key_locked(pool, key);
-	write_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
@@ -381,7 +379,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
 	int err;
 
-	write_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
 		goto err;
 
@@ -389,9 +387,9 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->obj = (u8 *)elem - pool->elem_offset;
 
 	if (pool->flags & RXE_POOL_XARRAY) {
-		err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem,
-					 pool->xarray.limit,
-					 &pool->xarray.next, GFP_KERNEL);
+		err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem,
+					pool->xarray.limit,
+					&pool->xarray.next, GFP_KERNEL);
 		if (err)
 			goto err;
 	}
@@ -403,13 +401,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	}
 
 	refcount_set(&elem->refcnt, 1);
-	write_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return 0;
 
 err:
 	atomic_dec(&pool->num_elem);
-	write_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 	return -EINVAL;
 }
 
@@ -442,9 +440,9 @@ static void *__rxe_get_index(struct rxe_pool *pool, u32 index)
 {
 	void *obj;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	obj = __rxe_get_index_locked(pool, index);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
@@ -465,9 +463,9 @@ static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index)
 {
 	void *obj;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	obj = __rxe_get_xarray_locked(pool, index);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
@@ -523,9 +521,9 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key)
 {
 	void *obj;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	obj = rxe_pool_get_key_locked(pool, key);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
@@ -546,9 +544,9 @@ int __rxe_add_ref(struct rxe_pool_elem *elem)
 	struct rxe_pool *pool = elem->pool;
 	int ret;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	ret = __rxe_add_ref_locked(elem);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return ret;
 }
@@ -569,9 +567,9 @@ int __rxe_drop_ref(struct rxe_pool_elem *elem)
 	struct rxe_pool *pool = elem->pool;
 	int ret;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	ret = __rxe_drop_ref_locked(elem);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	return ret;
 }
@@ -584,7 +582,7 @@ static int __rxe_fini(struct rxe_pool_elem *elem)
 	done = refcount_dec_if_one(&elem->refcnt);
 	if (done) {
 		if (pool->flags & RXE_POOL_XARRAY)
-			xa_erase(&pool->xarray.xa, elem->index);
+			__xa_erase(&pool->xarray.xa, elem->index);
 		if (pool->flags & RXE_POOL_INDEX)
 			rxe_drop_index(elem);
 		if (pool->flags & RXE_POOL_KEY)
@@ -621,9 +619,9 @@ int __rxe_fini_ref(struct rxe_pool_elem *elem)
 	struct rxe_pool *pool = elem->pool;
 	int ret;
 
-	read_lock_bh(&pool->pool_lock);
+	xa_lock_bh(&pool->xarray.xa);
 	ret = __rxe_fini(elem);
-	read_unlock_bh(&pool->pool_lock);
+	xa_unlock_bh(&pool->xarray.xa);
 
 	if (!ret) {
 		if (pool->cleanup)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH for-next 5/7] RDMA/rxe: Convert remaining pools to xarrays
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
                   ` (3 preceding siblings ...)
  2021-10-20 22:05 ` [PATCH for-next 4/7] RDMA/rxe: Replace pool_lock by xa_lock Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 6/7] RDMA/rxe: Remove old index code from rxe_pool.c Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 7/7] RDMA/rxe: Rename XARRAY as INDEX Bob Pearson
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Convert the remaining pools that still have RXE_POOL_INDEX set over to
RXE_POOL_XARRAY and drop the commented-out flags line left over from the
earlier AH pool conversion.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 1b7269dd6d9e..364449c284a3 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -25,7 +25,6 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.name		= "rxe-ah",
 		.size		= sizeof(struct rxe_ah),
 		.elem_offset	= offsetof(struct rxe_ah, elem),
-		//.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_AH_INDEX,
 		.max_index	= RXE_MAX_AH_INDEX,
@@ -35,7 +34,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_srq),
 		.elem_offset	= offsetof(struct rxe_srq, elem),
 		.cleanup	= rxe_srq_cleanup,
-		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_SRQ_INDEX,
 		.max_index	= RXE_MAX_SRQ_INDEX,
 	},
@@ -44,7 +43,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_qp),
 		.elem_offset	= offsetof(struct rxe_qp, elem),
 		.cleanup	= rxe_qp_cleanup,
-		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_QP_INDEX,
 		.max_index	= RXE_MAX_QP_INDEX,
 	},
@@ -60,7 +59,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_mr),
 		.elem_offset	= offsetof(struct rxe_mr, elem),
 		.cleanup	= rxe_mr_cleanup,
-		.flags		= RXE_POOL_INDEX,
+		.flags		= RXE_POOL_XARRAY,
 		.max_index	= RXE_MAX_MR_INDEX,
 		.min_index	= RXE_MIN_MR_INDEX,
 	},
@@ -69,7 +68,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
 		.cleanup	= rxe_mw_cleanup,
-		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
 	},
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH for-next 6/7] RDMA/rxe: Remove old index code from rxe_pool.c
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
                   ` (4 preceding siblings ...)
  2021-10-20 22:05 ` [PATCH for-next 5/7] RDMA/rxe: Convert remaining pools to xarrays Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  2021-10-20 22:05 ` [PATCH for-next 7/7] RDMA/rxe: Rename XARRAY as INDEX Bob Pearson
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Remove all of the red-black tree based index code from rxe_pool.c.
Change rxe_pool_init() and its callers from returning int to void since
pool initialization can no longer fail.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe.c      |  86 ++------------
 drivers/infiniband/sw/rxe/rxe_pool.c | 171 +--------------------------
 drivers/infiniband/sw/rxe/rxe_pool.h |  14 +--
 3 files changed, 15 insertions(+), 256 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 4298a1d20ad5..804c5630ed55 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -115,90 +115,37 @@ static void rxe_init_ports(struct rxe_dev *rxe)
 }
 
 /* init pools of managed objects */
-static int rxe_init_pools(struct rxe_dev *rxe)
+static void rxe_init_pools(struct rxe_dev *rxe)
 {
-	int err;
-
-	err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC,
+	rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC,
 			    rxe->max_ucontext);
-	if (err)
-		goto err1;
-
-	err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD,
+	rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD,
 			    rxe->attr.max_pd);
-	if (err)
-		goto err2;
-
-	err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH,
+	rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH,
 			    rxe->attr.max_ah);
-	if (err)
-		goto err3;
-
-	err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ,
+	rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ,
 			    rxe->attr.max_srq);
-	if (err)
-		goto err4;
-
-	err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP,
+	rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP,
 			    rxe->attr.max_qp);
-	if (err)
-		goto err5;
-
-	err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ,
+	rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ,
 			    rxe->attr.max_cq);
-	if (err)
-		goto err6;
-
-	err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR,
+	rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR,
 			    rxe->attr.max_mr);
-	if (err)
-		goto err7;
-
-	err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW,
+	rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW,
 			    rxe->attr.max_mw);
-	if (err)
-		goto err8;
-
-	err = rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP,
+	rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP,
 			    rxe->attr.max_mcast_grp);
-	if (err)
-		goto err9;
-
-	return 0;
-
-err9:
-	rxe_pool_cleanup(&rxe->mw_pool);
-err8:
-	rxe_pool_cleanup(&rxe->mr_pool);
-err7:
-	rxe_pool_cleanup(&rxe->cq_pool);
-err6:
-	rxe_pool_cleanup(&rxe->qp_pool);
-err5:
-	rxe_pool_cleanup(&rxe->srq_pool);
-err4:
-	rxe_pool_cleanup(&rxe->ah_pool);
-err3:
-	rxe_pool_cleanup(&rxe->pd_pool);
-err2:
-	rxe_pool_cleanup(&rxe->uc_pool);
-err1:
-	return err;
 }
 
 /* initialize rxe device state */
-static int rxe_init(struct rxe_dev *rxe)
+static void rxe_init(struct rxe_dev *rxe)
 {
-	int err;
-
 	/* init default device parameters */
 	rxe_init_device_param(rxe);
 
 	rxe_init_ports(rxe);
 
-	err = rxe_init_pools(rxe);
-	if (err)
-		return err;
+	rxe_init_pools(rxe);
 
 	/* init pending mmap list */
 	spin_lock_init(&rxe->mmap_offset_lock);
@@ -206,8 +153,6 @@ static int rxe_init(struct rxe_dev *rxe)
 	INIT_LIST_HEAD(&rxe->pending_mmaps);
 
 	mutex_init(&rxe->usdev_lock);
-
-	return 0;
 }
 
 void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
@@ -229,12 +174,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
  */
 int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name)
 {
-	int err;
-
-	err = rxe_init(rxe);
-	if (err)
-		return err;
-
+	rxe_init(rxe);
 	rxe_set_mtu(rxe, mtu);
 
 	return rxe_register_device(rxe, ibdev_name);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 364449c284a3..6e51483c0494 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -82,42 +82,13 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	},
 };
 
-static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min)
-{
-	int err = 0;
-	size_t size;
-
-	if ((max - min + 1) < pool->max_elem) {
-		pr_warn("not enough indices for max_elem\n");
-		err = -EINVAL;
-		goto out;
-	}
-
-	pool->index.max_index = max;
-	pool->index.min_index = min;
-
-	size = BITS_TO_LONGS(max - min + 1) * sizeof(long);
-	pool->index.table = kmalloc(size, GFP_KERNEL);
-	if (!pool->index.table) {
-		err = -ENOMEM;
-		goto out;
-	}
-
-	pool->index.table_size = size;
-	bitmap_zero(pool->index.table, max - min + 1);
-
-out:
-	return err;
-}
-
-int rxe_pool_init(
+void rxe_pool_init(
 	struct rxe_dev		*rxe,
 	struct rxe_pool		*pool,
 	enum rxe_elem_type	type,
 	unsigned int		max_elem)
 {
 	const struct rxe_type_info *info = &rxe_type_info[type];
-	int			err = 0;
 
 	memset(pool, 0, sizeof(*pool));
 
@@ -138,22 +109,11 @@ int rxe_pool_init(
 		pool->xarray.limit.min = info->min_index;
 	}
 
-	if (info->flags & RXE_POOL_INDEX) {
-		pool->index.tree = RB_ROOT;
-		err = rxe_pool_init_index(pool, info->max_index,
-					  info->min_index);
-		if (err)
-			goto out;
-	}
-
 	if (info->flags & RXE_POOL_KEY) {
 		pool->key.tree = RB_ROOT;
 		pool->key.key_offset = info->key_offset;
 		pool->key.key_size = info->key_size;
 	}
-
-out:
-	return err;
 }
 
 void rxe_pool_cleanup(struct rxe_pool *pool)
@@ -161,59 +121,6 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	if (atomic_read(&pool->num_elem) > 0)
 		pr_warn("%s pool destroyed with unfree'd elem\n",
 			pool->name);
-
-	if (pool->flags & RXE_POOL_INDEX)
-		kfree(pool->index.table);
-}
-
-/* should never fail because there are at least as many indices as
- * max objects
- */
-static u32 alloc_index(struct rxe_pool *pool)
-{
-	u32 index;
-	u32 range = pool->index.max_index - pool->index.min_index + 1;
-
-	index = find_next_zero_bit(pool->index.table, range,
-				   pool->index.last);
-	if (index >= range)
-		index = find_first_zero_bit(pool->index.table, range);
-
-	WARN_ON_ONCE(index >= range);
-	set_bit(index, pool->index.table);
-	pool->index.last = index;
-	return index + pool->index.min_index;
-}
-
-static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new)
-{
-	struct rb_node **link = &pool->index.tree.rb_node;
-	struct rb_node *parent = NULL;
-	struct rxe_pool_elem *elem;
-
-	while (*link) {
-		parent = *link;
-		elem = rb_entry(parent, struct rxe_pool_elem, index_node);
-
-		/* this can happen if memory was recycled and/or the
-		 * old object was not deleted from the pool index
-		 */
-		if (unlikely(elem == new || elem->index == new->index)) {
-			pr_warn("%s#%d: already in pool\n", pool->name,
-					new->index);
-			return -EINVAL;
-		}
-
-		if (elem->index > new->index)
-			link = &(*link)->rb_left;
-		else
-			link = &(*link)->rb_right;
-	}
-
-	rb_link_node(&new->index_node, parent, link);
-	rb_insert_color(&new->index_node, &pool->index.tree);
-
-	return 0;
 }
 
 static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new)
@@ -248,28 +155,6 @@ static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new)
 	return 0;
 }
 
-static int rxe_add_index(struct rxe_pool_elem *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-	int err;
-
-	elem->index = alloc_index(pool);
-	err = rxe_insert_index(pool, elem);
-	if (err)
-		clear_bit(elem->index - pool->index.min_index,
-			  pool->index.table);
-
-	return err;
-}
-
-static void rxe_drop_index(struct rxe_pool_elem *elem)
-{
-	struct rxe_pool *pool = elem->pool;
-
-	clear_bit(elem->index - pool->index.min_index, pool->index.table);
-	rb_erase(&elem->index_node, &pool->index.tree);
-}
-
 static void *__rxe_alloc_locked(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
@@ -296,12 +181,6 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 			goto err;
 	}
 
-	if (pool->flags & RXE_POOL_INDEX) {
-		err = rxe_add_index(elem);
-		if (err)
-			goto err;
-	}
-
 	return obj;
 
 err:
@@ -393,12 +272,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 			goto err;
 	}
 
-	if (pool->flags & RXE_POOL_INDEX) {
-		err = rxe_add_index(elem);
-		if (err)
-			goto err;
-	}
-
 	refcount_set(&elem->refcnt, 1);
 	xa_unlock_bh(&pool->xarray.xa);
 
@@ -410,42 +283,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	return -EINVAL;
 }
 
-static void *__rxe_get_index_locked(struct rxe_pool *pool, u32 index)
-{
-	struct rb_node *node;
-	struct rxe_pool_elem *elem;
-	void *obj = NULL;
-
-	node = pool->index.tree.rb_node;
-
-	while (node) {
-		elem = rb_entry(node, struct rxe_pool_elem, index_node);
-
-		if (elem->index > index)
-			node = node->rb_left;
-		else if (elem->index < index)
-			node = node->rb_right;
-		else
-			break;
-	}
-
-	if (node && refcount_inc_not_zero(&elem->refcnt))
-		obj = elem->obj;
-
-	return obj;
-}
-
-static void *__rxe_get_index(struct rxe_pool *pool, u32 index)
-{
-	void *obj;
-
-	xa_lock_bh(&pool->xarray.xa);
-	obj = __rxe_get_index_locked(pool, index);
-	xa_unlock_bh(&pool->xarray.xa);
-
-	return obj;
-}
-
 static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
@@ -473,8 +310,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	if (pool->flags & RXE_POOL_XARRAY)
 		return __rxe_get_xarray_locked(pool, index);
-	if (pool->flags & RXE_POOL_INDEX)
-		return __rxe_get_index_locked(pool, index);
 	return NULL;
 }
 
@@ -482,8 +317,6 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	if (pool->flags & RXE_POOL_XARRAY)
 		return __rxe_get_xarray(pool, index);
-	if (pool->flags & RXE_POOL_INDEX)
-		return __rxe_get_index(pool, index);
 	return NULL;
 }
 
@@ -582,8 +415,6 @@ static int __rxe_fini(struct rxe_pool_elem *elem)
 	if (done) {
 		if (pool->flags & RXE_POOL_XARRAY)
 			__xa_erase(&pool->xarray.xa, elem->index);
-		if (pool->flags & RXE_POOL_INDEX)
-			rxe_drop_index(elem);
 		if (pool->flags & RXE_POOL_KEY)
 			rb_erase(&elem->key_node, &pool->key.tree);
 		atomic_dec(&pool->num_elem);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index f9c4f09cdcc9..191e5aea454f 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -14,7 +14,6 @@
 #define RXE_POOL_CACHE_FLAGS	(0)
 
 enum rxe_pool_flags {
-	RXE_POOL_INDEX		= BIT(1),
 	RXE_POOL_KEY		= BIT(2),
 	RXE_POOL_XARRAY		= BIT(3),
 	RXE_POOL_NO_ALLOC	= BIT(4),
@@ -57,7 +56,6 @@ struct rxe_pool_elem {
 	struct rb_node		key_node;
 
 	/* only used if indexed */
-	struct rb_node		index_node;
 	u32			index;
 };
 
@@ -81,16 +79,6 @@ struct rxe_pool {
 		u32			next;
 	} xarray;
 
-	/* only used if indexed */
-	struct {
-		struct rb_root		tree;
-		unsigned long		*table;
-		size_t			table_size;
-		u32			last;
-		u32			max_index;
-		u32			min_index;
-	} index;
-
 	/* only used if keyed */
 	struct {
 		struct rb_root		tree;
@@ -103,7 +91,7 @@ struct rxe_pool {
  * number of elements. gets parameters from rxe_type_info
  * pool elements will be allocated out of a slab cache
  */
-int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
+void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
 		  enum rxe_elem_type type, u32 max_elem);
 
 /* free resources from object pool */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH for-next 7/7] RDMA/rxe: Rename XARRAY as INDEX
  2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
                   ` (5 preceding siblings ...)
  2021-10-20 22:05 ` [PATCH for-next 6/7] RDMA/rxe: Remove old index code from rxe_pool.c Bob Pearson
@ 2021-10-20 22:05 ` Bob Pearson
  6 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-20 22:05 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Rename RXE_POOL_XARRAY to RXE_POOL_INDEX and rename the .._xarray_..
helper functions to their .._index_.. equivalents. This completes the
process of replacing the red-black trees by xarrays.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 38 +++++++++-------------------
 drivers/infiniband/sw/rxe/rxe_pool.h |  4 +--
 2 files changed, 14 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 6e51483c0494..6367cf68d19d 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -25,7 +25,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.name		= "rxe-ah",
 		.size		= sizeof(struct rxe_ah),
 		.elem_offset	= offsetof(struct rxe_ah, elem),
-		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_AH_INDEX,
 		.max_index	= RXE_MAX_AH_INDEX,
 	},
@@ -34,7 +34,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_srq),
 		.elem_offset	= offsetof(struct rxe_srq, elem),
 		.cleanup	= rxe_srq_cleanup,
-		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_SRQ_INDEX,
 		.max_index	= RXE_MAX_SRQ_INDEX,
 	},
@@ -43,7 +43,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_qp),
 		.elem_offset	= offsetof(struct rxe_qp, elem),
 		.cleanup	= rxe_qp_cleanup,
-		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.min_index	= RXE_MIN_QP_INDEX,
 		.max_index	= RXE_MAX_QP_INDEX,
 	},
@@ -59,7 +59,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_mr),
 		.elem_offset	= offsetof(struct rxe_mr, elem),
 		.cleanup	= rxe_mr_cleanup,
-		.flags		= RXE_POOL_XARRAY,
+		.flags		= RXE_POOL_INDEX,
 		.max_index	= RXE_MAX_MR_INDEX,
 		.min_index	= RXE_MIN_MR_INDEX,
 	},
@@ -68,7 +68,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
 		.cleanup	= rxe_mw_cleanup,
-		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
 	},
@@ -103,7 +103,7 @@ void rxe_pool_init(
 
 	atomic_set(&pool->num_elem, 0);
 
-	if (info->flags & RXE_POOL_XARRAY) {
+	if (info->flags & RXE_POOL_INDEX) {
 		xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC);
 		pool->xarray.limit.max = info->max_index;
 		pool->xarray.limit.min = info->min_index;
@@ -173,7 +173,7 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 
-	if (pool->flags & RXE_POOL_XARRAY) {
+	if (pool->flags & RXE_POOL_INDEX) {
 		err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem,
 					pool->xarray.limit,
 					&pool->xarray.next, GFP_KERNEL);
@@ -264,7 +264,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 
-	if (pool->flags & RXE_POOL_XARRAY) {
+	if (pool->flags & RXE_POOL_INDEX) {
 		err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem,
 					pool->xarray.limit,
 					&pool->xarray.next, GFP_KERNEL);
@@ -283,7 +283,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	return -EINVAL;
 }
 
-static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index)
+void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	void *obj = NULL;
@@ -295,31 +295,17 @@ static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index)
 	return obj;
 }
 
-static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index)
+void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	void *obj;
 
 	xa_lock_bh(&pool->xarray.xa);
-	obj = __rxe_get_xarray_locked(pool, index);
+	obj = rxe_pool_get_index_locked(pool, index);
 	xa_unlock_bh(&pool->xarray.xa);
 
 	return obj;
 }
 
-void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
-{
-	if (pool->flags & RXE_POOL_XARRAY)
-		return __rxe_get_xarray_locked(pool, index);
-	return NULL;
-}
-
-void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
-{
-	if (pool->flags & RXE_POOL_XARRAY)
-		return __rxe_get_xarray(pool, index);
-	return NULL;
-}
-
 void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key)
 {
 	struct rb_node *node;
@@ -413,7 +399,7 @@ static int __rxe_fini(struct rxe_pool_elem *elem)
 
 	done = refcount_dec_if_one(&elem->refcnt);
 	if (done) {
-		if (pool->flags & RXE_POOL_XARRAY)
+		if (pool->flags & RXE_POOL_INDEX)
 			__xa_erase(&pool->xarray.xa, elem->index);
 		if (pool->flags & RXE_POOL_KEY)
 			rb_erase(&elem->key_node, &pool->key.tree);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 191e5aea454f..95a6b1e5232f 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -14,8 +14,8 @@
 #define RXE_POOL_CACHE_FLAGS	(0)
 
 enum rxe_pool_flags {
+	RXE_POOL_INDEX		= BIT(1),
 	RXE_POOL_KEY		= BIT(2),
-	RXE_POOL_XARRAY		= BIT(3),
 	RXE_POOL_NO_ALLOC	= BIT(4),
 };
 
@@ -72,7 +72,7 @@ struct rxe_pool {
 	size_t			elem_size;
 	size_t			elem_offset;
 
-	/* only used if xarray */
+	/* only used if indexed */
 	struct {
 		struct xarray		xa;
 		struct xa_limit		limit;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c
  2021-10-20 22:05 ` [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c Bob Pearson
@ 2021-10-21 11:53   ` Dennis Dalessandro
  2021-10-21 17:02     ` Bob Pearson
  0 siblings, 1 reply; 10+ messages in thread
From: Dennis Dalessandro @ 2021-10-21 11:53 UTC (permalink / raw)
  To: Bob Pearson, jgg, zyjzyj2000, linux-rdma

On 10/20/21 6:05 PM, Bob Pearson wrote:
> -		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
> +		//.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
> +		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,

Assume you meant to remove that comment line.

-Denny

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c
  2021-10-21 11:53   ` Dennis Dalessandro
@ 2021-10-21 17:02     ` Bob Pearson
  0 siblings, 0 replies; 10+ messages in thread
From: Bob Pearson @ 2021-10-21 17:02 UTC (permalink / raw)
  To: Dennis Dalessandro, jgg, zyjzyj2000, linux-rdma

On 10/21/21 6:53 AM, Dennis Dalessandro wrote:
> On 10/20/21 6:05 PM, Bob Pearson wrote:
>> -		.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
>> +		//.flags		= RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
>> +		.flags		= RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC,
> 
> Assume you meant to remove that comment line.
> 
> -Denny
> 

This was an intentional intermediate step and a hint that you can run with either choice. By the end of the series all the old code goes away, including the comment.

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2021-10-21 17:02 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-20 22:05 [PATCH for-next 0/7] Replace red-black tree by xarray Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 2/7] RDMA/rxe: Cleanup rxe_pool_entry Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c Bob Pearson
2021-10-21 11:53   ` Dennis Dalessandro
2021-10-21 17:02     ` Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 4/7] RDMA/rxe: Replace pool_lock by xa_lock Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 5/7] RDMA/rxe: Convert remaining pools to xarrays Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 6/7] RDMA/rxe: Remove old index code from rxe_pool.c Bob Pearson
2021-10-20 22:05 ` [PATCH for-next 7/7] RDMA/rxe: Rename XARRAY as INDEX Bob Pearson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).