* [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
@ 2022-04-04 21:50 Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK Bob Pearson
                   ` (10 more replies)
  0 siblings, 11 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

There are several race conditions discovered in the current rdma_rxe
driver.  They mostly relate to races between normal operations and
destroying objects.  This patch series
 - Makes several minor cleanups in rxe_pool.[ch]
 - Adds wait for completions to the paths in verbs APIs which destroy
   objects.
 - Changes read side locking to rcu.
 - Moves object cleanup code to after ref count is zero
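
As a minimal sketch, the destroy-side pattern the series converges on
looks like this (illustrative only; the real code is in rxe_pool.[ch]):

	/* each pool element carries a kref and a completion */
	struct rxe_pool_elem {
		...
		struct kref		ref_cnt;
		struct completion	complete;
	};

	/* readers take a reference only while the count is non-zero */
	if (!rxe_get(obj))	/* kref_get_unless_zero() underneath */
		return;		/* object is already being destroyed */

	/* destroy verbs drop their reference, wait for all other
	 * references to be dropped, then run the type specific
	 * cleanup and free the memory
	 */
	rxe_cleanup(obj);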

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v13
  Rebased to current for-next
  Addressed Jason's comments on patch 1, 8 and 9. 8 and 9 are
  combined into one patch.
v12
  Rebased to current wip/jgg-for-next
  Dropped patches already accepted from v11.
  Moved all object cleanup code to type specific cleanup routines.
  Renamed to match Jason's requests.
  Addressed some other issues raised.
  Kept the contentious rxe_hide() function but renamed it to
  rxe_disable_lookup(), which says what it does. I am still convinced
  this is cleaner than alternatives such as moving xa_erase to the
  top of the destroy routines for just the indexed objects.
v11
  Rebased to current for-next.
  Reordered patches and made other changes to respond to issues
  reported by Jason Gunthorpe.
v10
  Rebased to current wip/jgg-for-next.
  Split some patches into smaller ones.
v9
  Corrected issues reported by Jason Gunthorpe.
  Converted locking in rxe_mcast.c and rxe_pool.c to use RCU.
  Split up the patches into smaller changes.
v8
  Fixed an additional race in 3/8 which was not handled correctly.
v7
  Corrected issues reported by Jason Gunthorpe
Link: https://lore.kernel.org/linux-rdma/20211207190947.GH6385@nvidia.com/
Link: https://lore.kernel.org/linux-rdma/20211207191857.GI6385@nvidia.com/
Link: https://lore.kernel.org/linux-rdma/20211207192824.GJ6385@nvidia.com/
v6
  Fixed a kzalloc flags bug.
  Fixed comment bug reported by 'Kernel Test Robot'.
  Changed type of rxe_pool.c in __rxe_fini().
v5
  Removed patches already accepted into for-next and addressed comments
  from Jason Gunthorpe.
v4
  Restructured patch series to change to xarray earlier which
  greatly simplified the changes.
  Rebased to current for-next
v3
  Changed rxe_alloc to use GFP_KERNEL
  Addressed other comments by Jason Gunthorpe
  Merged the previous 06/10 and 07/10 patches into one since they overlapped
  Added some minor cleanups as 10/10
v2
  Rebased to current for-next.
  Added 4 additional patches

Bob Pearson (10):
  RDMA/rxe: Remove IB_SRQ_INIT_MASK
  RDMA/rxe: Add rxe_srq_cleanup()
  RDMA/rxe: Check rxe_get() return value
  RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
  RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
  RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
  RDMA/rxe: Enforce IBA C11-17
  RDMA/rxe: Stop lookup of partially built objects
  RDMA/rxe: Convert read side locking to rcu
  RDMA/rxe: Cleanup rxe_pool.c

 drivers/infiniband/sw/rxe/rxe_comp.c  |   3 +-
 drivers/infiniband/sw/rxe/rxe_loc.h   |  17 ++-
 drivers/infiniband/sw/rxe/rxe_mr.c    |  12 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  61 +++++-----
 drivers/infiniband/sw/rxe/rxe_pool.c  | 157 ++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_pool.h  |  11 +-
 drivers/infiniband/sw/rxe/rxe_qp.c    |  22 ++--
 drivers/infiniband/sw/rxe/rxe_req.c   |   3 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |   3 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   | 129 +++++++++++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  68 ++++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.h |   1 +
 12 files changed, 325 insertions(+), 162 deletions(-)

base-commit: 3123109284176b1532874591f7c81f3837bbdc17
-- 
2.32.0



* [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 02/10] RDMA/rxe: Add rxe_srq_cleanup() Bob Pearson
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the #define IB_SRQ_INIT_MASK is used to distinguish
the rxe_create_srq verb from the rxe_modify_srq verb so that some code
can be shared between these two subroutines.

This commit splits rxe_srq_chk_attr into two subroutines, rxe_srq_chk_init
and rxe_srq_chk_attr, which handle the create_srq and modify_srq verbs
separately.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   9 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   | 118 +++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_verbs.c |   4 +-
 3 files changed, 74 insertions(+), 57 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2ffbe3390668..ff6cae2c2949 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -159,15 +159,12 @@ void retransmit_timer(struct timer_list *t);
 void rnr_nak_timer(struct timer_list *t);
 
 /* rxe_srq.c */
-#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
-
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
-
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init);
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp);
-
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 0c0721f04357..e2dcfc5d97e3 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -6,64 +6,34 @@
 
 #include <linux/vmalloc.h>
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 
-int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init)
 {
-	if (srq && srq->error) {
-		pr_warn("srq in error state\n");
+	struct ib_srq_attr *attr = &init->attr;
+
+	if (attr->max_wr > rxe->attr.max_srq_wr) {
+		pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+			attr->max_wr, rxe->attr.max_srq_wr);
 		goto err1;
 	}
 
-	if (mask & IB_SRQ_MAX_WR) {
-		if (attr->max_wr > rxe->attr.max_srq_wr) {
-			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
-				attr->max_wr, rxe->attr.max_srq_wr);
-			goto err1;
-		}
-
-		if (attr->max_wr <= 0) {
-			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
-			goto err1;
-		}
-
-		if (srq && srq->limit && (attr->max_wr < srq->limit)) {
-			pr_warn("max_wr (%d) < srq->limit (%d)\n",
-				attr->max_wr, srq->limit);
-			goto err1;
-		}
-
-		if (attr->max_wr < RXE_MIN_SRQ_WR)
-			attr->max_wr = RXE_MIN_SRQ_WR;
+	if (attr->max_wr <= 0) {
+		pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+		goto err1;
 	}
 
-	if (mask & IB_SRQ_LIMIT) {
-		if (attr->srq_limit > rxe->attr.max_srq_wr) {
-			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
-				attr->srq_limit, rxe->attr.max_srq_wr);
-			goto err1;
-		}
+	if (attr->max_wr < RXE_MIN_SRQ_WR)
+		attr->max_wr = RXE_MIN_SRQ_WR;
 
-		if (srq && (attr->srq_limit > srq->rq.queue->buf->index_mask)) {
-			pr_warn("srq_limit (%d) > cur limit(%d)\n",
-				attr->srq_limit,
-				 srq->rq.queue->buf->index_mask);
-			goto err1;
-		}
+	if (attr->max_sge > rxe->attr.max_srq_sge) {
+		pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
+			attr->max_sge, rxe->attr.max_srq_sge);
+		goto err1;
 	}
 
-	if (mask == IB_SRQ_INIT_MASK) {
-		if (attr->max_sge > rxe->attr.max_srq_sge) {
-			pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
-				attr->max_sge, rxe->attr.max_srq_sge);
-			goto err1;
-		}
-
-		if (attr->max_sge < RXE_MIN_SRQ_SGE)
-			attr->max_sge = RXE_MIN_SRQ_SGE;
-	}
+	if (attr->max_sge < RXE_MIN_SRQ_SGE)
+		attr->max_sge = RXE_MIN_SRQ_SGE;
 
 	return 0;
 
@@ -93,8 +63,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	spin_lock_init(&srq->rq.consumer_lock);
 
 	type = QUEUE_TYPE_FROM_CLIENT;
-	q = rxe_queue_init(rxe, &srq->rq.max_wr,
-			srq_wqe_size, type);
+	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
 	if (!q) {
 		pr_warn("unable to allocate queue for srq\n");
 		return -ENOMEM;
@@ -121,6 +90,57 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 	return 0;
 }
 
+int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+{
+	if (srq->error) {
+		pr_warn("srq in error state\n");
+		goto err1;
+	}
+
+	if (mask & IB_SRQ_MAX_WR) {
+		if (attr->max_wr > rxe->attr.max_srq_wr) {
+			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
+				attr->max_wr, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->max_wr <= 0) {
+			pr_warn("max_wr(%d) <= 0\n", attr->max_wr);
+			goto err1;
+		}
+
+		if (srq->limit && (attr->max_wr < srq->limit)) {
+			pr_warn("max_wr (%d) < srq->limit (%d)\n",
+				attr->max_wr, srq->limit);
+			goto err1;
+		}
+
+		if (attr->max_wr < RXE_MIN_SRQ_WR)
+			attr->max_wr = RXE_MIN_SRQ_WR;
+	}
+
+	if (mask & IB_SRQ_LIMIT) {
+		if (attr->srq_limit > rxe->attr.max_srq_wr) {
+			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
+				attr->srq_limit, rxe->attr.max_srq_wr);
+			goto err1;
+		}
+
+		if (attr->srq_limit > srq->rq.queue->buf->index_mask) {
+			pr_warn("srq_limit (%d) > cur limit(%d)\n",
+				attr->srq_limit,
+				srq->rq.queue->buf->index_mask);
+			goto err1;
+		}
+	}
+
+	return 0;
+
+err1:
+	return -EINVAL;
+}
+
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 67184b0281a0..c1d543e24281 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -7,8 +7,8 @@
 #include <linux/dma-mapping.h>
 #include <net/addrconf.h>
 #include <rdma/uverbs_ioctl.h>
+
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 #include "rxe_hw_counters.h"
 
@@ -295,7 +295,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 		uresp = udata->outbuf;
 	}
 
-	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
+	err = rxe_srq_chk_init(rxe, init);
 	if (err)
 		goto err1;
 
-- 
2.32.0



* [PATCH for-next v13 02/10] RDMA/rxe: Add rxe_srq_cleanup()
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value Bob Pearson
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the cleanup code from rxe_destroy_srq() to rxe_srq_cleanup(),
which is called after all references are dropped, to allow code
depending on the srq object to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_srq.c   | 11 +++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++++++----------------
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index ff6cae2c2949..18f3c5dac381 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -37,7 +37,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
 void rxe_cq_disable(struct rxe_cq *cq);
 
-void rxe_cq_cleanup(struct rxe_pool_elem *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mcast.c */
 struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -81,7 +81,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_elem *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -89,7 +89,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -168,6 +168,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 
 void rxe_dealloc(struct ib_device *ib_dev);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..5963b1429ad8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -46,6 +46,7 @@ static const struct rxe_type_info {
 		.name		= "srq",
 		.size		= sizeof(struct rxe_srq),
 		.elem_offset	= offsetof(struct rxe_srq, elem),
+		.cleanup	= rxe_srq_cleanup,
 		.min_index	= RXE_MIN_SRQ_INDEX,
 		.max_index	= RXE_MAX_SRQ_INDEX,
 		.max_elem	= RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index e2dcfc5d97e3..02b39498c370 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -174,3 +174,14 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->rq.queue = NULL;
 	return err;
 }
+
+void rxe_srq_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_srq *srq = container_of(elem, typeof(*srq), elem);
+
+	if (srq->pd)
+		rxe_put(srq->pd);
+
+	if (srq->rq.queue)
+		rxe_queue_cleanup(srq->rq.queue);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index c1d543e24281..83271bea83b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -286,36 +286,35 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_create_srq_resp __user *uresp = NULL;
 
-	if (init->srq_type != IB_SRQT_BASIC)
-		return -EOPNOTSUPP;
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
 		uresp = udata->outbuf;
 	}
 
+	if (init->srq_type != IB_SRQT_BASIC)
+		return -EOPNOTSUPP;
+
 	err = rxe_srq_chk_init(rxe, init);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_add_to_pool(&rxe->srq_pool, srq);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	rxe_get(pd);
 	srq->pd = pd;
 
 	err = rxe_srq_from_init(rxe, srq, init, udata, uresp);
 	if (err)
-		goto err2;
+		goto err_cleanup;
 
 	return 0;
 
-err2:
-	rxe_put(pd);
+err_cleanup:
 	rxe_put(srq);
-err1:
+err_out:
 	return err;
 }
 
@@ -339,15 +338,15 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	return 0;
 
-err1:
+err_out:
 	return err;
 }
 
@@ -368,10 +367,6 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	if (srq->rq.queue)
-		rxe_queue_cleanup(srq->rq.queue);
-
-	rxe_put(srq->pd);
 	rxe_put(srq);
 	return 0;
 }
-- 
2.32.0



* [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 02/10] RDMA/rxe: Add rxe_srq_cleanup() Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-08 17:52   ` Jason Gunthorpe
  2022-04-04 21:50 ` [PATCH for-next v13 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup() Bob Pearson
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

In the tasklets (completer, responder, and requester) check the
return value from rxe_get() to detect failures to get a reference.
This only occurs if the qp has had its reference count drop to
zero, which indicates that it should no longer be used. This is
in preparation for an upcoming change that will move the qp cleanup
code to rxe_qp_cleanup().

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 3 ++-
 drivers/infiniband/sw/rxe/rxe_req.c  | 3 ++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 138b3e7d3a5f..da3a398053b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -562,7 +562,8 @@ int rxe_completer(void *arg)
 	enum comp_state state;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ae5fbc79dd5c..27aba921cc66 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -611,7 +611,8 @@ int rxe_requester(void *arg)
 	struct rxe_ah *ah;
 	struct rxe_av *av;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 16fc7ea1298d..1ed45c192cf5 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1250,7 +1250,8 @@ int rxe_responder(void *arg)
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;
 
-- 
2.32.0



* [PATCH for-next v13 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (2 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup() Bob Pearson
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the code from rxe_qp_destroy() to rxe_qp_do_cleanup().
This allows flows holding references to qp to complete before
the qp object is torn down.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 -
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 18f3c5dac381..0e022ae1b8a5 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -114,7 +114,6 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
 void rxe_qp_error(struct rxe_qp *qp);
 int rxe_qp_chk_destroy(struct rxe_qp *qp);
-void rxe_qp_destroy(struct rxe_qp *qp);
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 62acf890af6c..f5200777399c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -777,9 +777,11 @@ int rxe_qp_chk_destroy(struct rxe_qp *qp)
 	return 0;
 }
 
-/* called by the destroy qp verb */
-void rxe_qp_destroy(struct rxe_qp *qp)
+/* called when the last reference to the qp is dropped */
+static void rxe_qp_do_cleanup(struct work_struct *work)
 {
+	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
 	qp->valid = 0;
 	qp->qp_timeout_jiffies = 0;
 	rxe_cleanup_task(&qp->resp.task);
@@ -798,12 +800,6 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
-}
-
-/* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
-{
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 83271bea83b1..6738f1b4a543 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -490,7 +490,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_qp_destroy(qp);
 	rxe_put(qp);
 	return 0;
 }
-- 
2.32.0



* [PATCH for-next v13 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (3 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup() Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup() Bob Pearson
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the code which tears down an mr to rxe_mr_cleanup() to allow
operations holding a reference to the mr to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 60a31b718774..fc3942e04a1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -683,14 +683,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 
-	if (atomic_read(&mr->num_mw) > 0) {
-		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
-			__func__);
+	/* See IBA 10.6.7.2.6 */
+	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
-	}
 
-	mr->state = RXE_MR_STATE_INVALID;
-	rxe_put(mr_pd(mr));
 	rxe_put(mr);
 
 	return 0;
@@ -700,6 +696,8 @@ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
 
+	rxe_put(mr_pd(mr));
+
 	ib_umem_release(mr->umem);
 
 	if (mr->cur_map_set)
-- 
2.32.0



* [PATCH for-next v13 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (4 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup() Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 07/10] RDMA/rxe: Enforce IBA C11-17 Bob Pearson
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move code from rxe_dealloc_mw() to rxe_mw_cleanup() to allow
flows which hold a reference to mw to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mw.c   | 57 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.c |  1 +
 2 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index c86b2efd58f2..ba3f94c69171 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -28,40 +28,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }
 
-static void rxe_do_dealloc_mw(struct rxe_mw *mw)
-{
-	if (mw->mr) {
-		struct rxe_mr *mr = mw->mr;
-
-		mw->mr = NULL;
-		atomic_dec(&mr->num_mw);
-		rxe_put(mr);
-	}
-
-	if (mw->qp) {
-		struct rxe_qp *qp = mw->qp;
-
-		mw->qp = NULL;
-		rxe_put(qp);
-	}
-
-	mw->access = 0;
-	mw->addr = 0;
-	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
-}
-
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-
-	spin_lock_bh(&mw->lock);
-	rxe_do_dealloc_mw(mw);
-	spin_unlock_bh(&mw->lock);
 
 	rxe_put(mw);
-	rxe_put(pd);
 
 	return 0;
 }
@@ -328,3 +299,31 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 
 	return mw;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), elem);
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	rxe_put(pd);
+
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_put(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_put(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 5963b1429ad8..0fdde3d46949 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,6 +83,7 @@ static const struct rxe_type_info {
 		.name		= "mw",
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
+		.cleanup	= rxe_mw_cleanup,
 		.min_index	= RXE_MIN_MW_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.max_elem	= RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,
-- 
2.32.0



* [PATCH for-next v13 07/10] RDMA/rxe: Enforce IBA C11-17
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (5 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup() Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 08/10] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Add a counter to keep track of the number of WQs connected to
a CQ and return an error if destroy_cq() is called while the
counter is non-zero.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_qp.c    | 16 ++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index f5200777399c..18861b9edbfd 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -334,6 +334,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	qp->scq			= scq;
 	qp->srq			= srq;
 
+	atomic_inc(&rcq->num_wq);
+	atomic_inc(&scq->num_wq);
+
 	rxe_qp_init_misc(rxe, qp, init);
 
 	err = rxe_qp_init_req(rxe, qp, init, udata, uresp);
@@ -353,6 +356,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	rxe_queue_cleanup(qp->sq.queue);
 	qp->sq.queue = NULL;
 err1:
+	atomic_dec(&rcq->num_wq);
+	atomic_dec(&scq->num_wq);
+
 	qp->pd = NULL;
 	qp->rcq = NULL;
 	qp->scq = NULL;
@@ -810,10 +816,16 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 	if (qp->rq.queue)
 		rxe_queue_cleanup(qp->rq.queue);
 
-	if (qp->scq)
+	if (qp->scq) {
+		atomic_dec(&qp->scq->num_wq);
 		rxe_put(qp->scq);
+	}
+
-	if (qp->rcq)
+	if (qp->rcq) {
+		atomic_dec(&qp->rcq->num_wq);
 		rxe_put(qp->rcq);
+	}
+
 	if (qp->pd)
 		rxe_put(qp->pd);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 6738f1b4a543..4adcb93af9b1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -801,6 +801,12 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 
+	/* See IBA C11-17: The CI shall return an error if this Verb is
+	 * invoked while a Work Queue is still associated with the CQ.
+	 */
+	if (atomic_read(&cq->num_wq))
+		return -EINVAL;
+
 	rxe_cq_disable(cq);
 
 	rxe_put(cq);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e7eff1ca75e9..bb15a74f624e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -67,6 +67,7 @@ struct rxe_cq {
 	bool			is_dying;
 	bool			is_user;
 	struct tasklet_struct	comp_task;
+	atomic_t		num_wq;
 };
 
 enum wqe_state {
-- 
2.32.0



* [PATCH for-next v13 08/10] RDMA/rxe: Stop lookup of partially built objects
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (6 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 07/10] RDMA/rxe: Enforce IBA C11-17 Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:50 ` [PATCH for-next v13 09/10] RDMA/rxe: Convert read side locking to rcu Bob Pearson
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the rdma_rxe driver has a security weakness: it gives
indices to objects which are only partially initialized, allowing
external actors to gain access to them by sending packets which refer
to their index (e.g. qpn, rkey, etc.), causing unpredictable results.

This patch adds a new API, rxe_finalize(obj), which makes pool objects
for AH, QP, MR, and MW visible to lookups from their indices using
rxe_pool_get_index(). The create verbs call it only after the objects
are fully initialized.

It also adds a wait for completion to the destroy/dealloc verbs to
assure that all references have been dropped before returning to
rdma-core. This is done by a new rxe_pool API, rxe_cleanup(), which
drops a reference to the object and then waits for all other
references to be dropped. When the last reference is dropped the
object is completed by kref. After that rxe_cleanup() cleans up the
object and, if it was locally allocated, frees the memory.

Combined with deferring cleanup code to type-specific cleanup
routines, this allows all pending activity referring to objects to
complete before returning to rdma-core.
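
The intended calling pattern in the verbs code then looks roughly like
this (a sketch with made-up names - struct rxe_example and example_pool
do not exist - the real changes are in the rxe_verbs.c hunks below):

	static int example_create_verb(struct rxe_dev *rxe,
				       struct rxe_example *obj)
	{
		int err;

		/* the index is allocated but the xarray slot is left
		 * NULL so lookups cannot find the object yet
		 */
		err = rxe_add_to_pool(&rxe->example_pool, obj);
		if (err)
			return err;

		/* ... finish initializing obj ... */

		/* now visible to rxe_pool_get_index() */
		rxe_finalize(obj);

		return 0;
	}

	static int example_destroy_verb(struct rxe_example *obj)
	{
		/* hide the object from lookups, drop our reference
		 * and wait for all other references to be dropped;
		 * then the type specific cleanup runs and the memory
		 * is freed
		 */
		return rxe_cleanup(obj);
	}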

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  4 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 61 +++++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_pool.h  | 11 +++--
 drivers/infiniband/sw/rxe/rxe_verbs.c | 30 ++++++++-----
 5 files changed, 89 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fc3942e04a1f..9a5c2af6a56f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -687,7 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
 
-	rxe_put(mr);
+	rxe_cleanup(mr);
 
 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index ba3f94c69171..ebd95b6453da 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -25,6 +25,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);
 
+	rxe_finalize(mw);
+
 	return 0;
 }
 
@@ -32,7 +34,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
 
-	rxe_put(mw);
+	rxe_cleanup(mw);
 
 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 0fdde3d46949..38f435762238 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -6,6 +6,8 @@
 
 #include "rxe.h"
 
+#define RXE_POOL_TIMEOUT	(200)
+#define RXE_POOL_MAX_TIMEOUTS	(3)
 #define RXE_POOL_ALIGN		(16)
 
 static const struct rxe_type_info {
@@ -139,8 +141,11 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	/* allocate index in array but leave pointer as NULL so it
+	 * can't be looked up until rxe_finalize() is called */
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -167,8 +172,9 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_cnt;
@@ -201,9 +207,44 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
+
+	complete(&elem->complete);
+}
+
+int __rxe_cleanup(struct rxe_pool_elem *elem)
+{
 	struct rxe_pool *pool = elem->pool;
+	struct xarray *xa = &pool->xa;
+	static int timeout = RXE_POOL_TIMEOUT;
+	unsigned long flags;
+	int ret, err = 0;
+	void *xa_ret;
 
-	xa_erase(&pool->xa, elem->index);
+	/* erase xarray entry to prevent looking up
+	 * the pool elem from its index
+	 */
+	xa_lock_irqsave(xa, flags);
+	xa_ret = __xa_erase(xa, elem->index);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(xa_ret));
+
+	/* if this is the last call to rxe_put complete the
+	 * object. It is safe to touch elem after this since
+	 * it is freed below
+	 */
+	__rxe_put(elem);
+
+	if (timeout) {
+		ret = wait_for_completion_timeout(&elem->complete, timeout);
+		if (!ret) {
+			pr_warn("Timed out waiting for %s#%d to complete\n",
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}
 
 	if (pool->cleanup)
 		pool->cleanup(elem);
@@ -212,6 +253,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);
 
 	atomic_dec(&pool->num_elem);
+
+	return err;
 }
 
 int __rxe_get(struct rxe_pool_elem *elem)
@@ -223,3 +266,15 @@ int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
+
+void __rxe_finalize(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(xa, elem->index, elem, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 24bcc786c1b3..83f96b2d5096 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void			*obj;
 	struct kref		ref_cnt;
 	struct list_head	list;
+	struct completion	complete;
 	u32			index;
 };
 
@@ -37,6 +38,7 @@ struct rxe_pool {
 	void			(*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags	flags;
 	enum rxe_elem_type	type;
+	unsigned int		timeouts;
 
 	unsigned int		max_elem;
 	atomic_t		num_elem;
@@ -63,20 +65,23 @@ void *rxe_alloc(struct rxe_pool *pool);
 
 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
-
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)
 
 /* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);
 
 int __rxe_get(struct rxe_pool_elem *elem);
-
 #define rxe_get(obj) __rxe_get(&(obj)->elem)
 
 int __rxe_put(struct rxe_pool_elem *elem);
-
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
+int __rxe_cleanup(struct rxe_pool_elem *elem);
+#define rxe_cleanup(obj) __rxe_cleanup(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
+void __rxe_finalize(struct rxe_pool_elem *elem);
+#define rxe_finalize(obj) __rxe_finalize(&(obj)->elem)
+
 #endif /* RXE_POOL_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 4adcb93af9b1..5ca21922b5e7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);
 
-	rxe_put(uc);
+	rxe_cleanup(uc);
 }
 
 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	rxe_put(pd);
+	rxe_cleanup(pd);
 	return 0;
 }
 
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
 					 sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_cleanup(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah,
 	}
 
 	rxe_init_av(init_attr->ah_attr, &ah->av);
+	rxe_finalize(ah);
+
 	return 0;
 }
 
@@ -228,7 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 {
 	struct rxe_ah *ah = to_rah(ibah);
 
-	rxe_put(ah);
+	rxe_cleanup(ah);
 	return 0;
 }
 
@@ -313,7 +315,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	return 0;
 
 err_cleanup:
-	rxe_put(srq);
+	rxe_cleanup(srq);
 err_out:
 	return err;
 }
@@ -367,7 +369,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	rxe_put(srq);
+	rxe_cleanup(srq);
 	return 0;
 }
 
@@ -434,10 +436,11 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		goto qp_init;
 
+	rxe_finalize(qp);
 	return 0;
 
 qp_init:
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return err;
 }
 
@@ -490,7 +493,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return 0;
 }
 
@@ -809,7 +812,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 
 	rxe_cq_disable(cq);
 
-	rxe_put(cq);
+	rxe_cleanup(cq);
 	return 0;
 }
 
@@ -904,6 +907,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
+	rxe_finalize(mr);
 
 	return &mr->ibmr;
 }
@@ -932,11 +936,13 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -964,11 +970,13 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	if (err)
 		goto err2;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err1:
 	return ERR_PTR(err);
 }
-- 
2.32.0



* [PATCH for-next v13 09/10] RDMA/rxe: Convert read side locking to rcu
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (7 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 08/10] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
@ 2022-04-04 21:50 ` Bob Pearson
  2022-04-04 21:51 ` [PATCH for-next v13 10/10] RDMA/rxe: Cleanup rxe_pool.c Bob Pearson
  2022-04-08 18:06 ` [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Jason Gunthorpe
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:50 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Use rcu_read_lock() to protect read side operations in rxe_pool.c.
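
Since lookups now run under rcu_read_lock() rather than the xarray
spin lock, an object erased from the xarray may still be reachable by
a concurrent reader, so its memory must not be reused until a grace
period has elapsed. A sketch of the interaction (condensed from the
changes below):

	/* read side, rxe_pool_get_index() */
	rcu_read_lock();
	elem = xa_load(xa, index);	/* may observe a just-erased elem */
	if (elem && kref_get_unless_zero(&elem->ref_cnt))
		obj = elem->obj;
	else
		obj = NULL;
	rcu_read_unlock();

	/* free side, __rxe_cleanup(): defer the free past any readers
	 * still inside their read-side critical sections
	 */
	kfree_rcu(elem->obj);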

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 38f435762238..cbe7b05c3b66 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -190,16 +190,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	struct xarray *xa = &pool->xa;
-	unsigned long flags;
 	void *obj;
 
-	xa_lock_irqsave(xa, flags);
+	rcu_read_lock();
 	elem = xa_load(xa, index);
 	if (elem && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
 	else
 		obj = NULL;
-	xa_unlock_irqrestore(xa, flags);
+	rcu_read_unlock();
 
 	return obj;
 }
@@ -250,7 +249,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 		pool->cleanup(elem);
 
 	if (pool->flags & RXE_POOL_ALLOC)
-		kfree(elem->obj);
+		kfree_rcu(elem->obj);
 
 	atomic_dec(&pool->num_elem);
 
-- 
2.32.0



* [PATCH for-next v13 10/10] RDMA/rxe: Cleanup rxe_pool.c
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (8 preceding siblings ...)
  2022-04-04 21:50 ` [PATCH for-next v13 09/10] RDMA/rxe: Convert read side locking to rcu Bob Pearson
@ 2022-04-04 21:51 ` Bob Pearson
  2022-04-08 18:06 ` [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Jason Gunthorpe
  10 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-04 21:51 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Minor cleanup of rxe_pool.c. Add documentation comment headers
for the subroutines. Increase the alignment for pool elements.
Convert some printk()s to WARN_ON()s.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 89 +++++++++++++++++++++++-----
 1 file changed, 73 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index cbe7b05c3b66..3b98650fb971 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -8,7 +8,7 @@
 
 #define RXE_POOL_TIMEOUT	(200)
 #define RXE_POOL_MAX_TIMEOUTS	(3)
-#define RXE_POOL_ALIGN		(16)
+#define RXE_POOL_ALIGN		(64)
 
 static const struct rxe_type_info {
 	const char *name;
@@ -120,24 +120,35 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	WARN_ON(!xa_empty(&pool->xa));
 }
 
+/**
+ * rxe_alloc - allocate a new pool object
+ * @pool: object pool
+ *
+ * Context: in task.
+ * Returns: object on success else an ERR_PTR
+ */
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
 	void *obj;
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
-		return NULL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	obj = kzalloc(pool->elem_size, GFP_KERNEL);
-	if (!obj)
-		goto err_cnt;
+	if (!obj) {
+		err = -ENOMEM;
+		goto err_dec;
+	}
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
-
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
@@ -154,20 +165,32 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 err_free:
 	kfree(obj);
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return NULL;
+err_out:
+	return ERR_PTR(err);
 }
 
+/**
+ * __rxe_add_to_pool - add rdma-core allocated object to rxe object pool
+ * @pool: object pool
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Context: in task.
+ * Returns: 0 on success else an error
+ */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
-		return -EINVAL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
@@ -177,15 +200,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
-		goto err_cnt;
+		goto err_dec;
 
 	return 0;
 
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return -EINVAL;
+err_out:
+	return err;
 }
 
+/**
+ * rxe_pool_get_index - find object in pool with given index
+ * @pool: object pool
+ * @index: index
+ *
+ * Returns: object on success else NULL
+ */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
@@ -203,6 +234,10 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 	return obj;
 }
 
+/**
+ * rxe_elem_release - complete object when last reference is dropped
+ * @kref: kref contained in rxe_pool_elem
+ */
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
@@ -210,6 +245,12 @@ static void rxe_elem_release(struct kref *kref)
 	complete(&elem->complete);
 }
 
+/**
+ * __rxe_cleanup - cleanup object after waiting for all refs to be dropped
+ * @elem: rxe_pool_elem
+ *
+ * Returns: 0 on success else an error
+ */
 int __rxe_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
@@ -229,7 +270,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 
 	/* if this is the last call to rxe_put complete the
 	 * object. It is safe to touch elem after this since
-	 * it is freed below
+	 * it is freed below if locally allocated
 	 */
 	__rxe_put(elem);
 
@@ -256,16 +297,32 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	return err;
 }
 
+/**
+ * __rxe_get - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }
 
+/**
+ * __rxe_put - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
 
+/**
+ * __rxe_finalize - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_finalize(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;
-- 
2.32.0



* Re: [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value
  2022-04-04 21:50 ` [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value Bob Pearson
@ 2022-04-08 17:52   ` Jason Gunthorpe
  2022-04-20 22:47     ` Bob Pearson
  0 siblings, 1 reply; 18+ messages in thread
From: Jason Gunthorpe @ 2022-04-08 17:52 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Mon, Apr 04, 2022 at 04:50:53PM -0500, Bob Pearson wrote:
> In the tasklets (completer, responder, and requester) check the
> return value from rxe_get() to detect failures to get a reference.
> This only occurs if the qp has had its reference count drop to
> zero, which indicates that it should no longer be used. This is
> in preparation for an upcoming change that will move the qp cleanup
> code to rxe_qp_cleanup().

These need some comments explaining how this is safe..

It looks to me like it works because the 0 ref keeps the memory alive
while a work queue triggers rxe_cleanup_task() (though who fences the
responder task?)

At least after the next patch, I'm a little unclear how this works
at this moment..

Jason


* Re: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
  2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
                   ` (9 preceding siblings ...)
  2022-04-04 21:51 ` [PATCH for-next v13 10/10] RDMA/rxe: Cleanup rxe_pool.c Bob Pearson
@ 2022-04-08 18:06 ` Jason Gunthorpe
  2022-04-08 18:15   ` Bob Pearson
  2022-04-20 23:04   ` Bob Pearson
  10 siblings, 2 replies; 18+ messages in thread
From: Jason Gunthorpe @ 2022-04-08 18:06 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Mon, Apr 04, 2022 at 04:50:50PM -0500, Bob Pearson wrote:
> There are several race conditions discovered in the current rdma_rxe
> driver.  They mostly relate to races between normal operations and
> destroying objects.  This patch series
>  - Makes several minor cleanups in rxe_pool.[ch]
>  - Adds wait for completions to the paths in verbs APIs which destroy
>    objects.
>  - Changes read side locking to rcu.
>  - Moves object cleanup code to after ref count is zero

This all seems fine to me now, except for the question about the
tasklets

Thanks,
Jason


* Re: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
  2022-04-08 18:06 ` [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Jason Gunthorpe
@ 2022-04-08 18:15   ` Bob Pearson
  2022-04-20 23:04   ` Bob Pearson
  1 sibling, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-08 18:15 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: zyjzyj2000, linux-rdma

On 4/8/22 13:06, Jason Gunthorpe wrote:
> On Mon, Apr 04, 2022 at 04:50:50PM -0500, Bob Pearson wrote:
>> There are several race conditions discovered in the current rdma_rxe
>> driver.  They mostly relate to races between normal operations and
>> destroying objects.  This patch series
>>  - Makes several minor cleanups in rxe_pool.[ch]
>>  - Adds wait for completions to the paths in verbs APIs which destroy
>>    objects.
>>  - Changes read side locking to rcu.
>>  - Moves object cleanup code to after ref count is zero
> 
> This all seems fine to me now, except for the question about the
> tasklets
> 
> Thanks,
> Jason
Thanks. At the moment the first priority is to figure out how
blktests regressed in 5.18-rc1+. Hopefully it will be easy to track down.

Bob


* Re: [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value
  2022-04-08 17:52   ` Jason Gunthorpe
@ 2022-04-20 22:47     ` Bob Pearson
  0 siblings, 0 replies; 18+ messages in thread
From: Bob Pearson @ 2022-04-20 22:47 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: zyjzyj2000, linux-rdma

On 4/8/22 12:52, Jason Gunthorpe wrote:
> On Mon, Apr 04, 2022 at 04:50:53PM -0500, Bob Pearson wrote:
>> In the tasklets (completer, responder, and requester) check the
>> return value from rxe_get() to detect failures to get a reference.
>> This only occurs if the qp has had its reference count drop to
>> zero, which indicates that it should no longer be used. This is
>> in preparation for an upcoming change that will move the qp cleanup
>> code to rxe_qp_cleanup().
> 
> These need some comments explaining how this is safe..
> 
> It looks to me like it works because the 0 ref keeps the memory alive
> while a work queue triggers rxe_cleanup_task() (though who fences the
> responder task?)
> 
> At least after the next patch, I'm a little unclear how this works
> at this moment..
> 
> Jason

I started writing the comment (here)

    If rxe_get() fails qp is not going to be around for long
    because its ref count has gone to zero and rxe_complete()
    is cleaning up and returning to rdma-core which will
    free the qp. However rxe_do_qp_cleanup() has to finish first
    and it will wait for the tasklets to finish running.

    This fixes a hard bug to solve since the code calling
    rxe_run_task() will hold a valid reference on qp but the
    tasklet can be deferred until later and that reference may
    be gone when the tasklet starts.

but I realized that at the end of the day there isn't a problem because
complete/wait_for_completion together with qp cleanup code shutting down
the tasklets means that the race won't happen once the series is all in
place. So I will just drop that patch.
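
Roughly, the ordering that makes this safe once the whole series is in
place (my sketch of the intended flow, not code from the patches):

	/*
	 *   destroy verb                   last rxe_put() anywhere
	 *   ------------                   -----------------------
	 *   rxe_cleanup(qp)
	 *     __xa_erase(index)            no new lookups can succeed
	 *     __rxe_put(qp)  ----------->  ref count drops to 0
	 *     wait_for_completion() <----  complete(&elem->complete)
	 *     rxe_qp_do_cleanup()
	 *       rxe_cleanup_task(...)      tasklets fenced here
	 */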

Bob


* Re: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
  2022-04-08 18:06 ` [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Jason Gunthorpe
  2022-04-08 18:15   ` Bob Pearson
@ 2022-04-20 23:04   ` Bob Pearson
  2022-04-21  2:13     ` Zhu Yanjun
  1 sibling, 1 reply; 18+ messages in thread
From: Bob Pearson @ 2022-04-20 23:04 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: zyjzyj2000, linux-rdma

On 4/8/22 13:06, Jason Gunthorpe wrote:
> On Mon, Apr 04, 2022 at 04:50:50PM -0500, Bob Pearson wrote:
>> There are several race conditions discovered in the current rdma_rxe
>> driver.  They mostly relate to races between normal operations and
>> destroying objects.  This patch series
>>  - Makes several minor cleanups in rxe_pool.[ch]
>>  - Adds wait for completions to the paths in verbs APIs which destroy
>>    objects.
>>  - Changes read side locking to rcu.
>>  - Moves object cleanup code to after ref count is zero
> 
> This all seems fine to me now, except for the question about the
> tasklets
> 
> Thanks,
> Jason

There has been a long delay because of the mr = NULL bug and the locking
problems. With the following patches applied (last to first) I do not
see any lockdep warnings, seg faults or anything else in dmesg for
long runs of

	pyverbs
	perftests (ib_xxx_bw, ib_xxx_lat)
	rping (node to node)
	blktests (srp)

These patches were in v13 of the "Fix race conditions" patch. I will send v14 today.
8d342cb8d7ce RDMA/rxe: Cleanup rxe_pool.c
6e4c52e04bc9 RDMA/rxe: Convert read side locking to rcu
e3e46d864b98 RDMA/rxe: Stop lookup of partially built objects
e1fb6b7225d0 RDMA/rxe: Enforce IBA C11-17
2607d042376f RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
ca082913b915 RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
394f24ebc81b RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
3fb445b66e5c RDMA/rxe: Add rxe_srq_cleanup()
4730b0ed751a RDMA/rxe: Remove IB_SRQ_INIT_MASK

These patches are already submitted:
d02e7a7266cf RDMA/rxe: Fix "RDMA/rxe: Cleanup rxe_mcast.c"
569aba28f67c RDMA/rxe: Fix "Replace mr by rkey in responder resources(2)"
             (or whatever you called it)
5e74a5ecfb53 RDMA/rxe: Fix "Replace mr by rkey in responder resources"
007493744865 RDMA/rxe: Fix typo: replace paylen by payload

This patch was submitted to scsi by Bart and addresses long timeouts
that were not rxe-related (the same issue also happens with siw):
cdd844a1ba45 Revert "scsi: scsi_debug: Address races following module load"

If Zhu is not OK with this, let me know what bugs remain that need fixing.

Bob


* Re: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
  2022-04-20 23:04   ` Bob Pearson
@ 2022-04-21  2:13     ` Zhu Yanjun
  2022-04-21 16:09       ` Pearson, Robert B
  0 siblings, 1 reply; 18+ messages in thread
From: Zhu Yanjun @ 2022-04-21  2:13 UTC (permalink / raw)
  To: Bob Pearson; +Cc: Jason Gunthorpe, RDMA mailing list

On Thu, Apr 21, 2022 at 7:04 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> On 4/8/22 13:06, Jason Gunthorpe wrote:
> > On Mon, Apr 04, 2022 at 04:50:50PM -0500, Bob Pearson wrote:
> >> There are several race conditions discovered in the current rdma_rxe
> >> driver.  They mostly relate to races between normal operations and
> >> destroying objects.  This patch series
> >>  - Makes several minor cleanups in rxe_pool.[ch]
> >>  - Adds wait for completions to the paths in verbs APIs which destroy
> >>    objects.
> >>  - Changes read side locking to rcu.
> >>  - Moves object cleanup code to after ref count is zero
> >
> > This all seems fine to me now, except for the question about the
> > tasklets
> >
> > Thanks,
> > Jason
>
> There has been a long delay because of the mr = NULL bug and the locking
> problems. With the following patches applied (last to first) I do not
> see any lockdep warnings, seg faults or anything else in dmesg for
> long runs of
>
>         pyverbs
>         perftests (ib_xxx_bw, ib_xxx_lat)
>         rping (node to node)
>         blktests (srp)
>
> These patches were in v13 of the "Fix race conditions" series. I will send v14 today.
> 8d342cb8d7ce RDMA/rxe: Cleanup rxe_pool.c
> 6e4c52e04bc9 RDMA/rxe: Convert read side locking to rcu
> e3e46d864b98 RDMA/rxe: Stop lookup of partially built objects
> e1fb6b7225d0 RDMA/rxe: Enforce IBA C11-17
> 2607d042376f RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
> ca082913b915 RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
> 394f24ebc81b RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
> 3fb445b66e5c RDMA/rxe: Add rxe_srq_cleanup()
> 4730b0ed751a RDMA/rxe: Remove IB_SRQ_INIT_MASK
>
> These patches are already submitted:
> d02e7a7266cf RDMA/rxe: Fix "RDMA/rxe: Cleanup rxe_mcast.c"
> 569aba28f67c RDMA/rxe: Fix "Replace mr by rkey in responder resources(2)"
>              (or whatever you called it)
> 5e74a5ecfb53 RDMA/rxe: Fix "Replace mr by rkey in responder resources"
> 007493744865 RDMA/rxe: Fix typo: replace paylen by payload
>
> This patch was submitted to scsi by Bart and addresses long timeouts
> that were not rxe-related (the same issue also happens with siw):
> cdd844a1ba45 Revert "scsi: scsi_debug: Address races following module load"
>
> If Zhu is not OK with this, let me know what bugs remain that need fixing.

How did you come to the conclusion that I am not OK with this?

Zhu Yanjun

>
> Bob


* RE: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool
  2022-04-21  2:13     ` Zhu Yanjun
@ 2022-04-21 16:09       ` Pearson, Robert B
  0 siblings, 0 replies; 18+ messages in thread
From: Pearson, Robert B @ 2022-04-21 16:09 UTC (permalink / raw)
  To: Zhu Yanjun, Bob Pearson; +Cc: Jason Gunthorpe, RDMA mailing list

Zhu,

Sorry, I am not trying to imply that you are against this, just that
you are more aware of the current outstanding bug reports.

Bob

-----Original Message-----
From: Zhu Yanjun <zyjzyj2000@gmail.com> 
Sent: Wednesday, April 20, 2022 9:13 PM
To: Bob Pearson <rpearsonhpe@gmail.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>; RDMA mailing list <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH for-next v13 00/10] Fix race conditions in rxe_pool

On Thu, Apr 21, 2022 at 7:04 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> On 4/8/22 13:06, Jason Gunthorpe wrote:
> > On Mon, Apr 04, 2022 at 04:50:50PM -0500, Bob Pearson wrote:
> >> There are several race conditions discovered in the current 
> >> rdma_rxe driver.  They mostly relate to races between normal 
> >> operations and destroying objects.  This patch series
> >>  - Makes several minor cleanups in rxe_pool.[ch]
> >>  - Adds wait for completions to the paths in verbs APIs which destroy
> >>    objects.
> >>  - Changes read side locking to rcu.
> >>  - Moves object cleanup code to after ref count is zero
> >
> > This all seems fine to me now, except for the question about the 
> > tasklets
> >
> > Thanks,
> > Jason
>
> There has been a long delay because of the mr = NULL bug and the 
> locking problems. With the following patches applied (last to first) I 
> do not see any lockdep warnings, seg faults or anything else in dmesg 
> for long runs of
>
>         pyverbs
>         perftests (ib_xxx_bw, ib_xxx_lat)
>         rping (node to node)
>         blktests (srp)
>
> These patches were in v13 of the "Fix race conditions" series. I will send v14 today.
> 8d342cb8d7ce RDMA/rxe: Cleanup rxe_pool.c
> 6e4c52e04bc9 RDMA/rxe: Convert read side locking to rcu
> e3e46d864b98 RDMA/rxe: Stop lookup of partially built objects
> e1fb6b7225d0 RDMA/rxe: Enforce IBA C11-17
> 2607d042376f RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
> ca082913b915 RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
> 394f24ebc81b RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
> 3fb445b66e5c RDMA/rxe: Add rxe_srq_cleanup()
> 4730b0ed751a RDMA/rxe: Remove IB_SRQ_INIT_MASK
>
> These patches are already submitted:
> d02e7a7266cf RDMA/rxe: Fix "RDMA/rxe: Cleanup rxe_mcast.c"
> 569aba28f67c RDMA/rxe: Fix "Replace mr by rkey in responder resources(2)"
>              (or whatever you called it)
> 5e74a5ecfb53 RDMA/rxe: Fix "Replace mr by rkey in responder resources"
> 007493744865 RDMA/rxe: Fix typo: replace paylen by payload
>
> This patch was submitted to scsi by Bart and addresses long timeouts
> that were not rxe-related (the same issue also happens with siw):
> cdd844a1ba45 Revert "scsi: scsi_debug: Address races following module load"
>
> If Zhu is not OK with this, let me know what bugs remain that need fixing.

How did you come to the conclusion that I am not OK with this?

Zhu Yanjun

>
> Bob


end of thread, other threads:[~2022-04-21 16:24 UTC | newest]

Thread overview: 18+ messages
2022-04-04 21:50 [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 01/10] RDMA/rxe: Remove IB_SRQ_INIT_MASK Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 02/10] RDMA/rxe: Add rxe_srq_cleanup() Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 03/10] RDMA/rxe: Check rxe_get() return value Bob Pearson
2022-04-08 17:52   ` Jason Gunthorpe
2022-04-20 22:47     ` Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 04/10] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup() Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 05/10] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup() Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 06/10] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup() Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 07/10] RDMA/rxe: Enforce IBA C11-17 Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 08/10] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
2022-04-04 21:50 ` [PATCH for-next v13 09/10] RDMA/rxe: Convert read side locking to rcu Bob Pearson
2022-04-04 21:51 ` [PATCH for-next v13 10/10] RDMA/rxe: Cleanup rxe_pool.c Bob Pearson
2022-04-08 18:06 ` [PATCH for-next v13 00/10] Fix race conditions in rxe_pool Jason Gunthorpe
2022-04-08 18:15   ` Bob Pearson
2022-04-20 23:04   ` Bob Pearson
2022-04-21  2:13     ` Zhu Yanjun
2022-04-21 16:09       ` Pearson, Robert B
