* [PATCH for-next v12 00/11] Fix race conditions in rxe_pool
@ 2022-03-18  1:55 Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum Bob Pearson
                   ` (10 more replies)
  0 siblings, 11 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Several race conditions have been discovered in the current rdma_rxe
driver.  They mostly relate to races between normal operations and
destroying objects.  This patch series
 - Makes several minor cleanups in rxe_pool.[ch]
 - Adds waits for completion to the paths in verbs APIs which destroy
   objects.
 - Changes read side locking to RCU.
 - Moves object cleanup code to after the ref count reaches zero
   (a sketch of the resulting destroy path follows).
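
Taken together, these changes converge the destroy paths on a common
shape. A rough sketch (illustrative only: rxe_destroy_obj_sketch() and
struct rxe_obj are hypothetical stand-ins for any indexed pool object;
the real calls are introduced in patches 08 and 09):

	static int rxe_destroy_obj_sketch(struct rxe_obj *obj)
	{
		/* hide the object so rxe_pool_get_index() fails for it */
		rxe_disable_lookup(obj);

		/* drop our ref, wait until all other refs are dropped,
		 * then run the type specific cleanup and free the memory
		 */
		rxe_cleanup(obj);

		return 0;
	}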

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v12
  Rebased to current wip/jgg-for-next
  Dropped patches already accepted from v11.
  Moved all object cleanup code to type specific cleanup routines.
  Renamed to match Jason's requests.
  Addressed some other issues raised.
  Kept the contentious rxe_hide() function but renamed it to
  rxe_disable_lookup(), which says what it does. I am still convinced
  this is cleaner than other alternatives, like moving xa_erase() to
  the top of the destroy routines for just the indexed objects.
v11
  Rebased to current for-next.
  Reordered patches and made other changes to respond to issues
  reported by Jason Gunthorpe.
v10
  Rebased to current wip/jgg-for-next.
  Split some patches into smaller ones.
v9
  Corrected issues reported by Jason Gunthorpe.
  Converted locking in rxe_mcast.c and rxe_pool.c to use RCU.
  Split up the patches into smaller changes.
v8
  Fixed an additional race in 3/8 which was not handled correctly.
v7
  Corrected issues reported by Jason Gunthorpe
Link: https://lore.kernel.org/linux-rdma/20211207190947.GH6385@nvidia.com/
Link: https://lore.kernel.org/linux-rdma/20211207191857.GI6385@nvidia.com/
Link: https://lore.kernel.org/linux-rdma/20211207192824.GJ6385@nvidia.com/
v6
  Fixed a kzalloc flags bug.
  Fixed comment bug reported by 'Kernel Test Robot'.
  Changed a type in __rxe_fini() in rxe_pool.c.
v5
  Removed patches already accepted into for-next and addressed comments
  from Jason Gunthorpe.
v4
  Restructured the patch series to convert to xarrays earlier, which
  greatly simplified the changes.
  Rebased to current for-next.
v3
  Changed rxe_alloc() to use GFP_KERNEL.
  Addressed other comments by Jason Gunthorpe.
  Merged the previous 06/10 and 07/10 patches into one since they
  overlapped.
  Added some minor cleanups as 10/10.
v2
  Rebased to current for-next.
  Added 4 additional patches

Bob Pearson (11):
  RDMA/rxe: Replace #define by enum
  RDMA/rxe: Add rxe_srq_cleanup()
  RDMA/rxe: Check rxe_get() return value
  RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
  RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
  RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
  RDMA/rxe: Enforce IBA C11-17
  RDMA/rxe: Stop lookup of partially built objects
  RDMA/rxe: Add wait_for_completion to pool objects
  RDMA/rxe: Convert read side locking to rcu
  RDMA/rxe: Cleanup rxe_pool.c

 drivers/infiniband/sw/rxe/rxe_comp.c  |   3 +-
 drivers/infiniband/sw/rxe/rxe_loc.h   |  16 +--
 drivers/infiniband/sw/rxe/rxe_mr.c    |  13 +--
 drivers/infiniband/sw/rxe/rxe_mw.c    |  62 ++++++-----
 drivers/infiniband/sw/rxe/rxe_pool.c  | 155 ++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_pool.h  |  14 ++-
 drivers/infiniband/sw/rxe/rxe_qp.c    |  22 ++--
 drivers/infiniband/sw/rxe/rxe_req.c   |   3 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |   3 +-
 drivers/infiniband/sw/rxe/rxe_srq.c   |  21 +++-
 drivers/infiniband/sw/rxe/rxe_verbs.c |  73 ++++++------
 drivers/infiniband/sw/rxe/rxe_verbs.h |   7 ++
 12 files changed, 273 insertions(+), 119 deletions(-)

base-commit: 3197706abd053275d2a561cfb7dc8f6cfaf7d02c
-- 
2.32.0



* [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18 18:00   ` Jason Gunthorpe
  2022-03-18  1:55 ` [PATCH for-next v12 02/11] RDMA/rxe: Add rxe_srq_cleanup() Bob Pearson
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the #define IB_SRQ_INIT_MASK is used to distinguish
the rxe_create_srq verb from the rxe_modify_srq verb so that some code
can be shared between these two subroutines.

This commit replaces the #define with an enum which extends
enum ib_srq_attr_mask and makes related changes to the prototypes
to clean up type warnings. The parameter is given an rxe-specific
name.
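
For reference, the ib_srq_attr_mask bits being extended look like
this in include/rdma/ib_verbs.h (values as of this series), which is
why BIT(2) is the first free bit for the rxe-private init flag:

	enum ib_srq_attr_mask {
		IB_SRQ_MAX_WR	= 1 << 0,
		IB_SRQ_LIMIT	= 1 << 1,
	};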

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  8 ++------
 drivers/infiniband/sw/rxe/rxe_srq.c   | 10 +++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.c |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_verbs.h |  6 ++++++
 4 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2ffbe3390668..9067d3b6f1ee 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -159,17 +159,13 @@ void retransmit_timer(struct timer_list *t);
 void rnr_nak_timer(struct timer_list *t);
 
 /* rxe_srq.c */
-#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
-
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
-
+		     struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask);
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
 		      struct rxe_create_srq_resp __user *uresp);
-
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
+		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
 
 void rxe_dealloc(struct ib_device *ib_dev);
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 0c0721f04357..862aa749c93a 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -10,14 +10,14 @@
 #include "rxe_queue.h"
 
 int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask)
+		     struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask)
 {
 	if (srq && srq->error) {
 		pr_warn("srq in error state\n");
 		goto err1;
 	}
 
-	if (mask & IB_SRQ_MAX_WR) {
+	if (mask & RXE_SRQ_MAX_WR) {
 		if (attr->max_wr > rxe->attr.max_srq_wr) {
 			pr_warn("max_wr(%d) > max_srq_wr(%d)\n",
 				attr->max_wr, rxe->attr.max_srq_wr);
@@ -39,7 +39,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 			attr->max_wr = RXE_MIN_SRQ_WR;
 	}
 
-	if (mask & IB_SRQ_LIMIT) {
+	if (mask & RXE_SRQ_LIMIT) {
 		if (attr->srq_limit > rxe->attr.max_srq_wr) {
 			pr_warn("srq_limit(%d) > max_srq_wr(%d)\n",
 				attr->srq_limit, rxe->attr.max_srq_wr);
@@ -54,7 +54,7 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		}
 	}
 
-	if (mask == IB_SRQ_INIT_MASK) {
+	if (mask == RXE_SRQ_INIT) {
 		if (attr->max_sge > rxe->attr.max_srq_sge) {
 			pr_warn("max_sge(%d) > max_srq_sge(%d)\n",
 				attr->max_sge, rxe->attr.max_srq_sge);
@@ -122,7 +122,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 }
 
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
-		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
+		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
 {
 	int err;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 67184b0281a0..5609956d2bc3 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -7,8 +7,8 @@
 #include <linux/dma-mapping.h>
 #include <net/addrconf.h>
 #include <rdma/uverbs_ioctl.h>
+
 #include "rxe.h"
-#include "rxe_loc.h"
 #include "rxe_queue.h"
 #include "rxe_hw_counters.h"
 
@@ -295,7 +295,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 		uresp = udata->outbuf;
 	}
 
-	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
+	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, RXE_SRQ_INIT);
 	if (err)
 		goto err1;
 
@@ -320,13 +320,14 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 }
 
 static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
-			  enum ib_srq_attr_mask mask,
+			  enum ib_srq_attr_mask ibmask,
 			  struct ib_udata *udata)
 {
 	int err;
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_dev *rxe = to_rdev(ibsrq->device);
 	struct rxe_modify_srq_cmd ucmd = {};
+	enum rxe_srq_attr_mask mask = (enum rxe_srq_attr_mask)ibmask;
 
 	if (udata) {
 		if (udata->inlen < sizeof(ucmd))
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index e7eff1ca75e9..34aa013c7801 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -93,6 +93,12 @@ struct rxe_rq {
 	struct rxe_queue	*queue;
 };
 
+enum rxe_srq_attr_mask {
+	RXE_SRQ_MAX_WR		= IB_SRQ_MAX_WR,
+	RXE_SRQ_LIMIT		= IB_SRQ_LIMIT,
+	RXE_SRQ_INIT		= BIT(2),
+};
+
 struct rxe_srq {
 	struct ib_srq		ibsrq;
 	struct rxe_pool_elem	elem;
-- 
2.32.0



* [PATCH for-next v12 02/11] RDMA/rxe: Add rxe_srq_cleanup()
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 03/11] RDMA/rxe: Check rxe_get() return value Bob Pearson
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the cleanup code from rxe_destroy_srq() to rxe_srq_cleanup(),
which is called after all references are dropped, to allow code
depending on the srq object to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  7 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  |  1 +
 drivers/infiniband/sw/rxe/rxe_srq.c   | 11 +++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++++++----------------
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 9067d3b6f1ee..300c702f432a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -37,7 +37,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
 void rxe_cq_disable(struct rxe_cq *cq);
 
-void rxe_cq_cleanup(struct rxe_pool_elem *arg);
+void rxe_cq_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mcast.c */
 struct rxe_mcg *rxe_lookup_mcg(struct rxe_dev *rxe, union ib_gid *mgid);
@@ -81,7 +81,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey);
 int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
 int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
-void rxe_mr_cleanup(struct rxe_pool_elem *arg);
+void rxe_mr_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_mw.c */
 int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
@@ -89,7 +89,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
 struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
-void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
@@ -167,6 +167,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask,
 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+void rxe_srq_cleanup(struct rxe_pool_elem *elem);
 
 void rxe_dealloc(struct ib_device *ib_dev);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87066d04ed18..5963b1429ad8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -46,6 +46,7 @@ static const struct rxe_type_info {
 		.name		= "srq",
 		.size		= sizeof(struct rxe_srq),
 		.elem_offset	= offsetof(struct rxe_srq, elem),
+		.cleanup	= rxe_srq_cleanup,
 		.min_index	= RXE_MIN_SRQ_INDEX,
 		.max_index	= RXE_MAX_SRQ_INDEX,
 		.max_elem	= RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 862aa749c93a..26e7ac35733e 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -154,3 +154,14 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 	srq->rq.queue = NULL;
 	return err;
 }
+
+void rxe_srq_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_srq *srq = container_of(elem, typeof(*srq), elem);
+
+	if (srq->pd)
+		rxe_put(srq->pd);
+
+	if (srq->rq.queue)
+		rxe_queue_cleanup(srq->rq.queue);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 5609956d2bc3..89f4f30f7247 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -286,36 +286,35 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_create_srq_resp __user *uresp = NULL;
 
-	if (init->srq_type != IB_SRQT_BASIC)
-		return -EOPNOTSUPP;
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
 		uresp = udata->outbuf;
 	}
 
+	if (init->srq_type != IB_SRQT_BASIC)
+		return -EOPNOTSUPP;
+
 	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, RXE_SRQ_INIT);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_add_to_pool(&rxe->srq_pool, srq);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	rxe_get(pd);
 	srq->pd = pd;
 
 	err = rxe_srq_from_init(rxe, srq, init, udata, uresp);
 	if (err)
-		goto err2;
+		goto err_cleanup;
 
 	return 0;
 
-err2:
-	rxe_put(pd);
+err_cleanup:
 	rxe_put(srq);
-err1:
+err_out:
 	return err;
 }
 
@@ -340,15 +339,15 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata);
 	if (err)
-		goto err1;
+		goto err_out;
 
 	return 0;
 
-err1:
+err_out:
 	return err;
 }
 
@@ -369,10 +368,6 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	if (srq->rq.queue)
-		rxe_queue_cleanup(srq->rq.queue);
-
-	rxe_put(srq->pd);
 	rxe_put(srq);
 	return 0;
 }
-- 
2.32.0



* [PATCH for-next v12 03/11] RDMA/rxe: Check rxe_get() return value
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 02/11] RDMA/rxe: Add rxe_srq_cleanup() Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 04/11] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup() Bob Pearson
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

In the tasklets (completer, responder, and requester) check the
return value from rxe_get() to detect failures to get a reference.
This only occurs if the qp has had its reference count drop to
zero, which indicates that it should no longer be used. This is
in preparation for an upcoming change that will move the qp cleanup
code to rxe_qp_cleanup().

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 3 ++-
 drivers/infiniband/sw/rxe/rxe_req.c  | 3 ++-
 drivers/infiniband/sw/rxe/rxe_resp.c | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 138b3e7d3a5f..da3a398053b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -562,7 +562,8 @@ int rxe_completer(void *arg)
 	enum comp_state state;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ae5fbc79dd5c..27aba921cc66 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -611,7 +611,8 @@ int rxe_requester(void *arg)
 	struct rxe_ah *ah;
 	struct rxe_av *av;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 16fc7ea1298d..1ed45c192cf5 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1250,7 +1250,8 @@ int rxe_responder(void *arg)
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
 
-	rxe_get(qp);
+	if (!rxe_get(qp))
+		return -EAGAIN;
 
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;
 
-- 
2.32.0



* [PATCH for-next v12 04/11] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup()
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (2 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 03/11] RDMA/rxe: Check rxe_get() return value Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 05/11] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup() Bob Pearson
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the code from rxe_qp_destroy() to rxe_qp_do_cleanup().
This allows flows holding references to the qp to complete before
the qp object is torn down.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |  1 -
 drivers/infiniband/sw/rxe/rxe_qp.c    | 12 ++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 -
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 300c702f432a..ddf91d3d5527 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -114,7 +114,6 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr,
 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask);
 void rxe_qp_error(struct rxe_qp *qp);
 int rxe_qp_chk_destroy(struct rxe_qp *qp);
-void rxe_qp_destroy(struct rxe_qp *qp);
 void rxe_qp_cleanup(struct rxe_pool_elem *elem);
 
 static inline int qp_num(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 62acf890af6c..f5200777399c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -777,9 +777,11 @@ int rxe_qp_chk_destroy(struct rxe_qp *qp)
 	return 0;
 }
 
-/* called by the destroy qp verb */
-void rxe_qp_destroy(struct rxe_qp *qp)
+/* called when the last reference to the qp is dropped */
+static void rxe_qp_do_cleanup(struct work_struct *work)
 {
+	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
 	qp->valid = 0;
 	qp->qp_timeout_jiffies = 0;
 	rxe_cleanup_task(&qp->resp.task);
@@ -798,12 +800,6 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 		__rxe_do_task(&qp->comp.task);
 		__rxe_do_task(&qp->req.task);
 	}
-}
-
-/* called when the last reference to the qp is dropped */
-static void rxe_qp_do_cleanup(struct work_struct *work)
-{
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
 
 	if (qp->sq.queue)
 		rxe_queue_cleanup(qp->sq.queue);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 89f4f30f7247..9a3c33dad979 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -491,7 +491,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	if (ret)
 		return ret;
 
-	rxe_qp_destroy(qp);
 	rxe_put(qp);
 	return 0;
 }
-- 
2.32.0



* [PATCH for-next v12 05/11] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup()
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (3 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 04/11] RDMA/rxe: Move qp cleanup code to rxe_qp_do_cleanup() Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 06/11] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup() Bob Pearson
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move the code which tears down an mr to rxe_mr_cleanup() to allow
operations holding a reference to the mr to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 60a31b718774..fc3942e04a1f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -683,14 +683,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 
-	if (atomic_read(&mr->num_mw) > 0) {
-		pr_warn("%s: Attempt to deregister an MR while bound to MWs\n",
-			__func__);
+	/* See IBA 10.6.7.2.6 */
+	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
-	}
 
-	mr->state = RXE_MR_STATE_INVALID;
-	rxe_put(mr_pd(mr));
 	rxe_put(mr);
 
 	return 0;
@@ -700,6 +696,8 @@ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
 {
 	struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
 
+	rxe_put(mr_pd(mr));
+
 	ib_umem_release(mr->umem);
 
 	if (mr->cur_map_set)
-- 
2.32.0



* [PATCH for-next v12 06/11] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup()
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (4 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 05/11] RDMA/rxe: Move mr cleanup code to rxe_mr_cleanup() Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 07/11] RDMA/rxe: Enforce IBA C11-17 Bob Pearson
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Move code from rxe_dealloc_mw() to rxe_mw_cleanup() to allow
flows which hold a reference to the mw to complete.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mw.c   | 57 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.c |  1 +
 2 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index c86b2efd58f2..ba3f94c69171 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -28,40 +28,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 	return 0;
 }
 
-static void rxe_do_dealloc_mw(struct rxe_mw *mw)
-{
-	if (mw->mr) {
-		struct rxe_mr *mr = mw->mr;
-
-		mw->mr = NULL;
-		atomic_dec(&mr->num_mw);
-		rxe_put(mr);
-	}
-
-	if (mw->qp) {
-		struct rxe_qp *qp = mw->qp;
-
-		mw->qp = NULL;
-		rxe_put(qp);
-	}
-
-	mw->access = 0;
-	mw->addr = 0;
-	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
-}
-
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-
-	spin_lock_bh(&mw->lock);
-	rxe_do_dealloc_mw(mw);
-	spin_unlock_bh(&mw->lock);
 
 	rxe_put(mw);
-	rxe_put(pd);
 
 	return 0;
 }
@@ -328,3 +299,31 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 
 	return mw;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_mw *mw = container_of(elem, typeof(*mw), elem);
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	rxe_put(pd);
+
+	if (mw->mr) {
+		struct rxe_mr *mr = mw->mr;
+
+		mw->mr = NULL;
+		atomic_dec(&mr->num_mw);
+		rxe_put(mr);
+	}
+
+	if (mw->qp) {
+		struct rxe_qp *qp = mw->qp;
+
+		mw->qp = NULL;
+		rxe_put(qp);
+	}
+
+	mw->access = 0;
+	mw->addr = 0;
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 5963b1429ad8..0fdde3d46949 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -83,6 +83,7 @@ static const struct rxe_type_info {
 		.name		= "mw",
 		.size		= sizeof(struct rxe_mw),
 		.elem_offset	= offsetof(struct rxe_mw, elem),
+		.cleanup	= rxe_mw_cleanup,
 		.min_index	= RXE_MIN_MW_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.max_elem	= RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,
-- 
2.32.0



* [PATCH for-next v12 07/11] RDMA/rxe: Enforce IBA C11-17
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (5 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 06/11] RDMA/rxe: Move mw cleanup code to rxe_mw_cleanup() Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Add a counter to keep track of the number of WQs connected to
a CQ and return an error if destroy_cq() is called while the
counter is non-zero.
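
The counter's lifecycle, pieced together from the hunks below (a
sketch; error paths elided):

	/* rxe_qp_from_init(): a new QP attaches to its CQs */
	atomic_inc(&rcq->num_wq);
	atomic_inc(&scq->num_wq);

	/* rxe_qp_do_cleanup(): the QP detaches when its last ref drops */
	if (qp->scq)
		atomic_dec(&qp->scq->num_wq);
	if (qp->rcq)
		atomic_dec(&qp->rcq->num_wq);

	/* rxe_destroy_cq(): refuse to destroy a CQ still in use */
	if (atomic_read(&cq->num_wq))
		return -EINVAL;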

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_qp.c    | 14 ++++++++++++--
 drivers/infiniband/sw/rxe/rxe_verbs.c |  6 ++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |  1 +
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index f5200777399c..18861b9edbfd 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -334,6 +334,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	qp->scq			= scq;
 	qp->srq			= srq;
 
+	atomic_inc(&rcq->num_wq);
+	atomic_inc(&scq->num_wq);
+
 	rxe_qp_init_misc(rxe, qp, init);
 
 	err = rxe_qp_init_req(rxe, qp, init, udata, uresp);
@@ -353,6 +356,9 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 	rxe_queue_cleanup(qp->sq.queue);
 	qp->sq.queue = NULL;
 err1:
+	atomic_dec(&rcq->num_wq);
+	atomic_dec(&scq->num_wq);
+
 	qp->pd = NULL;
 	qp->rcq = NULL;
 	qp->scq = NULL;
@@ -810,10 +816,16 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
 	if (qp->rq.queue)
 		rxe_queue_cleanup(qp->rq.queue);
 
-	if (qp->scq)
+	if (qp->scq) {
+		atomic_dec(&qp->scq->num_wq);
 		rxe_put(qp->scq);
+	}
+
-	if (qp->rcq)
+	if (qp->rcq) {
+		atomic_dec(&qp->rcq->num_wq);
 		rxe_put(qp->rcq);
+	}
+
 	if (qp->pd)
 		rxe_put(qp->pd);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 9a3c33dad979..4c082ac439c6 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -802,6 +802,12 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 
+	/* See IBA C11-17: The CI shall return an error if this Verb is
+	 * invoked while a Work Queue is still associated with the CQ.
+	 */
+	if (atomic_read(&cq->num_wq))
+		return -EINVAL;
+
 	rxe_cq_disable(cq);
 
 	rxe_put(cq);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 34aa013c7801..5764aeed921a 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -67,6 +67,7 @@ struct rxe_cq {
 	bool			is_dying;
 	bool			is_user;
 	struct tasklet_struct	comp_task;
+	atomic_t		num_wq;
 };
 
 enum wqe_state {
-- 
2.32.0



* [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (6 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 07/11] RDMA/rxe: Enforce IBA C11-17 Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18 18:07   ` Jason Gunthorpe
  2022-03-18  1:55 ` [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects Bob Pearson
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Currently the rdma_rxe driver has a security weakness: it gives
partially initialized objects indices, allowing external actors to
gain access to them by sending packets which refer to their index
(e.g. qpn, rkey, etc.), causing unpredictable results.

This patch adds two new APIs, rxe_finalize(obj) and
rxe_disable_lookup(obj), which enable and disable looking up pool
objects from their indices using rxe_pool_get_index(). By default
lookup of an object is disabled. These APIs are used for the object
types which have indices: AH, QP, MR, and MW. rxe_finalize() is
called in the create verbs after the object is fully initialized,
and rxe_disable_lookup() as early as possible in the destroy verbs.

Note: the sequence
	rxe_disable_lookup(obj);
	rxe_put(obj);
in the destroy verbs stops lookups of the object by clearing its
pointer in the xarray with __xa_store() before a possibly long wait
until the last reference is dropped somewhere else and the xarray
entry is erased with xa_erase().
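
In xarray terms, the index lifecycle becomes (a sketch; the real
definitions are in the rxe_pool.c hunk below):

	/* create verb: reserve an index but store NULL so lookups fail */
	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
			      &pool->next, GFP_KERNEL);
	/* ... finish initializing the object ... */
	rxe_finalize(obj);		/* __xa_store(xa, index, elem, ...) */

	/* destroy verb: hide the object first, then drop the reference */
	rxe_disable_lookup(obj);	/* __xa_store(xa, index, NULL, ...) */
	rxe_put(obj);			/* xa_erase() runs in the kref release */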

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  1 +
 drivers/infiniband/sw/rxe/rxe_mw.c    |  3 +++
 drivers/infiniband/sw/rxe/rxe_pool.c  | 36 ++++++++++++++++++++++++---
 drivers/infiniband/sw/rxe/rxe_pool.h  |  9 ++++---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 10 ++++++++
 5 files changed, 53 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fc3942e04a1f..8059f31882ae 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -687,6 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	if (atomic_read(&mr->num_mw) > 0)
 		return -EINVAL;
 
+	rxe_disable_lookup(mr);
 	rxe_put(mr);
 
 	return 0;
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index ba3f94c69171..2464952a04d4 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -25,6 +25,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
 			RXE_MW_STATE_FREE : RXE_MW_STATE_VALID;
 	spin_lock_init(&mw->lock);
 
+	rxe_finalize(mw);
+
 	return 0;
 }
 
@@ -32,6 +34,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct rxe_mw *mw = to_rmw(ibmw);
 
+	rxe_disable_lookup(mw);
 	rxe_put(mw);
 
 	return 0;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 0fdde3d46949..87b89d263f80 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -140,7 +140,9 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	/* allocate index in array but leave pointer as NULL so it
+	 * can't be looked up until rxe_finalize() is called */
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -168,7 +170,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_cnt;
@@ -202,8 +204,12 @@ static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
 	struct rxe_pool *pool = elem->pool;
+	struct xarray *xa = &pool->xa;
+	unsigned long flags;
 
-	xa_erase(&pool->xa, elem->index);
+	xa_lock_irqsave(xa, flags);
+	__xa_erase(&pool->xa, elem->index);
+	xa_unlock_irqrestore(xa, flags);
 
 	if (pool->cleanup)
 		pool->cleanup(elem);
@@ -223,3 +229,27 @@ int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
+
+void __rxe_finalize(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
+
+void __rxe_disable_lookup(struct rxe_pool_elem *elem)
+{
+	struct xarray *xa = &elem->pool->xa;
+	unsigned long flags;
+	void *ret;
+
+	xa_lock_irqsave(xa, flags);
+	ret = __xa_store(&elem->pool->xa, elem->index, NULL, GFP_KERNEL);
+	xa_unlock_irqrestore(xa, flags);
+	WARN_ON(xa_err(ret));
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 24bcc786c1b3..aa66d0eea13b 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -63,20 +63,23 @@ void *rxe_alloc(struct rxe_pool *pool);
 
 /* connect already allocated object to pool */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
-
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)
 
 /* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);
 
 int __rxe_get(struct rxe_pool_elem *elem);
-
 #define rxe_get(obj) __rxe_get(&(obj)->elem)
 
 int __rxe_put(struct rxe_pool_elem *elem);
-
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
+void __rxe_finalize(struct rxe_pool_elem *elem);
+#define rxe_finalize(obj) __rxe_finalize(&(obj)->elem)
+
+void __rxe_disable_lookup(struct rxe_pool_elem *elem);
+#define rxe_disable_lookup(obj) __rxe_disable_lookup(&(obj)->elem)
+
 #endif /* RXE_POOL_H */
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 4c082ac439c6..6a83b4a630f5 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah,
 	}
 
 	rxe_init_av(init_attr->ah_attr, &ah->av);
+	rxe_finalize(ah);
+
 	return 0;
 }
 
@@ -228,6 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 {
 	struct rxe_ah *ah = to_rah(ibah);
 
+	rxe_disable_lookup(ah);
 	rxe_put(ah);
 	return 0;
 }
@@ -435,6 +438,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	if (err)
 		goto qp_init;
 
+	rxe_finalize(qp);
 	return 0;
 
 qp_init:
@@ -487,6 +491,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	struct rxe_qp *qp = to_rqp(ibqp);
 	int ret;
 
+	rxe_disable_lookup(qp);
 	ret = rxe_qp_chk_destroy(qp);
 	if (ret)
 		return ret;
@@ -905,6 +910,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
+	rxe_finalize(mr);
 
 	return &mr->ibmr;
 }
@@ -933,6 +939,8 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err3:
@@ -965,6 +973,8 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 	if (err)
 		goto err2;
 
+	rxe_finalize(mr);
+
 	return &mr->ibmr;
 
 err2:
-- 
2.32.0



* [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (7 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18 18:10   ` Jason Gunthorpe
  2022-03-18  1:55 ` [PATCH for-next v12 10/11] RDMA/rxe: Convert read side locking to rcu Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 11/11] RDMA/rxe: Cleanup rxe_pool.c Bob Pearson
  10 siblings, 1 reply; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Add a wait for completion to the destroy/dealloc verbs to ensure
that all references have been dropped before returning to rdma_core.
Implement a new rxe_pool API rxe_cleanup() which drops a reference
to the object and then waits for all other references to be dropped.
After that it cleans up the object and, if locally allocated, frees
the memory.

Combined with deferring cleanup code to type-specific cleanup
routines, this allows all pending activity referring to objects to
complete before returning to rdma_core.
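
The handshake between the last reference and the waiting destroy verb
looks like this (a condensed sketch of the rxe_pool.c hunk below):

	/* last-put side: the kref release no longer frees anything;
	 * it erases the index and signals the waiter
	 */
	__xa_erase(&pool->xa, elem->index);	/* under the xa lock */
	complete(&elem->complete);

	/* destroy-verb side: drop our ref, block until the release
	 * above has run, then clean up and free the memory
	 */
	__rxe_put(elem);
	wait_for_completion_timeout(&elem->complete, timeout);
	if (pool->cleanup)
		pool->cleanup(elem);
	if (pool->flags & RXE_POOL_ALLOC)
		kfree(elem->obj);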

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c    |  2 +-
 drivers/infiniband/sw/rxe/rxe_pool.c  | 29 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_pool.h  |  5 +++++
 drivers/infiniband/sw/rxe/rxe_verbs.c | 24 +++++++++++-----------
 5 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 8059f31882ae..94416e76c5d9 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -688,7 +688,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 		return -EINVAL;
 
 	rxe_disable_lookup(mr);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 
 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 2464952a04d4..f439548c8945 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -35,7 +35,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	struct rxe_mw *mw = to_rmw(ibmw);
 
 	rxe_disable_lookup(mw);
-	rxe_put(mw);
+	rxe_cleanup(mw);
 
 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 87b89d263f80..6f39f74233d2 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -6,6 +6,8 @@
 
 #include "rxe.h"
 
+#define RXE_POOL_TIMEOUT	(200)
+#define RXE_POOL_MAX_TIMEOUTS	(3)
 #define RXE_POOL_ALIGN		(16)
 
 static const struct rxe_type_info {
@@ -139,6 +141,7 @@ void *rxe_alloc(struct rxe_pool *pool)
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
 	/* allocate index in array but leave pointer as NULL so it
 	 * can't be looked up until rxe_finalize() is called */
@@ -169,6 +172,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
+	init_completion(&elem->complete);
 
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
@@ -211,6 +215,29 @@ static void rxe_elem_release(struct kref *kref)
 	__xa_erase(&pool->xa, elem->index);
 	xa_unlock_irqrestore(xa, flags);
 
+	complete(&elem->complete);
+}
+
+int __rxe_cleanup(struct rxe_pool_elem *elem)
+{
+	struct rxe_pool *pool = elem->pool;
+	static int timeout = RXE_POOL_TIMEOUT;
+	int ret, err = 0;
+
+	__rxe_put(elem);
+
+	if (timeout) {
+		ret = wait_for_completion_timeout(&elem->complete, timeout);
+		if (!ret) {
+			pr_warn("Timed out waiting for %s#%d to complete\n",
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}
+
 	if (pool->cleanup)
 		pool->cleanup(elem);
 
@@ -218,6 +245,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);
 
 	atomic_dec(&pool->num_elem);
+
+	return err;
 }
 
 int __rxe_get(struct rxe_pool_elem *elem)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index aa66d0eea13b..3b2c80d5345a 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void			*obj;
 	struct kref		ref_cnt;
 	struct list_head	list;
+	struct completion	complete;
 	u32			index;
 };
 
@@ -37,6 +38,7 @@ struct rxe_pool {
 	void			(*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags	flags;
 	enum rxe_elem_type	type;
+	unsigned int		timeouts;
 
 	unsigned int		max_elem;
 	atomic_t		num_elem;
@@ -74,6 +76,9 @@ int __rxe_get(struct rxe_pool_elem *elem);
 int __rxe_put(struct rxe_pool_elem *elem);
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
+int __rxe_cleanup(struct rxe_pool_elem *elem);
+#define rxe_cleanup(obj) __rxe_cleanup(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
 void __rxe_finalize(struct rxe_pool_elem *elem);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 6a83b4a630f5..a08d7b0c6221 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);
 
-	rxe_put(uc);
+	rxe_cleanup(uc);
 }
 
 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	rxe_put(pd);
+	rxe_cleanup(pd);
 	return 0;
 }
 
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
 					 sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_cleanup(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -231,7 +231,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 	struct rxe_ah *ah = to_rah(ibah);
 
 	rxe_disable_lookup(ah);
-	rxe_put(ah);
+	rxe_cleanup(ah);
 	return 0;
 }
 
@@ -316,7 +316,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 	return 0;
 
 err_cleanup:
-	rxe_put(srq);
+	rxe_cleanup(srq);
 err_out:
 	return err;
 }
@@ -371,7 +371,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 {
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 
-	rxe_put(srq);
+	rxe_cleanup(srq);
 	return 0;
 }
 
@@ -442,7 +442,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	return 0;
 
 qp_init:
-	rxe_put(qp);
+	rxe_cleanup(qp);
 	return err;
 }
 
@@ -491,12 +491,12 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	struct rxe_qp *qp = to_rqp(ibqp);
 	int ret;
 
-	rxe_disable_lookup(qp);
 	ret = rxe_qp_chk_destroy(qp);
 	if (ret)
 		return ret;
 
-	rxe_put(qp);
+	rxe_disable_lookup(qp);
+	rxe_cleanup(qp);
 	return 0;
 }
 
@@ -815,7 +815,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 
 	rxe_cq_disable(cq);
 
-	rxe_put(cq);
+	rxe_cleanup(cq);
 	return 0;
 }
 
@@ -945,7 +945,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 
 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -979,7 +979,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 
 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_cleanup(mr);
 err1:
 	return ERR_PTR(err);
 }
-- 
2.32.0



* [PATCH for-next v12 10/11] RDMA/rxe: Convert read side locking to rcu
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (8 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  2022-03-18  1:55 ` [PATCH for-next v12 11/11] RDMA/rxe: Cleanup rxe_pool.c Bob Pearson
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Use rcu_read_lock() to protect read-side operations in rxe_pool.c.
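
The resulting reader/free pairing (a sketch; both sides appear in
the hunk below):

	/* read side: lockless lookup; take a ref only if still live */
	rcu_read_lock();
	elem = xa_load(&pool->xa, index);
	if (elem && kref_get_unless_zero(&elem->ref_cnt))
		obj = elem->obj;
	rcu_read_unlock();

	/* free side: defer the kfree() past any RCU read-side critical
	 * sections that may still hold a pointer to the object
	 */
	kfree_rcu(elem->obj);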

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 6f39f74233d2..a2c74beceeae 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -190,16 +190,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	struct xarray *xa = &pool->xa;
-	unsigned long flags;
 	void *obj;
 
-	xa_lock_irqsave(xa, flags);
+	rcu_read_lock();
 	elem = xa_load(xa, index);
 	if (elem && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
 	else
 		obj = NULL;
-	xa_unlock_irqrestore(xa, flags);
+	rcu_read_unlock();
 
 	return obj;
 }
@@ -242,7 +241,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 		pool->cleanup(elem);
 
 	if (pool->flags & RXE_POOL_ALLOC)
-		kfree(elem->obj);
+		kfree_rcu(elem->obj);
 
 	atomic_dec(&pool->num_elem);
 
-- 
2.32.0



* [PATCH for-next v12 11/11] RDMA/rxe: Cleanup rxe_pool.c
  2022-03-18  1:55 [PATCH for-next v12 00/11] Fix race conditions in rxe_pool Bob Pearson
                   ` (9 preceding siblings ...)
  2022-03-18  1:55 ` [PATCH for-next v12 10/11] RDMA/rxe: Convert read side locking to rcu Bob Pearson
@ 2022-03-18  1:55 ` Bob Pearson
  10 siblings, 0 replies; 15+ messages in thread
From: Bob Pearson @ 2022-03-18  1:55 UTC (permalink / raw)
  To: jgg, zyjzyj2000, linux-rdma; +Cc: Bob Pearson

Minor cleanup of rxe_pool.c. Add kernel-doc comment headers for
the subroutines. Increase the alignment of pool elements.
Convert some printk()s to WARN_ON()s.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 81 ++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index a2c74beceeae..268757a106ce 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -8,7 +8,7 @@
 
 #define RXE_POOL_TIMEOUT	(200)
 #define RXE_POOL_MAX_TIMEOUTS	(3)
-#define RXE_POOL_ALIGN		(16)
+#define RXE_POOL_ALIGN		(64)
 
 static const struct rxe_type_info {
 	const char *name;
@@ -120,24 +120,35 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 	WARN_ON(!xa_empty(&pool->xa));
 }
 
+/**
+ * rxe_alloc - allocate a new pool object
+ * @pool: object pool
+ *
+ * Context: in task.
+ * Returns: object on success else an ERR_PTR
+ */
 void *rxe_alloc(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
 	void *obj;
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
-		return NULL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	obj = kzalloc(pool->elem_size, GFP_KERNEL);
-	if (!obj)
-		goto err_cnt;
+	if (!obj) {
+		err = -ENOMEM;
+		goto err_dec;
+	}
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
-
 	elem->pool = pool;
 	elem->obj = obj;
 	kref_init(&elem->ref_cnt);
@@ -154,20 +165,32 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 err_free:
 	kfree(obj);
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return NULL;
+err_out:
+	return ERR_PTR(err);
 }
 
+/**
+ * __rxe_add_to_pool - add rdma-core allocated object to rxe object pool
+ * @pool: object pool
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Context: in task.
+ * Returns: 0 on success else an error
+ */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
-		return -EINVAL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
@@ -177,15 +200,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
-		goto err_cnt;
+		goto err_dec;
 
 	return 0;
 
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return -EINVAL;
+err_out:
+	return err;
 }
 
+/**
+ * rxe_pool_get_index - find object in pool with given index
+ * @pool: object pool
+ * @index: index
+ *
+ * Returns: object on success else NULL
+ */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
@@ -248,16 +279,32 @@ int __rxe_cleanup(struct rxe_pool_elem *elem)
 	return err;
 }
 
+/**
+ * __rxe_get - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }
 
+/**
+ * __rxe_put - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
 
+/**
+ * __rxe_finalize - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_finalize(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;
@@ -270,6 +317,10 @@ void __rxe_finalize(struct rxe_pool_elem *elem)
 	WARN_ON(xa_err(ret));
 }
 
+/**
+ * __rxe_disable_lookup - disable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ */
 void __rxe_disable_lookup(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;
-- 
2.32.0



* Re: [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum
  2022-03-18  1:55 ` [PATCH for-next v12 01/11] RDMA/rxe: Replace #define by enum Bob Pearson
@ 2022-03-18 18:00   ` Jason Gunthorpe
  0 siblings, 0 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2022-03-18 18:00 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Thu, Mar 17, 2022 at 08:55:04PM -0500, Bob Pearson wrote:
> Currently the #define IB_SRQ_INIT_MASK is used to distinguish
> the rxe_create_srq verb from the rxe_modify_srq verb so that some code
> can be shared between these two subroutines.
> 
> This commit replaces the #define with an enum which extends
> enum ib_srq_attr_mask and makes related changes to prototypes
> to clean up type warnings. The parameter is given a rxe specific
> name.
> 
> Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
>  drivers/infiniband/sw/rxe/rxe_loc.h   |  8 ++------
>  drivers/infiniband/sw/rxe/rxe_srq.c   | 10 +++++-----
>  drivers/infiniband/sw/rxe/rxe_verbs.c |  7 ++++---
>  drivers/infiniband/sw/rxe/rxe_verbs.h |  6 ++++++
>  4 files changed, 17 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
> index 2ffbe3390668..9067d3b6f1ee 100644
> +++ b/drivers/infiniband/sw/rxe/rxe_loc.h
> @@ -159,17 +159,13 @@ void retransmit_timer(struct timer_list *t);
>  void rnr_nak_timer(struct timer_list *t);
>  
>  /* rxe_srq.c */
> -#define IB_SRQ_INIT_MASK (~IB_SRQ_LIMIT)
> -
>  int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
> -		     struct ib_srq_attr *attr, enum ib_srq_attr_mask mask);
> -
> +		     struct ib_srq_attr *attr, enum rxe_srq_attr_mask mask);

It is very old code that is passing around an enum to indicate a
bitfield; this is not the correct way.

Please just change these to unsigned int.

> +enum rxe_srq_attr_mask {
> +	RXE_SRQ_MAX_WR		= IB_SRQ_MAX_WR,
> +	RXE_SRQ_LIMIT		= IB_SRQ_LIMIT,
> +	RXE_SRQ_INIT		= BIT(2),
> +};

But this seems pretty dodgy?

Why not add another parameter, or maybe split the function or
something?

Jason


* Re: [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects
  2022-03-18  1:55 ` [PATCH for-next v12 08/11] RDMA/rxe: Stop lookup of partially built objects Bob Pearson
@ 2022-03-18 18:07   ` Jason Gunthorpe
  0 siblings, 0 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2022-03-18 18:07 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Thu, Mar 17, 2022 at 08:55:11PM -0500, Bob Pearson wrote:
> diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
> index fc3942e04a1f..8059f31882ae 100644
> +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
> @@ -687,6 +687,7 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
>  	if (atomic_read(&mr->num_mw) > 0)
>  		return -EINVAL;
>  
> +	rxe_disable_lookup(mr);
>  	rxe_put(mr);
>  
>  	return 0;

  
> @@ -32,6 +34,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
>  {
>  	struct rxe_mw *mw = to_rmw(ibmw);
>  
> +	rxe_disable_lookup(mw);
>  	rxe_put(mw);
>  
>  	return 0;

> @@ -487,6 +491,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
>  	struct rxe_qp *qp = to_rqp(ibqp);
>  	int ret;
>  
> +	rxe_disable_lookup(qp);
>  	ret = rxe_qp_chk_destroy(qp);
>  	if (ret)
>  		return ret;

All the places doing rxe_disable_lookup() immediately follow it with
rxe_put(), except this one. This one is buggy and should be ordered
after.

If all places do rxe_disable_lookup()/rxe_put() then I'm back to what
I said earlier: this should just be a function that does xa_erase()
and forget about 'disable'.

Putting the xa_erase in the kref release is a mistake.

Jason


* Re: [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects
  2022-03-18  1:55 ` [PATCH for-next v12 09/11] RDMA/rxe: Add wait_for_completion to pool objects Bob Pearson
@ 2022-03-18 18:10   ` Jason Gunthorpe
  0 siblings, 0 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2022-03-18 18:10 UTC (permalink / raw)
  To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma

On Thu, Mar 17, 2022 at 08:55:12PM -0500, Bob Pearson wrote:

> +int __rxe_cleanup(struct rxe_pool_elem *elem)
> +{
> +	struct rxe_pool *pool = elem->pool;
> +	static int timeout = RXE_POOL_TIMEOUT;
> +	int ret, err = 0;
> +
> +	__rxe_put(elem);

Like here, for instance, we could just put the xa_erase().

And why is it safe to call put then continue to touch elem? There is
a kfree(elem) in rxe_elem_release().

Jason

