linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next 0/2] RDMA/qedr: Fix memory leaks and synchronization
@ 2019-10-03 12:03 Michal Kalderon
  2019-10-03 12:03 ` [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr Michal Kalderon
  2019-10-03 12:03 ` [PATCH rdma-next 2/2] RDMA/qedr: Fix memory leak in user qp and mr Michal Kalderon
  0 siblings, 2 replies; 9+ messages in thread
From: Michal Kalderon @ 2019-10-03 12:03 UTC (permalink / raw)
  To: michal.kalderon, aelior, dledford, jgg; +Cc: linux-rdma

Several leaks and issues were found when running iWARP with kmemleak;
some of them apply to RoCE as well.

This series fixes memory leaks and incorrect synchronization methods
that were used to wait for iWARP CM related events.

Michal Kalderon (2):
  RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  RDMA/qedr: Fix memory leak in user qp and mr

 drivers/infiniband/hw/qedr/qedr.h       |  23 ++++--
 drivers/infiniband/hw/qedr/qedr_iw_cm.c | 120 +++++++++++++++++++++-----------
 drivers/infiniband/hw/qedr/verbs.c      |  54 +++++++-------
 3 files changed, 128 insertions(+), 69 deletions(-)

-- 
2.14.5



* [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-03 12:03 [PATCH rdma-next 0/2] RDMA/qedr: Fix memory leaks and synchronization Michal Kalderon
@ 2019-10-03 12:03 ` Michal Kalderon
  2019-10-03 16:16   ` Jason Gunthorpe
  2019-10-03 12:03 ` [PATCH rdma-next 2/2] RDMA/qedr: Fix memory leak in user qp and mr Michal Kalderon
  1 sibling, 1 reply; 9+ messages in thread
From: Michal Kalderon @ 2019-10-03 12:03 UTC (permalink / raw)
  To: michal.kalderon, aelior, dledford, jgg; +Cc: linux-rdma, Ariel Elior

Re-design the reference counting and synchronization methods of the
iWARP CM related objects, to ensure operations are synchronized
correctly and that the memory allocated for the "ep" is released
(it was previously leaked).
Also make sure QP memory is not released before the ep has finished
accessing it.

Whereas the QP object is created/destroyed by external operations,
the ep is created/destroyed by internal operations and represents
the TCP connection associated with the QP.

QP destruction flow:
- needs to wait for ep establishment to complete (either successfully
  or with an error)
- needs to wait for ep disconnect to be fully posted, to avoid a
  race condition of disconnect being called after reset.
- neither of the operations above always happens, so we use atomic
  flags to indicate whether the QP destruction flow needs to wait
  for these completions or not. If destroy is called before these
  operations began, the flows check the flags and do not execute
  them (connect/disconnect).

We use a completion structure to wait for the completions mentioned
above.

The QP refcnt was converted to a kref object. A kref was added to the
ep to handle the additional worker thread accessing it.
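
For illustration, a minimal sketch of the connect-side flag/completion
handshake described above (not part of the patch; the real changes are
in the diff below, and the example_* helpers are made-up names used
only for this illustration):

	/* CM side (connect/accept): claim the flag before starting. If
	 * the destroy flow already claimed it, the QP is being torn
	 * down, so do not start the operation at all.
	 */
	static int example_cm_start(struct qedr_qp *qp)
	{
		if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
				     &qp->iwarp_cm_flags))
			return -EINVAL; /* QP already being destroyed */
		return 0;
	}

	/* CM side: once connect/accept completes (successfully or not),
	 * wake up a destroy flow that may be waiting.
	 */
	static void example_cm_done(struct qedr_qp *qp)
	{
		complete(&qp->iwarp_cm_comp);
	}

	/* Destroy side: if the CM flow claimed the flag first, it owns
	 * posting the completion, so wait for it; otherwise there is
	 * nothing to wait for.
	 */
	static void example_destroy_wait(struct qedr_qp *qp)
	{
		if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
				     &qp->iwarp_cm_flags))
			wait_for_completion(&qp->iwarp_cm_comp);
	}

The disconnect flag (QEDR_IWARP_CM_WAIT_FOR_DISCONNECT) follows the
same pattern.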

Memory Leaks - https://www.spinics.net/lists/linux-rdma/msg83762.html
Reported-by: Chuck Lever <chuck.lever@oracle.com>

Concurrency not managed correctly -
https://www.spinics.net/lists/linux-rdma/msg67949.html
Reported-by: Jason Gunthorpe <jgg@ziepe.ca>

Fixes: e411e0587e0d ("RDMA/qedr: Add iWARP connection management functions")
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
---
 drivers/infiniband/hw/qedr/qedr.h       |  23 ++++--
 drivers/infiniband/hw/qedr/qedr_iw_cm.c | 120 +++++++++++++++++++++-----------
 drivers/infiniband/hw/qedr/verbs.c      |  42 ++++++-----
 3 files changed, 118 insertions(+), 67 deletions(-)

diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h
index 0cfd849b13d6..8e927f6c1520 100644
--- a/drivers/infiniband/hw/qedr/qedr.h
+++ b/drivers/infiniband/hw/qedr/qedr.h
@@ -40,6 +40,7 @@
 #include <linux/qed/qed_rdma_if.h>
 #include <linux/qed/qede_rdma.h>
 #include <linux/qed/roce_common.h>
+#include <linux/completion.h>
 #include "qedr_hsi_rdma.h"
 
 #define QEDR_NODE_DESC "QLogic 579xx RoCE HCA"
@@ -377,10 +378,20 @@ enum qedr_qp_err_bitmap {
 	QEDR_QP_ERR_RQ_PBL_FULL = 32,
 };
 
+enum qedr_qp_create_type {
+	QEDR_QP_CREATE_NONE,
+	QEDR_QP_CREATE_USER,
+	QEDR_QP_CREATE_KERNEL,
+};
+
+enum qedr_iwarp_cm_flags {
+	QEDR_IWARP_CM_WAIT_FOR_CONNECT    = BIT(0),
+	QEDR_IWARP_CM_WAIT_FOR_DISCONNECT = BIT(1),
+};
+
 struct qedr_qp {
 	struct ib_qp ibqp;	/* must be first */
 	struct qedr_dev *dev;
-	struct qedr_iw_ep *ep;
 	struct qedr_qp_hwq_info sq;
 	struct qedr_qp_hwq_info rq;
 
@@ -395,6 +406,7 @@ struct qedr_qp {
 	u32 id;
 	struct qedr_pd *pd;
 	enum ib_qp_type qp_type;
+	enum qedr_qp_create_type create_type;
 	struct qed_rdma_qp *qed_qp;
 	u32 qp_id;
 	u16 icid;
@@ -437,8 +449,11 @@ struct qedr_qp {
 	/* Relevant to qps created from user space only (applications) */
 	struct qedr_userq usq;
 	struct qedr_userq urq;
-	atomic_t refcnt;
-	bool destroyed;
+
+	/* synchronization objects used with iwarp ep */
+	struct kref refcnt;
+	struct completion iwarp_cm_comp;
+	unsigned long iwarp_cm_flags; /* enum iwarp_cm_flags */
 };
 
 struct qedr_ah {
@@ -531,7 +546,7 @@ struct qedr_iw_ep {
 	struct iw_cm_id	*cm_id;
 	struct qedr_qp	*qp;
 	void		*qed_context;
-	u8		during_connect;
+	struct kref	refcnt;
 };
 
 static inline
diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
index 22881d4442b9..ebc6bc25a0e2 100644
--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
@@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct qed_iwarp_cm_info *cm_info,
 	}
 }
 
+static void qedr_iw_free_qp(struct kref *ref)
+{
+	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
+
+	xa_erase_irq(&qp->dev->qps, qp->qp_id);
+	kfree(qp);
+}
+
+static void
+qedr_iw_free_ep(struct kref *ref)
+{
+	struct qedr_iw_ep *ep = container_of(ref, struct qedr_iw_ep, refcnt);
+
+	if (ep->qp)
+		kref_put(&ep->qp->refcnt, qedr_iw_free_qp);
+
+	if (ep->cm_id)
+		ep->cm_id->rem_ref(ep->cm_id);
+
+	kfree(ep);
+}
+
 static void
 qedr_iw_mpa_request(void *context, struct qed_iwarp_cm_event_params *params)
 {
@@ -93,6 +115,7 @@ qedr_iw_mpa_request(void *context, struct qed_iwarp_cm_event_params *params)
 
 	ep->dev = dev;
 	ep->qed_context = params->ep_context;
+	kref_init(&ep->refcnt);
 
 	memset(&event, 0, sizeof(event));
 	event.event = IW_CM_EVENT_CONNECT_REQUEST;
@@ -141,12 +164,10 @@ qedr_iw_close_event(void *context, struct qed_iwarp_cm_event_params *params)
 {
 	struct qedr_iw_ep *ep = (struct qedr_iw_ep *)context;
 
-	if (ep->cm_id) {
+	if (ep->cm_id)
 		qedr_iw_issue_event(context, params, IW_CM_EVENT_CLOSE);
 
-		ep->cm_id->rem_ref(ep->cm_id);
-		ep->cm_id = NULL;
-	}
+	kref_put(&ep->refcnt, qedr_iw_free_ep);
 }
 
 static void
@@ -186,11 +207,13 @@ static void qedr_iw_disconnect_worker(struct work_struct *work)
 	struct qedr_qp *qp = ep->qp;
 	struct iw_cm_event event;
 
-	if (qp->destroyed) {
-		kfree(dwork);
-		qedr_iw_qp_rem_ref(&qp->ibqp);
-		return;
-	}
+	/* The qp won't be released until we release the ep.
+	 * the ep's refcnt was increased before calling this
+	 * function, therefore it is safe to access qp
+	 */
+	if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_DISCONNECT,
+			     &qp->iwarp_cm_flags))
+		goto out;
 
 	memset(&event, 0, sizeof(event));
 	event.status = dwork->status;
@@ -204,7 +227,6 @@ static void qedr_iw_disconnect_worker(struct work_struct *work)
 	else
 		qp_params.new_state = QED_ROCE_QP_STATE_SQD;
 
-	kfree(dwork);
 
 	if (ep->cm_id)
 		ep->cm_id->event_handler(ep->cm_id, &event);
@@ -214,7 +236,10 @@ static void qedr_iw_disconnect_worker(struct work_struct *work)
 
 	dev->ops->rdma_modify_qp(dev->rdma_ctx, qp->qed_qp, &qp_params);
 
-	qedr_iw_qp_rem_ref(&qp->ibqp);
+	complete(&ep->qp->iwarp_cm_comp);
+out:
+	kfree(dwork);
+	kref_put(&ep->refcnt, qedr_iw_free_ep);
 }
 
 static void
@@ -224,13 +249,18 @@ qedr_iw_disconnect_event(void *context,
 	struct qedr_discon_work *work;
 	struct qedr_iw_ep *ep = (struct qedr_iw_ep *)context;
 	struct qedr_dev *dev = ep->dev;
-	struct qedr_qp *qp = ep->qp;
 
 	work = kzalloc(sizeof(*work), GFP_ATOMIC);
 	if (!work)
 		return;
 
-	qedr_iw_qp_add_ref(&qp->ibqp);
+	/* We can't get a close event before disconnect, but since
+	 * we're scheduling a work queue we need to make sure close
+	 * won't delete the ep, so we increase the refcnt
+	 */
+	if (!kref_get_unless_zero(&ep->refcnt))
+		return;
+
 	work->ep = ep;
 	work->event = params->event;
 	work->status = params->status;
@@ -252,16 +282,30 @@ qedr_iw_passive_complete(void *context,
 	if ((params->status == -ECONNREFUSED) && (!ep->qp)) {
 		DP_DEBUG(dev, QEDR_MSG_IWARP,
 			 "PASSIVE connection refused releasing ep...\n");
-		kfree(ep);
+		kref_put(&ep->refcnt, qedr_iw_free_ep);
 		return;
 	}
 
+	complete(&ep->qp->iwarp_cm_comp);
 	qedr_iw_issue_event(context, params, IW_CM_EVENT_ESTABLISHED);
 
 	if (params->status < 0)
 		qedr_iw_close_event(context, params);
 }
 
+static void
+qedr_iw_active_complete(void *context,
+			struct qed_iwarp_cm_event_params *params)
+{
+	struct qedr_iw_ep *ep = (struct qedr_iw_ep *)context;
+
+	complete(&ep->qp->iwarp_cm_comp);
+	qedr_iw_issue_event(context, params, IW_CM_EVENT_CONNECT_REPLY);
+
+	if (params->status < 0)
+		kref_put(&ep->refcnt, qedr_iw_free_ep);
+}
+
 static int
 qedr_iw_mpa_reply(void *context, struct qed_iwarp_cm_event_params *params)
 {
@@ -288,27 +332,15 @@ qedr_iw_event_handler(void *context, struct qed_iwarp_cm_event_params *params)
 		qedr_iw_mpa_reply(context, params);
 		break;
 	case QED_IWARP_EVENT_PASSIVE_COMPLETE:
-		ep->during_connect = 0;
 		qedr_iw_passive_complete(context, params);
 		break;
-
 	case QED_IWARP_EVENT_ACTIVE_COMPLETE:
-		ep->during_connect = 0;
-		qedr_iw_issue_event(context,
-				    params,
-				    IW_CM_EVENT_CONNECT_REPLY);
-		if (params->status < 0) {
-			struct qedr_iw_ep *ep = (struct qedr_iw_ep *)context;
-
-			ep->cm_id->rem_ref(ep->cm_id);
-			ep->cm_id = NULL;
-		}
+		qedr_iw_active_complete(context, params);
 		break;
 	case QED_IWARP_EVENT_DISCONNECT:
 		qedr_iw_disconnect_event(context, params);
 		break;
 	case QED_IWARP_EVENT_CLOSE:
-		ep->during_connect = 0;
 		qedr_iw_close_event(context, params);
 		break;
 	case QED_IWARP_EVENT_RQ_EMPTY:
@@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		return -ENOMEM;
 
 	ep->dev = dev;
+	kref_init(&ep->refcnt);
+
+	kref_get(&qp->refcnt);
 	ep->qp = qp;
-	qp->ep = ep;
 	cm_id->add_ref(cm_id);
 	ep->cm_id = cm_id;
 
@@ -580,7 +614,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	in_params.qp = qp->qed_qp;
 	memcpy(in_params.local_mac_addr, dev->ndev->dev_addr, ETH_ALEN);
 
-	ep->during_connect = 1;
+	if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+			     &qp->iwarp_cm_flags))
+		goto err; /* QP already being destroyed */
+
 	rc = dev->ops->iwarp_connect(dev->rdma_ctx, &in_params, &out_params);
 	if (rc)
 		goto err;
@@ -588,8 +625,8 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	return rc;
 
 err:
-	cm_id->rem_ref(cm_id);
-	kfree(ep);
+	complete(&qp->iwarp_cm_comp);
+	kref_put(&ep->refcnt, qedr_iw_free_ep);
 	return rc;
 }
 
@@ -677,7 +714,7 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	struct qedr_dev *dev = ep->dev;
 	struct qedr_qp *qp;
 	struct qed_iwarp_accept_in params;
-	int rc;
+	int rc = 0;
 
 	DP_DEBUG(dev, QEDR_MSG_IWARP, "Accept on qpid=%d\n", conn_param->qpn);
 
@@ -687,8 +724,8 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		return -EINVAL;
 	}
 
+	kref_get(&qp->refcnt);
 	ep->qp = qp;
-	qp->ep = ep;
 	cm_id->add_ref(cm_id);
 	ep->cm_id = cm_id;
 
@@ -700,15 +737,19 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	params.ird = conn_param->ird;
 	params.ord = conn_param->ord;
 
-	ep->during_connect = 1;
+	if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+			     &qp->iwarp_cm_flags))
+		goto err; /* QP already destroyed */
+
 	rc = dev->ops->iwarp_accept(dev->rdma_ctx, &params);
 	if (rc)
 		goto err;
 
 	return rc;
 err:
-	ep->during_connect = 0;
-	cm_id->rem_ref(cm_id);
+	complete(&qp->iwarp_cm_comp);
+	kref_put(&ep->refcnt, qedr_iw_free_ep);
+
 	return rc;
 }
 
@@ -731,17 +772,14 @@ void qedr_iw_qp_add_ref(struct ib_qp *ibqp)
 {
 	struct qedr_qp *qp = get_qedr_qp(ibqp);
 
-	atomic_inc(&qp->refcnt);
+	kref_get(&qp->refcnt);
 }
 
 void qedr_iw_qp_rem_ref(struct ib_qp *ibqp)
 {
 	struct qedr_qp *qp = get_qedr_qp(ibqp);
 
-	if (atomic_dec_and_test(&qp->refcnt)) {
-		xa_erase_irq(&qp->dev->qps, qp->qp_id);
-		kfree(qp);
-	}
+	kref_put(&qp->refcnt, qedr_iw_free_qp);
 }
 
 struct ib_qp *qedr_iw_get_qp(struct ib_device *ibdev, int qpn)
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 6f3ce86019b7..dbb0b0000594 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -51,6 +51,7 @@
 #include "verbs.h"
 #include <rdma/qedr-abi.h>
 #include "qedr_roce_cm.h"
+#include "qedr_iw_cm.h"
 
 #define QEDR_SRQ_WQE_ELEM_SIZE	sizeof(union rdma_srq_elm)
 #define	RDMA_MAX_SGE_PER_SRQ	(4)
@@ -1193,7 +1194,10 @@ static void qedr_set_common_qp_params(struct qedr_dev *dev,
 				      struct ib_qp_init_attr *attrs)
 {
 	spin_lock_init(&qp->q_lock);
-	atomic_set(&qp->refcnt, 1);
+	if (rdma_protocol_iwarp(&dev->ibdev, 1)) {
+		kref_init(&qp->refcnt);
+		init_completion(&qp->iwarp_cm_comp);
+	}
 	qp->pd = pd;
 	qp->qp_type = attrs->qp_type;
 	qp->max_inline_data = attrs->cap.max_inline_data;
@@ -1592,6 +1596,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
 	int alloc_and_init = rdma_protocol_roce(&dev->ibdev, 1);
 	int rc = -EINVAL;
 
+	qp->create_type = QEDR_QP_CREATE_USER;
 	memset(&ureq, 0, sizeof(ureq));
 	rc = ib_copy_from_udata(&ureq, udata, sizeof(ureq));
 	if (rc) {
@@ -1805,6 +1810,7 @@ static int qedr_create_kernel_qp(struct qedr_dev *dev,
 	u32 n_sq_entries;
 
 	memset(&in_params, 0, sizeof(in_params));
+	qp->create_type = QEDR_QP_CREATE_KERNEL;
 
 	/* A single work request may take up to QEDR_MAX_SQ_WQE_SIZE elements in
 	 * the ring. The ring should allow at least a single WR, even if the
@@ -2437,7 +2443,7 @@ static int qedr_free_qp_resources(struct qedr_dev *dev, struct qedr_qp *qp,
 			return rc;
 	}
 
-	if (udata)
+	if (qp->create_type == QEDR_QP_CREATE_USER)
 		qedr_cleanup_user(dev, qp);
 	else
 		qedr_cleanup_kernel(dev, qp);
@@ -2467,22 +2473,14 @@ int qedr_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 			qedr_modify_qp(ibqp, &attr, attr_mask, NULL);
 		}
 	} else {
-		/* Wait for the connect/accept to complete */
-		if (qp->ep) {
-			int wait_count = 1;
-
-			while (qp->ep->during_connect) {
-				DP_DEBUG(dev, QEDR_MSG_QP,
-					 "Still in during connect/accept\n");
-
-				msleep(100);
-				if (wait_count++ > 200) {
-					DP_NOTICE(dev,
-						  "during connect timeout\n");
-					break;
-				}
-			}
-		}
+		/* Wait for the connection setup to complete */
+		if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+				     &qp->iwarp_cm_flags))
+			wait_for_completion(&qp->iwarp_cm_comp);
+
+		if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_DISCONNECT,
+				     &qp->iwarp_cm_flags))
+			wait_for_completion(&qp->iwarp_cm_comp);
 	}
 
 	if (qp->qp_type == IB_QPT_GSI)
@@ -2490,11 +2488,11 @@ int qedr_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 
 	qedr_free_qp_resources(dev, qp, udata);
 
-	if (atomic_dec_and_test(&qp->refcnt) &&
-	    rdma_protocol_iwarp(&dev->ibdev, 1)) {
-		xa_erase_irq(&dev->qps, qp->qp_id);
+	if (rdma_protocol_iwarp(&dev->ibdev, 1))
+		qedr_iw_qp_rem_ref(&qp->ibqp);
+	else
 		kfree(qp);
-	}
+
 	return 0;
 }
 
-- 
2.14.5



* [PATCH rdma-next 2/2] RDMA/qedr: Fix memory leak in user qp and mr
  2019-10-03 12:03 [PATCH rdma-next 0/2] RDMA/qedr: Fix memory leaks and synchronization Michal Kalderon
  2019-10-03 12:03 ` [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr Michal Kalderon
@ 2019-10-03 12:03 ` Michal Kalderon
  1 sibling, 0 replies; 9+ messages in thread
From: Michal Kalderon @ 2019-10-03 12:03 UTC (permalink / raw)
  To: michal.kalderon, aelior, dledford, jgg; +Cc: linux-rdma, Ariel Elior

User QP PBLs were not freed properly.
MR PBLs were not freed properly.

Fixes: e0290cce6ac0 ("qedr: Add support for memory registeration verbs")
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
---
 drivers/infiniband/hw/qedr/verbs.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index dbb0b0000594..8d4164380984 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -1581,6 +1581,14 @@ static void qedr_cleanup_user(struct qedr_dev *dev, struct qedr_qp *qp)
 
 	ib_umem_release(qp->urq.umem);
 	qp->urq.umem = NULL;
+
+	if (rdma_protocol_roce(&dev->ibdev, 1)) {
+		qedr_free_pbl(dev, &qp->usq.pbl_info, qp->usq.pbl_tbl);
+		qedr_free_pbl(dev, &qp->urq.pbl_info, qp->urq.pbl_tbl);
+	} else {
+		kfree(qp->usq.pbl_tbl);
+		kfree(qp->urq.pbl_tbl);
+	}
 }
 
 static int qedr_create_user_qp(struct qedr_dev *dev,
@@ -2671,8 +2679,8 @@ int qedr_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata)
 
 	dev->ops->rdma_free_tid(dev->rdma_ctx, mr->hw_mr.itid);
 
-	if ((mr->type != QEDR_MR_DMA) && (mr->type != QEDR_MR_FRMR))
-		qedr_free_pbl(dev, &mr->info.pbl_info, mr->info.pbl_table);
+	if (mr->type != QEDR_MR_DMA)
+		free_mr_info(dev, &mr->info);
 
 	/* it could be user registered memory. */
 	ib_umem_release(mr->umem);
-- 
2.14.5



* Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-03 12:03 ` [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr Michal Kalderon
@ 2019-10-03 16:16   ` Jason Gunthorpe
  2019-10-03 19:33     ` [EXT] " Michal Kalderon
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2019-10-03 16:16 UTC (permalink / raw)
  To: Michal Kalderon; +Cc: aelior, dledford, linux-rdma, Ariel Elior

On Thu, Oct 03, 2019 at 03:03:41PM +0300, Michal Kalderon wrote:

> diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> index 22881d4442b9..ebc6bc25a0e2 100644
> +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct qed_iwarp_cm_info *cm_info,
>  	}
>  }
>  
> +static void qedr_iw_free_qp(struct kref *ref)
> +{
> +	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
> +
> +	xa_erase_irq(&qp->dev->qps, qp->qp_id);

why is it _irq? Where are we in an irq when using the xa_lock on this xarray?


> +	kfree(qp);

[..]

> @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
>  		return -ENOMEM;
>  
>  	ep->dev = dev;
> +	kref_init(&ep->refcnt);
> +
> +	kref_get(&qp->refcnt);

Here 'qp' comes out of an xa_load, but the QP is still visible in the
xarray with a 0 refcount, so this is invalid.

Also, the xa_load doesn't have any locking around it, so the entire
thing looks wrong to me.

Most probably you want to hold the xa_lock during xa_load and then use
kref_get_unless_zero() - failure of the get also means the qp is not in
the xarray.
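
Roughly (untested sketch; 'qpn' stands for whatever QP number the core
hands the driver for the lookup):

	struct qedr_qp *qp;

	xa_lock(&dev->qps);
	qp = xa_load(&dev->qps, qpn);
	if (qp && !kref_get_unless_zero(&qp->refcnt))
		qp = NULL;	/* already on its way to being freed */
	xa_unlock(&dev->qps);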

Or rework things so the qp is removed from the xarray earlier, I'm not
sure.

Jason


* RE: [EXT] Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-03 16:16   ` Jason Gunthorpe
@ 2019-10-03 19:33     ` Michal Kalderon
  2019-10-04  0:36       ` Jason Gunthorpe
  0 siblings, 1 reply; 9+ messages in thread
From: Michal Kalderon @ 2019-10-03 19:33 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Ariel Elior, dledford, linux-rdma, Ariel Elior

> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Thursday, October 3, 2019 7:17 PM
> On Thu, Oct 03, 2019 at 03:03:41PM +0300, Michal Kalderon wrote:
> 
> > diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > index 22881d4442b9..ebc6bc25a0e2 100644
> > +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct qed_iwarp_cm_info
> *cm_info,
> >  	}
> >  }
> >
> > +static void qedr_iw_free_qp(struct kref *ref) {
> > +	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
> > +
> > +	xa_erase_irq(&qp->dev->qps, qp->qp_id);
> 
> why is it _irq? Where are we in an irq when using the xa_lock on this xarray?
We could be under a spin lock when called from several locations in core/iwcm.c
> 
> 
> > +	kfree(qp);
> 
> [..]
> 
> > @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id,
> struct iw_cm_conn_param *conn_param)
> >  		return -ENOMEM;
> >
> >  	ep->dev = dev;
> > +	kref_init(&ep->refcnt);
> > +
> > +	kref_get(&qp->refcnt);
> 
> Here 'qp' comes out of an xa_load, but the QP is still visible in the xarray with
> a 0 refcount, so this is invalid.
The core/iwcm takes a refcount on the QP before calling connect, so it can't
have a refcount of zero.

> 
> Also, the xa_load doesn't have any locking around it, so the entire thing looks
> wrong to me.
Since the functions calling it from core/iwcm (connect/accept) take a QP
refcount before calling, there's no risk of the entry being deleted while
xa_load is called.

> 
> Most probably you want to hold the xa_lock during xa_load and then use a
> kref_get_not_zero - failure of the get also means the qp is not in the xarray
> 
> Or rework things so the qp is removed from the xarray earlier, I'm not sure.
This would make qedr more robust, but I think it is not needed given the
existing core/iwcm implementation.

Thanks,
Michal
> 
> Jason


* Re: [EXT] Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-03 19:33     ` [EXT] " Michal Kalderon
@ 2019-10-04  0:36       ` Jason Gunthorpe
  2019-10-04 17:10         ` Michal Kalderon
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2019-10-04  0:36 UTC (permalink / raw)
  To: Michal Kalderon; +Cc: Ariel Elior, dledford, linux-rdma

On Thu, Oct 03, 2019 at 07:33:00PM +0000, Michal Kalderon wrote:
> > From: Jason Gunthorpe <jgg@ziepe.ca>
> > Sent: Thursday, October 3, 2019 7:17 PM
> > On Thu, Oct 03, 2019 at 03:03:41PM +0300, Michal Kalderon wrote:
> > 
> > > diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > index 22881d4442b9..ebc6bc25a0e2 100644
> > > +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct qed_iwarp_cm_info
> > *cm_info,
> > >  	}
> > >  }
> > >
> > > +static void qedr_iw_free_qp(struct kref *ref) {
> > > +	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
> > > +
> > > +	xa_erase_irq(&qp->dev->qps, qp->qp_id);
> > 
> > why is it _irq? Where are we in an irq when using the xa_lock on this xarray?
> We could be under a spin lock when called from several locations in core/iwcm.c

spinlock is OK, _irq is only needed if the code needs to mask IRQs
because there is a user of the same lock in an IRQ context, see the
documentation.
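
For example (illustrative only, not taken from the patch):

	/* Plain variant: takes the xa_lock without masking interrupts.
	 * Sufficient when the xarray's lock is never taken from IRQ
	 * context.
	 */
	xa_erase(&qp->dev->qps, qp->qp_id);

	/* _irq variant: only needed when some other path takes the same
	 * xa_lock from interrupt context.
	 */
	xa_erase_irq(&qp->dev->qps, qp->qp_id);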

> > > @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id,
> > struct iw_cm_conn_param *conn_param)
> > >  		return -ENOMEM;
> > >
> > >  	ep->dev = dev;
> > > +	kref_init(&ep->refcnt);
> > > +
> > > +	kref_get(&qp->refcnt);
> > 
> > Here 'qp' comes out of an xa_load, but the QP is still visible in the xarray with
> > a 0 refcount, so this is invalid.

> The core/iwcm takes a refcnt of the QP before calling connect, so it can't be with
> refcnt zero

> > Also, the xa_load doesn't have any locking around it, so the entire thing looks
> > wrong to me.
> Since the functions calling it from core/iwcm ( connect / accept ) take a qp
> Ref-cnt before the calling there's no risk of the entry being deleted while
> xa_load is called

Then why look it up in an xarray at all? If you already have the
pointer to get a refcount then pass the refcounted pointer in and get
rid of the sketchy xarray lookup.

I'm skeptical of this explanation

Jason


* RE: [EXT] Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-04  0:36       ` Jason Gunthorpe
@ 2019-10-04 17:10         ` Michal Kalderon
  2019-10-04 17:28           ` Jason Gunthorpe
  0 siblings, 1 reply; 9+ messages in thread
From: Michal Kalderon @ 2019-10-04 17:10 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Ariel Elior, dledford, linux-rdma

> From: linux-rdma-owner@vger.kernel.org <linux-rdma-
> owner@vger.kernel.org> On Behalf Of Jason Gunthorpe
> 
> On Thu, Oct 03, 2019 at 07:33:00PM +0000, Michal Kalderon wrote:
> > > From: Jason Gunthorpe <jgg@ziepe.ca>
> > > Sent: Thursday, October 3, 2019 7:17 PM On Thu, Oct 03, 2019 at
> > > 03:03:41PM +0300, Michal Kalderon wrote:
> > >
> > > > diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > index 22881d4442b9..ebc6bc25a0e2 100644
> > > > +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct
> > > > qed_iwarp_cm_info
> > > *cm_info,
> > > >  	}
> > > >  }
> > > >
> > > > +static void qedr_iw_free_qp(struct kref *ref) {
> > > > +	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
> > > > +
> > > > +	xa_erase_irq(&qp->dev->qps, qp->qp_id);
> > >
> > > why is it _irq? Where are we in an irq when using the xa_lock on this
> xarray?
> > We could be under a spin lock when called from several locations in
> > core/iwcm.c
> 
> spinlock is OK, _irq is only needed if the code needs to mask IRQs because
> there is a user of the same lock in an IRQ context, see the documentation.
> 
> > > > @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id,
> > > struct iw_cm_conn_param *conn_param)
> > > >  		return -ENOMEM;
> > > >
> > > >  	ep->dev = dev;
> > > > +	kref_init(&ep->refcnt);
> > > > +
> > > > +	kref_get(&qp->refcnt);
> > >
> > > Here 'qp' comes out of an xa_load, but the QP is still visible in
> > > the xarray with a 0 refcount, so this is invalid.
> 
> > The core/iwcm takes a refcnt of the QP before calling connect, so it
> > can't be with refcnt zero
> 
> > > Also, the xa_load doesn't have any locking around it, so the entire
> > > thing looks wrong to me.
> > Since the functions calling it from core/iwcm ( connect / accept )
> > take a qp Ref-cnt before the calling there's no risk of the entry
> > being deleted while xa_load is called
> 
> Then why look it up in an xarray at all? If you already have the pointer to get a
> refcount then pass the refcounted pointer in and get rid of the sketchy
> xarray lookup.
> 
I don't have the pointer, the core/iwcm has the pointer.
The interface between the core and the driver is that the driver gets a qp
number from the core/iwcm and needs to look up the QP pointer in its own
database. All the iWARP drivers are implemented this way; this is also not
new to qedr.
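
For reference, the lookup being described is essentially qedr's get_qp
callback (simplified sketch of the existing code, not part of this patch):

	struct ib_qp *qedr_iw_get_qp(struct ib_device *ibdev, int qpn)
	{
		struct qedr_dev *dev = get_qedr_dev(ibdev);

		return xa_load(&dev->qps, qpn);
	}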

> I'm skeptical of this explanation
> 
> Jason


* Re: [EXT] Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-04 17:10         ` Michal Kalderon
@ 2019-10-04 17:28           ` Jason Gunthorpe
  2019-10-06 19:49             ` Michal Kalderon
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2019-10-04 17:28 UTC (permalink / raw)
  To: Michal Kalderon; +Cc: Ariel Elior, dledford, linux-rdma

On Fri, Oct 04, 2019 at 05:10:20PM +0000, Michal Kalderon wrote:
> > From: linux-rdma-owner@vger.kernel.org <linux-rdma-
> > owner@vger.kernel.org> On Behalf Of Jason Gunthorpe
> > 
> > On Thu, Oct 03, 2019 at 07:33:00PM +0000, Michal Kalderon wrote:
> > > > From: Jason Gunthorpe <jgg@ziepe.ca>
> > > > Sent: Thursday, October 3, 2019 7:17 PM On Thu, Oct 03, 2019 at
> > > > 03:03:41PM +0300, Michal Kalderon wrote:
> > > >
> > > > > diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > index 22881d4442b9..ebc6bc25a0e2 100644
> > > > > +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct
> > > > > qed_iwarp_cm_info
> > > > *cm_info,
> > > > >  	}
> > > > >  }
> > > > >
> > > > > +static void qedr_iw_free_qp(struct kref *ref) {
> > > > > +	struct qedr_qp *qp = container_of(ref, struct qedr_qp, refcnt);
> > > > > +
> > > > > +	xa_erase_irq(&qp->dev->qps, qp->qp_id);
> > > >
> > > > why is it _irq? Where are we in an irq when using the xa_lock on this
> > xarray?
> > > We could be under a spin lock when called from several locations in
> > > core/iwcm.c
> > 
> > spinlock is OK, _irq is only needed if the code needs to mask IRQs because
> > there is a user of the same lock in an IRQ context, see the documentation.
> > 
> > > > > @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id,
> > > > struct iw_cm_conn_param *conn_param)
> > > > >  		return -ENOMEM;
> > > > >
> > > > >  	ep->dev = dev;
> > > > > +	kref_init(&ep->refcnt);
> > > > > +
> > > > > +	kref_get(&qp->refcnt);
> > > >
> > > > Here 'qp' comes out of an xa_load, but the QP is still visible in
> > > > the xarray with a 0 refcount, so this is invalid.
> > 
> > > The core/iwcm takes a refcnt of the QP before calling connect, so it
> > > can't be with refcnt zero
> > 
> > > > Also, the xa_load doesn't have any locking around it, so the entire
> > > > thing looks wrong to me.
> > > Since the functions calling it from core/iwcm ( connect / accept )
> > > take a qp Ref-cnt before the calling there's no risk of the entry
> > > being deleted while xa_load is called
> > 
> > Then why look it up in an xarray at all? If you already have the pointer to get a
> > refcount then pass the refcounted pointer in and get rid of the sketchy
> > xarray lookup.
> > 
> I don't have the pointer, the core/iwcm has the pointer. 
> The interface between the core and driver is that the driver gets a qp number from
> the core/iwcm and needs to get the QP pointer from it's database. All the iWARP drivers
> are implemented this way, this is also not new to qedr. 

That seems crazy.

Jason


* RE: [EXT] Re: [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr
  2019-10-04 17:28           ` Jason Gunthorpe
@ 2019-10-06 19:49             ` Michal Kalderon
  0 siblings, 0 replies; 9+ messages in thread
From: Michal Kalderon @ 2019-10-06 19:49 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Ariel Elior, dledford, linux-rdma

> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Friday, October 4, 2019 8:28 PM
> 
> On Fri, Oct 04, 2019 at 05:10:20PM +0000, Michal Kalderon wrote:
> > > From: linux-rdma-owner@vger.kernel.org <linux-rdma-
> > > owner@vger.kernel.org> On Behalf Of Jason Gunthorpe
> > >
> > > On Thu, Oct 03, 2019 at 07:33:00PM +0000, Michal Kalderon wrote:
> > > > > From: Jason Gunthorpe <jgg@ziepe.ca>
> > > > > Sent: Thursday, October 3, 2019 7:17 PM On Thu, Oct 03, 2019 at
> > > > > 03:03:41PM +0300, Michal Kalderon wrote:
> > > > >
> > > > > > diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > > b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > > index 22881d4442b9..ebc6bc25a0e2 100644
> > > > > > +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
> > > > > > @@ -79,6 +79,28 @@ qedr_fill_sockaddr6(const struct
> > > > > > qed_iwarp_cm_info
> > > > > *cm_info,
> > > > > >  	}
> > > > > >  }
> > > > > >
> > > > > > +static void qedr_iw_free_qp(struct kref *ref) {
> > > > > > +	struct qedr_qp *qp = container_of(ref, struct qedr_qp,
> > > > > > +refcnt);
> > > > > > +
> > > > > > +	xa_erase_irq(&qp->dev->qps, qp->qp_id);
> > > > >
> > > > > why is it _irq? Where are we in an irq when using the xa_lock on
> > > > > this
> > > xarray?
> > > > We could be under a spin lock when called from several locations
> > > > in core/iwcm.c
> > >
> > > spinlock is OK, _irq is only needed if the code needs to mask IRQs
> > > because there is a user of the same lock in an IRQ context, see the
> documentation.
> > >
> > > > > > @@ -516,8 +548,10 @@ int qedr_iw_connect(struct iw_cm_id
> > > > > > *cm_id,
> > > > > struct iw_cm_conn_param *conn_param)
> > > > > >  		return -ENOMEM;
> > > > > >
> > > > > >  	ep->dev = dev;
> > > > > > +	kref_init(&ep->refcnt);
> > > > > > +
> > > > > > +	kref_get(&qp->refcnt);
> > > > >
> > > > > Here 'qp' comes out of an xa_load, but the QP is still visible
> > > > > in the xarray with a 0 refcount, so this is invalid.
> > >
> > > > The core/iwcm takes a refcnt of the QP before calling connect, so
> > > > it can't be with refcnt zero
> > >
> > > > > Also, the xa_load doesn't have any locking around it, so the
> > > > > entire thing looks wrong to me.
> > > > Since the functions calling it from core/iwcm ( connect / accept )
> > > > take a qp Ref-cnt before the calling there's no risk of the entry
> > > > being deleted while xa_load is called
> > >
> > > Then why look it up in an xarray at all? If you already have the
> > > pointer to get a refcount then pass the refcounted pointer in and
> > > get rid of the sketchy xarray lookup.
> > >
> > I don't have the pointer, the core/iwcm has the pointer.
> > The interface between the core and driver is that the driver gets a qp
> > number from the core/iwcm and needs to get the QP pointer from it's
> > database. All the iWARP drivers are implemented this way, this is also not
> new to qedr.
> 
> That seems crazy.

I can take an action item to look into redesigning this together with the other iWARP vendors.
For this series, which attempts to fix some leaks and concurrency issues in qedr,
are there any more issues besides the xa_erase_irq that you would want me to fix for v2?

Thanks,
Michal 

> Jason


end of thread

Thread overview: 9+ messages
2019-10-03 12:03 [PATCH rdma-next 0/2] RDMA/qedr: Fix memory leaks and synchronization Michal Kalderon
2019-10-03 12:03 ` [PATCH rdma-next 1/2] RDMA/qedr: Fix synchronization methods and memory leaks in qedr Michal Kalderon
2019-10-03 16:16   ` Jason Gunthorpe
2019-10-03 19:33     ` [EXT] " Michal Kalderon
2019-10-04  0:36       ` Jason Gunthorpe
2019-10-04 17:10         ` Michal Kalderon
2019-10-04 17:28           ` Jason Gunthorpe
2019-10-06 19:49             ` Michal Kalderon
2019-10-03 12:03 ` [PATCH rdma-next 2/2] RDMA/qedr: Fix memory leak in user qp and mr Michal Kalderon
