* [PATCH 0/4] Last WQE Reached event treatment
@ 2018-01-17 13:52 ` Max Gurtovoy
  0 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-17 13:52 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w, root

From: root <root-btHsVWeQhtoPzdh4EBCT85oR3lBoLCS1@public.gmane.org>

This series adds a new API to support draining a QP that is
associated with an SRQ (Shared Receive Queue).
A leakage problem can occur if the Last WQE Reached event is not handled:
1. the QP may have consumed SRQ WQEs for which no CQEs have been
   received yet (especially in error flows, which can take a long time)
2. polling the responder CQ at this stage is meaningless (since the
   CQEs haven't been posted yet)
3. destroying the QP at this point will leak SRQ credits

The solution, per the IB spec, is to:
1. modify the QP to the error state
2. wait for the Last WQE Reached event
3. poll the CQ to reap the SRQ WQE completions
4. destroy the QP.

This series builds on an earlier patchset that enables direct polling
of a CQ regardless of its type.
It also adds support for the NVMe-oF, iSER and SRP target ULPs.

Max Gurtovoy (4):
  IB/core: add support for draining Shared receive queues
  nvmet-rdma: notify QP on Last WQE Reached event arrival
  iser-target: remove dead code
  IB/srpt: notify QP on Last WQE Reached event arrival

 drivers/infiniband/core/verbs.c         | 69 ++++++++++++++++++++++++++++++++-
 drivers/infiniband/ulp/isert/ib_isert.c |  3 --
 drivers/infiniband/ulp/srpt/ib_srpt.c   |  1 +
 drivers/nvme/target/rdma.c              |  3 ++
 include/rdma/ib_verbs.h                 |  8 ++++
 5 files changed, 80 insertions(+), 4 deletions(-)

-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 13:52 ` Max Gurtovoy
@ 2018-01-17 13:52     ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-17 13:52 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w, Max Gurtovoy

To avoid a theoretical leakage for QPs associated with an SRQ,
follow the IB spec (section 10.3.1):

"Note, for QPs that are associated with an SRQ, the Consumer should take
the QP through the Error State before invoking a Destroy QP or a Modify
QP to the Reset State. The Consumer may invoke the Destroy QP without
first performing a Modify QP to the Error State and waiting for the Affiliated
Asynchronous Last WQE Reached Event. However, if the Consumer
does not wait for the Affiliated Asynchronous Last WQE Reached Event,
then WQE and Data Segment leakage may occur. Therefore, it is good
programming practice to tear down a QP that is associated with an SRQ
by using the following process:
 - Put the QP in the Error State;
 - wait for the Affiliated Asynchronous Last WQE Reached Event;
 - either:
   - drain the CQ by invoking the Poll CQ verb and either wait for CQ
     to be empty or the number of Poll CQ operations has exceeded
     CQ capacity size; or
   - post another WR that completes on the same CQ and wait for this
     WR to return as a WC;
 - and then invoke a Destroy QP or Reset QP."

Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/verbs.c | 69 ++++++++++++++++++++++++++++++++++++++++-
 include/rdma/ib_verbs.h         |  8 +++++
 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 7868727..7604450 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -886,8 +886,10 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
 		if (qp_init_attr->recv_cq)
 			atomic_inc(&qp_init_attr->recv_cq->usecnt);
 		qp->srq = qp_init_attr->srq;
-		if (qp->srq)
+		if (qp->srq) {
 			atomic_inc(&qp_init_attr->srq->usecnt);
+			init_completion(&qp->srq_completion);
+		}
 	}
 
 	qp->pd	    = pd;
@@ -1405,6 +1407,22 @@ int ib_get_eth_speed(struct ib_device *dev, u8 port_num, u8 *speed, u8 *width)
 }
 EXPORT_SYMBOL(ib_get_eth_speed);
 
+int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event)
+{
+	int ret = 0;
+
+	switch (event) {
+	case IB_EVENT_QP_LAST_WQE_REACHED:
+		complete(&qp->srq_completion);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(ib_notify_qp);
+
 int ib_modify_qp(struct ib_qp *qp,
 		 struct ib_qp_attr *qp_attr,
 		 int qp_attr_mask)
@@ -2213,6 +2231,53 @@ static void __ib_drain_rq(struct ib_qp *qp)
 		wait_for_completion(&rdrain.done);
 }
 
+/*
+ * __ib_drain_srq() - Block until the Last WQE Reached event arrives, or
+ *                    a timeout expires (best effort).
+ * @qp:               queue pair associated with the SRQ to drain
+ *
+ * In order to avoid WQE and data segment leakage, a QP associated with
+ * an SRQ should be destroyed only after:
+ *  - moving the QP to the error state
+ *  - waiting for the Affiliated Asynchronous Last WQE Reached Event
+ *  - draining the CQ
+ */
+static void __ib_drain_srq(struct ib_qp *qp)
+{
+	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+	struct ib_cq *cq;
+	int ret;
+
+	if (!qp->srq) {
+		WARN_ONCE(1, "QP 0x%p is not associated with SRQ\n", qp);
+		return;
+	}
+
+	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
+	if (ret) {
+		WARN_ONCE(ret, "failed to drain shared recv queue: %d\n", ret);
+		return;
+	}
+
+	if (ib_srq_has_cq(qp->srq->srq_type)) {
+		cq = qp->srq->ext.cq;
+	} else if (qp->recv_cq) {
+		cq = qp->recv_cq;
+	} else {
+		WARN_ONCE(1, "QP 0x%p has no CQ associated with SRQ\n", qp);
+		return;
+	}
+
+	/*
+	 * The ULP should invoke ib_notify_qp() on IB_EVENT_QP_LAST_WQE_REACHED
+	 * arrival; otherwise the timeout expires and leakage may occur.
+	 * Use a long timeout for buggy ULPs/HCAs that neither notify the
+	 * QP nor raise the IB_EVENT_QP_LAST_WQE_REACHED event.
+	 */
+	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
+		ib_process_cq_direct(cq, -1);
+}
+
 /**
  * ib_drain_sq() - Block until all SQ CQEs have been consumed by the
  *		   application.
@@ -2289,5 +2354,7 @@ void ib_drain_qp(struct ib_qp *qp)
 	ib_drain_sq(qp);
 	if (!qp->srq)
 		ib_drain_rq(qp);
+	else
+		__ib_drain_srq(qp);
 }
 EXPORT_SYMBOL(ib_drain_qp);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index fd84cda..c5febae 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1728,6 +1728,7 @@ struct ib_qp {
 	struct list_head	rdma_mrs;
 	struct list_head	sig_mrs;
 	struct ib_srq	       *srq;
+	struct completion	srq_completion;
 	struct ib_xrcd	       *xrcd; /* XRC TGT QPs only */
 	struct list_head	xrcd_list;
 
@@ -3060,6 +3061,13 @@ int ib_modify_qp(struct ib_qp *qp,
 		 int qp_attr_mask);
 
 /**
+ * ib_notify_qp - Notifies the QP for event arrival
+ * @qp: The QP to notify.
+ * @event: Specifies the event to notify.
+ */
+int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event);
+
+/**
  * ib_query_qp - Returns the attribute list and current values for the
  *   specified QP.
  * @qp: The QP to query.
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 2/4] nvmet-rdma: notify QP on Last WQE Reached event arrival
  2018-01-17 13:52 ` Max Gurtovoy
@ 2018-01-17 13:52     ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-17 13:52 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w, Max Gurtovoy

In order to avoid resource leakage for a QP associated with
a Shared Receive Queue (SRQ), notify it upon Last WQE Reached
event arrival.

Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/nvme/target/rdma.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4991290..99a14a7 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1151,6 +1151,9 @@ static void nvmet_rdma_qp_event(struct ib_event *event, void *priv)
 	case IB_EVENT_COMM_EST:
 		rdma_notify(queue->cm_id, event->event);
 		break;
+	case IB_EVENT_QP_LAST_WQE_REACHED:
+		ib_notify_qp(queue->cm_id->qp, event->event);
+		break;
 	default:
 		pr_err("received IB QP event: %s (%d)\n",
 		       ib_event_msg(event->event), event->event);
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 3/4] iser-target: remove dead code
  2018-01-17 13:52 ` Max Gurtovoy
@ 2018-01-17 13:52     ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-17 13:52 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w, Max Gurtovoy

The IB_EVENT_QP_LAST_WQE_REACHED event can arrive only for a
QP associated with a Shared Receive Queue (SRQ). The iSER
target does not use an SRQ.

Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/ulp/isert/ib_isert.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 720dfb3..bde22e1 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -81,9 +81,6 @@
 	case IB_EVENT_COMM_EST:
 		rdma_notify(isert_conn->cm_id, IB_EVENT_COMM_EST);
 		break;
-	case IB_EVENT_QP_LAST_WQE_REACHED:
-		isert_warn("Reached TX IB_EVENT_QP_LAST_WQE_REACHED\n");
-		break;
 	default:
 		break;
 	}
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 4/4] IB/srpt: notify QP on Last WQE Reached event arrival
  2018-01-17 13:52 ` Max Gurtovoy
@ 2018-01-17 13:52     ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-17 13:52 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w, Max Gurtovoy

In order to avoid resource leakage for a QP associated with
a Shared Receive Queue (SRQ), notify it upon Last WQE Reached
event arrival.

Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index bfa576a..241c8eb 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -209,6 +209,7 @@ static void srpt_qp_event(struct ib_event *event, struct srpt_rdma_ch *ch)
 		pr_debug("%s-%d, state %s: received Last WQE event.\n",
 			 ch->sess_name, ch->qp->qp_num,
 			 get_ch_state_name(ch->state));
+		ib_notify_qp(ch->qp, event->event);
 		break;
 	default:
 		pr_err("received unrecognized IB QP event %d\n", event->event);
-- 
1.8.3.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* RE: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 13:52     ` Max Gurtovoy
@ 2018-01-17 15:35         ` Steve Wise
  -1 siblings, 0 replies; 26+ messages in thread
From: Steve Wise @ 2018-01-17 15:35 UTC (permalink / raw)
  To: 'Max Gurtovoy',
	jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

> 
> To avoid a theoretical leakage for QPs associated with an SRQ,
> follow the IB spec (section 10.3.1):
> 
> "Note, for QPs that are associated with an SRQ, the Consumer should take
> the QP through the Error State before invoking a Destroy QP or a Modify
> QP to the Reset State. The Consumer may invoke the Destroy QP without
> first performing a Modify QP to the Error State and waiting for the
> Affiliated
> Asynchronous Last WQE Reached Event. However, if the Consumer
> does not wait for the Affiliated Asynchronous Last WQE Reached Event,
> then WQE and Data Segment leakage may occur. Therefore, it is good
> programming practice to tear down a QP that is associated with an SRQ
> by using the following process:
>  - Put the QP in the Error State;
>  - wait for the Affiliated Asynchronous Last WQE Reached Event;
>  - either:
>    - drain the CQ by invoking the Poll CQ verb and either wait for CQ
>      to be empty or the number of Poll CQ operations has exceeded
>      CQ capacity size; or
>    - post another WR that completes on the same CQ and wait for this
>      WR to return as a WC;
>  - and then invoke a Destroy QP or Reset QP."
> 
> Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/infiniband/core/verbs.c | 69
> ++++++++++++++++++++++++++++++++++++++++-
>  include/rdma/ib_verbs.h         |  8 +++++
>  2 files changed, 76 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/core/verbs.c
b/drivers/infiniband/core/verbs.c
> index 7868727..7604450 100644
> --- a/drivers/infiniband/core/verbs.c
> +++ b/drivers/infiniband/core/verbs.c
> @@ -886,8 +886,10 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
>  		if (qp_init_attr->recv_cq)
>  			atomic_inc(&qp_init_attr->recv_cq->usecnt);
>  		qp->srq = qp_init_attr->srq;
> -		if (qp->srq)
> +		if (qp->srq) {
>  			atomic_inc(&qp_init_attr->srq->usecnt);
> +			init_completion(&qp->srq_completion);
> +		}
>  	}
> 
>  	qp->pd	    = pd;
> @@ -1405,6 +1407,22 @@ int ib_get_eth_speed(struct ib_device *dev, u8
> port_num, u8 *speed, u8 *width)
>  }
>  EXPORT_SYMBOL(ib_get_eth_speed);
> 
> +int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event)
> +{
> +	int ret = 0;
> +
> +	switch (event) {
> +	case IB_EVENT_QP_LAST_WQE_REACHED:
> +		complete(&qp->srq_completion);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(ib_notify_qp);
> +
>  int ib_modify_qp(struct ib_qp *qp,
>  		 struct ib_qp_attr *qp_attr,
>  		 int qp_attr_mask)
> @@ -2213,6 +2231,53 @@ static void __ib_drain_rq(struct ib_qp *qp)
>  		wait_for_completion(&rdrain.done);
>  }
> 
> +/*
> + * __ib_drain_srq() - Block until all Last WQE Reached event arrives, or
> + *                    timeout expires (best effort).
> + * @qp:               queue pair associated with SRQ to drain
> + *
> + * In order to avoid WQE and data segment leakage, one should destroy
> + * QP associated after performing the following:
> + *  - moving QP to err state
> + *  - wait for the Affiliated Asynchronous Last WQE Reached Event
> + *  - drain the CQ
> + */
> +static void __ib_drain_srq(struct ib_qp *qp)
> +{
> +	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
> +	struct ib_cq *cq;
> +	int ret;
> +
> +	if (!qp->srq) {
> +		WARN_ONCE(1, "QP 0x%p is not associated with SRQ\n", qp);
> +		return;
> +	}
> +
> +	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
> +	if (ret) {
> +		WARN_ONCE(ret, "failed to drain shared recv queue: %d\n",
> ret);
> +		return;
> +	}
> +
> +	if (ib_srq_has_cq(qp->srq->srq_type)) {
> +		cq = qp->srq->ext.cq;
> +	} else if (qp->recv_cq) {
> +		cq = qp->recv_cq;
> +	} else {
> +		WARN_ONCE(1, "QP 0x%p has no CQ associated with SRQ\n",
> qp);
> +		return;
> +	}
> +
> +	/*
> +         * ULP should invoke ib_notify_qp on IB_EVENT_QP_LAST_WQE_REACHED
> +         * arrival, otherwise timeout will expire and leakage may occur.
> +         * Use long timeout, for the buggy ULPs/HCAs that don't notify
> the
> +         * QP nor raising IB_EVENT_QP_LAST_WQE_REACHED event.
> +         */
> +	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
> +		ib_process_cq_direct(cq, -1);
> +}
> +

Perhaps a WARN_ONCE is warranted? 


>  /**
>   * ib_drain_sq() - Block until all SQ CQEs have been consumed by the
>   *		   application.
> @@ -2289,5 +2354,7 @@ void ib_drain_qp(struct ib_qp *qp)
>  	ib_drain_sq(qp);
>  	if (!qp->srq)
>  		ib_drain_rq(qp);
> +	else
> +		__ib_drain_srq(qp);
>  }
>  EXPORT_SYMBOL(ib_drain_qp);
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index fd84cda..c5febae 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -1728,6 +1728,7 @@ struct ib_qp {
>  	struct list_head	rdma_mrs;
>  	struct list_head	sig_mrs;
>  	struct ib_srq	       *srq;
> +	struct completion	srq_completion;
>  	struct ib_xrcd	       *xrcd; /* XRC TGT QPs only */
>  	struct list_head	xrcd_list;
> 
> @@ -3060,6 +3061,13 @@ int ib_modify_qp(struct ib_qp *qp,
>  		 int qp_attr_mask);
> 
>  /**
> + * ib_notify_qp - Notifies the QP for event arrival
> + * @qp: The QP to notify.
> + * @event: Specifies the event to notify.
> + */
> +int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event);
> +
> +/**
>   * ib_query_qp - Returns the attribute list and current values for the
>   *   specified QP.
>   * @qp: The QP to query.
> --
> 1.8.3.1
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 1/4] IB/core: add support for draining Shared receive queues
@ 2018-01-17 15:35         ` Steve Wise
  0 siblings, 0 replies; 26+ messages in thread
From: Steve Wise @ 2018-01-17 15:35 UTC (permalink / raw)


> 
> To avoid theoretical leakage for QPs assocoated with SRQ,
> according to IB spec (section 10.3.1):
> 
> "Note, for QPs that are associated with an SRQ, the Consumer should take
> the QP through the Error State before invoking a Destroy QP or a Modify
> QP to the Reset State. The Consumer may invoke the Destroy QP without
> first performing a Modify QP to the Error State and waiting for the
Affiliated
> Asynchronous Last WQE Reached Event. However, if the Consumer
> does not wait for the Affiliated Asynchronous Last WQE Reached Event,
> then WQE and Data Segment leakage may occur. Therefore, it is good
> programming practice to tear down a QP that is associated with an SRQ
> by using the following process:
>  - Put the QP in the Error State;
>  - wait for the Affiliated Asynchronous Last WQE Reached Event;
>  - either:
>    - drain the CQ by invoking the Poll CQ verb and either wait for CQ
>      to be empty or the number of Poll CQ operations has exceeded
>      CQ capacity size; or
>    - post another WR that completes on the same CQ and wait for this
>      WR to return as a WC;
>  - and then invoke a Destroy QP or Reset QP."
> 
> Signed-off-by: Max Gurtovoy <maxg at mellanox.com>
> ---
>  drivers/infiniband/core/verbs.c | 69
> ++++++++++++++++++++++++++++++++++++++++-
>  include/rdma/ib_verbs.h         |  8 +++++
>  2 files changed, 76 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/core/verbs.c
b/drivers/infiniband/core/verbs.c
> index 7868727..7604450 100644
> --- a/drivers/infiniband/core/verbs.c
> +++ b/drivers/infiniband/core/verbs.c
> @@ -886,8 +886,10 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
>  		if (qp_init_attr->recv_cq)
>  			atomic_inc(&qp_init_attr->recv_cq->usecnt);
>  		qp->srq = qp_init_attr->srq;
> -		if (qp->srq)
> +		if (qp->srq) {
>  			atomic_inc(&qp_init_attr->srq->usecnt);
> +			init_completion(&qp->srq_completion);
> +		}
>  	}
> 
>  	qp->pd	    = pd;
> @@ -1405,6 +1407,22 @@ int ib_get_eth_speed(struct ib_device *dev, u8
> port_num, u8 *speed, u8 *width)
>  }
>  EXPORT_SYMBOL(ib_get_eth_speed);
> 
> +int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event)
> +{
> +	int ret = 0;
> +
> +	switch (event) {
> +	case IB_EVENT_QP_LAST_WQE_REACHED:
> +		complete(&qp->srq_completion);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(ib_notify_qp);
> +
>  int ib_modify_qp(struct ib_qp *qp,
>  		 struct ib_qp_attr *qp_attr,
>  		 int qp_attr_mask)
> @@ -2213,6 +2231,53 @@ static void __ib_drain_rq(struct ib_qp *qp)
>  		wait_for_completion(&rdrain.done);
>  }
> 
> +/*
> + * __ib_drain_srq() - Block until the Last WQE Reached event arrives, or
> + *                    the timeout expires (best effort).
> + * @qp:               queue pair associated with the SRQ to drain
> + *
> + * In order to avoid WQE and data segment leakage, one should destroy a
> + * QP associated with an SRQ only after performing the following:
> + *  - moving the QP to the error state
> + *  - waiting for the Affiliated Asynchronous Last WQE Reached Event
> + *  - draining the CQ
> + */
> +static void __ib_drain_srq(struct ib_qp *qp)
> +{
> +	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
> +	struct ib_cq *cq;
> +	int ret;
> +
> +	if (!qp->srq) {
> +		WARN_ONCE(1, "QP 0x%p is not associated with SRQ\n", qp);
> +		return;
> +	}
> +
> +	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
> +	if (ret) {
> +		WARN_ONCE(ret, "failed to drain shared recv queue: %d\n", ret);
> +		return;
> +	}
> +
> +	if (ib_srq_has_cq(qp->srq->srq_type)) {
> +		cq = qp->srq->ext.cq;
> +	} else if (qp->recv_cq) {
> +		cq = qp->recv_cq;
> +	} else {
> +		WARN_ONCE(1, "QP 0x%p has no CQ associated with SRQ\n", qp);
> +		return;
> +	}
> +
> +	/*
> +	 * The ULP should invoke ib_notify_qp on IB_EVENT_QP_LAST_WQE_REACHED
> +	 * arrival, otherwise the timeout will expire and leakage may occur.
> +	 * Use a long timeout for buggy ULPs/HCAs that neither notify the
> +	 * QP nor raise the IB_EVENT_QP_LAST_WQE_REACHED event.
> +	 */
> +	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
> +		ib_process_cq_direct(cq, -1);
> +}
> +

Perhaps a WARN_ONCE is warranted? 


>  /**
>   * ib_drain_sq() - Block until all SQ CQEs have been consumed by the
>   *		   application.
> @@ -2289,5 +2354,7 @@ void ib_drain_qp(struct ib_qp *qp)
>  	ib_drain_sq(qp);
>  	if (!qp->srq)
>  		ib_drain_rq(qp);
> +	else
> +		__ib_drain_srq(qp);
>  }
>  EXPORT_SYMBOL(ib_drain_qp);
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index fd84cda..c5febae 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -1728,6 +1728,7 @@ struct ib_qp {
>  	struct list_head	rdma_mrs;
>  	struct list_head	sig_mrs;
>  	struct ib_srq	       *srq;
> +	struct completion	srq_completion;
>  	struct ib_xrcd	       *xrcd; /* XRC TGT QPs only */
>  	struct list_head	xrcd_list;
> 
> @@ -3060,6 +3061,13 @@ int ib_modify_qp(struct ib_qp *qp,
>  		 int qp_attr_mask);
> 
>  /**
> + * ib_notify_qp - Notifies the QP for event arrival
> + * @qp: The QP to notify.
> + * @event: Specifies the event to notify.
> + */
> +int ib_notify_qp(struct ib_qp *qp, enum ib_event_type event);
> +
> +/**
>   * ib_query_qp - Returns the attribute list and current values for the
>   *   specified QP.
>   * @qp: The QP to query.
> --
> 1.8.3.1
> 

* RE: [PATCH 2/4] nvmet-rdma: notify QP on Last WQE Reached event arrival
  2018-01-17 13:52     ` Max Gurtovoy
@ 2018-01-17 15:37         ` Steve Wise
  -1 siblings, 0 replies; 26+ messages in thread
From: Steve Wise @ 2018-01-17 15:37 UTC (permalink / raw)
  To: 'Max Gurtovoy',
	jgg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	sagi-NQWnxTmZq1alnMjI0IkVqw, bart.vanassche-Sjgp3cTcYWE,
	hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

> 
> In order to avoid resource leakage for QP associated with
> a Shared Receive Queue (SRQ), notify it on Last WQE Reached
> event arrival.
> 
> Signed-off-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---
>  drivers/nvme/target/rdma.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 4991290..99a14a7 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -1151,6 +1151,9 @@ static void nvmet_rdma_qp_event(struct ib_event *event, void *priv)
>  	case IB_EVENT_COMM_EST:
>  		rdma_notify(queue->cm_id, event->event);
>  		break;
> +	case IB_EVENT_QP_LAST_WQE_REACHED:
> +		ib_notify_qp(queue->cm_id->qp, event->event);
> +		break;
>  	default:
>  		pr_err("received IB QP event: %s (%d)\n",
>  		       ib_event_msg(event->event), event->event);

I wonder if this could be handled in ib_dispatch_event() for all ULPs?




* Re: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 13:52     ` Max Gurtovoy
@ 2018-01-17 16:11         ` Bart Van Assche
  -1 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-01-17 16:11 UTC (permalink / raw)
  To: jgg-VPRAkNaXOzVWk0Htik3J/w, maxg-VPRAkNaXOzVWk0Htik3J/w,
	hch-jcswGhMUV9g, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

On Wed, 2018-01-17 at 15:52 +0200, Max Gurtovoy wrote:
> +/*
> + * __ib_drain_srq() - Block until all Last WQE Reached event arrives, or
> + *                    timeout expires (best effort).
> + * @qp:               queue pair associated with SRQ to drain
> + *
> + * In order to avoid WQE and data segment leakage, one should destroy
> + * QP associated after performing the following:
> + *  - moving QP to err state
> + *  - wait for the Affiliated Asynchronous Last WQE Reached Event
> + *  - drain the CQ
> + */
> +static void __ib_drain_srq(struct ib_qp *qp)
> +{
> +	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
> +	struct ib_cq *cq;
> +	int ret;
> +
> +	if (!qp->srq) {
> +		WARN_ONCE(1, "QP 0x%p is not associated with SRQ\n", qp);
> +		return;
> +	}
> +
> +	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
> +	if (ret) {
> +		WARN_ONCE(ret, "failed to drain shared recv queue: %d\n", ret);
> +		return;
> +	}
> +
> +	if (ib_srq_has_cq(qp->srq->srq_type)) {
> +		cq = qp->srq->ext.cq;
> +	} else if (qp->recv_cq) {
> +		cq = qp->recv_cq;
> +	} else {
> +		WARN_ONCE(1, "QP 0x%p has no CQ associated with SRQ\n", qp);
> +		return;
> +	}
> +
> +	/*
> +         * ULP should invoke ib_notify_qp on IB_EVENT_QP_LAST_WQE_REACHED
> +         * arrival, otherwise timeout will expire and leakage may occur.
> +         * Use long timeout, for the buggy ULPs/HCAs that don't notify the
> +         * QP nor raising IB_EVENT_QP_LAST_WQE_REACHED event.
> +         */
> +	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
> +		ib_process_cq_direct(cq, -1);
> +}

Hello Max,

It seems weird to me that __ib_drain_srq() does not follow the same approach as
__ib_drain_rq(). Have you considered posting an additional receive work entry
on the SRQ and waiting until the completion for that work entry is signaled?
That would avoid adding a completion to the ib_qp data structure and would
also avoid modifying all ULPs that use SRQs.

Thanks,

Bart.

* Re: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 16:11         ` Bart Van Assche
@ 2018-01-18 10:31             ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-18 10:31 UTC (permalink / raw)
  To: Bart Van Assche, jgg-VPRAkNaXOzVWk0Htik3J/w, hch-jcswGhMUV9g,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, sagi-NQWnxTmZq1alnMjI0IkVqw
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w



On 1/17/2018 6:11 PM, Bart Van Assche wrote:
> On Wed, 2018-01-17 at 15:52 +0200, Max Gurtovoy wrote:
>> +/*
>> + * __ib_drain_srq() - Block until all Last WQE Reached event arrives, or
>> + *                    timeout expires (best effort).
>> + * @qp:               queue pair associated with SRQ to drain
>> + *
>> + * In order to avoid WQE and data segment leakage, one should destroy
>> + * QP associated after performing the following:
>> + *  - moving QP to err state
>> + *  - wait for the Affiliated Asynchronous Last WQE Reached Event
>> + *  - drain the CQ
>> + */
>> +static void __ib_drain_srq(struct ib_qp *qp)
>> +{
>> +	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
>> +	struct ib_cq *cq;
>> +	int ret;
>> +
>> +	if (!qp->srq) {
>> +		WARN_ONCE(1, "QP 0x%p is not associated with SRQ\n", qp);
>> +		return;
>> +	}
>> +
>> +	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
>> +	if (ret) {
>> +		WARN_ONCE(ret, "failed to drain shared recv queue: %d\n", ret);
>> +		return;
>> +	}
>> +
>> +	if (ib_srq_has_cq(qp->srq->srq_type)) {
>> +		cq = qp->srq->ext.cq;
>> +	} else if (qp->recv_cq) {
>> +		cq = qp->recv_cq;
>> +	} else {
>> +		WARN_ONCE(1, "QP 0x%p has no CQ associated with SRQ\n", qp);
>> +		return;
>> +	}
>> +
>> +	/*
>> +         * ULP should invoke ib_notify_qp on IB_EVENT_QP_LAST_WQE_REACHED
>> +         * arrival, otherwise timeout will expire and leakage may occur.
>> +         * Use long timeout, for the buggy ULPs/HCAs that don't notify the
>> +         * QP nor raising IB_EVENT_QP_LAST_WQE_REACHED event.
>> +         */
>> +	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
>> +		ib_process_cq_direct(cq, -1);
>> +}
> 
> Hello Max,

Hello Bart,

> 
> It seems weird to me that __ib_drain_srq() does not follow the same approach as
> __ib_drain_rq(). Have you considered to post an additional receive work entry
> on the SRQ and to wait until the completion for that work entry is signaled?

This approach will never generate a completion: there are no flush
completions for an SRQ. We get a completion (an SRQ recv completion) only if
the WQE is consumed from the wire, and that will not happen in our case.

> That would avoid that a completion has to be added in the ib_qp data structure
> and would also avoid that all ULPs that use SRQs have to be modified.

I'm always open for new suggestions for implementation :)

> 
> Thanks,
> 
> Bart.
> 

-Max.

* Re: [PATCH 2/4] nvmet-rdma: notify QP on Last WQE Reached event arrival
  2018-01-17 15:37         ` Steve Wise
@ 2018-01-18 18:04           ` Bart Van Assche
  -1 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-01-18 18:04 UTC (permalink / raw)
  To: swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, sagi-NQWnxTmZq1alnMjI0IkVqw,
	jgg-VPRAkNaXOzVWk0Htik3J/w, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	maxg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

On Wed, 2018-01-17 at 09:37 -0600, Steve Wise wrote:
> > 
> > In order to avoid resource leakage for QP associated with
> > a Shared Receive Queue (SRQ), notify it on Last WQE Reached
> > event arrival.
> > 
> > Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> > ---
> >  drivers/nvme/target/rdma.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> > index 4991290..99a14a7 100644
> > --- a/drivers/nvme/target/rdma.c
> > +++ b/drivers/nvme/target/rdma.c
> > @@ -1151,6 +1151,9 @@ static void nvmet_rdma_qp_event(struct ib_event *event, void *priv)
> >  	case IB_EVENT_COMM_EST:
> >  		rdma_notify(queue->cm_id, event->event);
> >  		break;
> > +	case IB_EVENT_QP_LAST_WQE_REACHED:
> > +		ib_notify_qp(queue->cm_id->qp, event->event);
> > +		break;
> >  	default:
> >  		pr_err("received IB QP event: %s (%d)\n",
> >  		       ib_event_msg(event->event), event->event);
> 
> I wonder if this could be handled in ib_dispatch_event() for all ULPS?

As far as I can see, all ib_dispatch_event() calls for IB_EVENT_QP_LAST_WQE_REACHED
set element.qp in struct ib_event. So I'd also like to see this proposal
explored further.

Thanks,

Bart.

* Re: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 15:35         ` Steve Wise
@ 2018-01-18 18:06           ` Bart Van Assche
  -1 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2018-01-18 18:06 UTC (permalink / raw)
  To: swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, sagi-NQWnxTmZq1alnMjI0IkVqw,
	jgg-VPRAkNaXOzVWk0Htik3J/w, hch-jcswGhMUV9g,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	maxg-VPRAkNaXOzVWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

On Wed, 2018-01-17 at 09:35 -0600, Steve Wise wrote:
> > +	if (wait_for_completion_timeout(&qp->srq_completion, 10 * HZ) > 0)
> > +		ib_process_cq_direct(cq, -1);
> > +}
> 
> Perhaps a WARN_ONCE is warranted? 

Agreed. I would also like to see a WARN_ONCE() if the timeout expires before
the completion has been signaled.

Bart.

* Re: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-17 13:52     ` Max Gurtovoy
@ 2018-01-24  6:39         ` Sagi Grimberg
  -1 siblings, 0 replies; 26+ messages in thread
From: Sagi Grimberg @ 2018-01-24  6:39 UTC (permalink / raw)
  To: Max Gurtovoy, jgg-VPRAkNaXOzVWk0Htik3J/w,
	dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	bart.vanassche-Sjgp3cTcYWE, hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w

Hi Max,

> diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
> index 7868727..7604450 100644
> --- a/drivers/infiniband/core/verbs.c
> +++ b/drivers/infiniband/core/verbs.c
> @@ -886,8 +886,10 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
>   		if (qp_init_attr->recv_cq)
>   			atomic_inc(&qp_init_attr->recv_cq->usecnt);
>   		qp->srq = qp_init_attr->srq;
> -		if (qp->srq)
> +		if (qp->srq) {
>   			atomic_inc(&qp_init_attr->srq->usecnt);
> +			init_completion(&qp->srq_completion);
> +		}
>   	}

How about initializing the completion in ib_drain_srq and always completing
it on the Last WQE Reached event in the core, instead of relying on ULPs
to call ib_notify_qp?

The simplest way I can think of is to have the core register its own
event handler for that..

* Re: [PATCH 1/4] IB/core: add support for draining Shared receive queues
  2018-01-24  6:39         ` Sagi Grimberg
@ 2018-01-24 10:20             ` Max Gurtovoy
  -1 siblings, 0 replies; 26+ messages in thread
From: Max Gurtovoy @ 2018-01-24 10:20 UTC (permalink / raw)
  To: Sagi Grimberg, jgg-VPRAkNaXOzVWk0Htik3J/w,
	dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
	bart.vanassche-Sjgp3cTcYWE, hch-jcswGhMUV9g
  Cc: danielm-VPRAkNaXOzVWk0Htik3J/w



On 1/24/2018 8:39 AM, Sagi Grimberg wrote:
> Hi Max,

Hi Sagi,

> 
>> diff --git a/drivers/infiniband/core/verbs.c 
>> b/drivers/infiniband/core/verbs.c
>> index 7868727..7604450 100644
>> --- a/drivers/infiniband/core/verbs.c
>> +++ b/drivers/infiniband/core/verbs.c
>> @@ -886,8 +886,10 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
>>           if (qp_init_attr->recv_cq)
>>               atomic_inc(&qp_init_attr->recv_cq->usecnt);
>>           qp->srq = qp_init_attr->srq;
>> -        if (qp->srq)
>> +        if (qp->srq) {
>>               atomic_inc(&qp_init_attr->srq->usecnt);
>> +            init_completion(&qp->srq_completion);
>> +        }
>>       }
> 
> How about initializing the completion at ib_drain_srq and complete it
> always in last wqe reached event in the core instead of relying on ULPs
> to call ib_notify_qp?

I don't think I can. The event can arrive *before* the call to ib_drain_srq.

> 
> The simplest way I can think of is to have the core register its own
> event handler for that..

Do you mean not raising this event to the ULP at all? I can check this (I
don't think we do it for other events we get...).

end of thread, other threads:[~2018-01-24 10:20 UTC | newest]

Thread overview: 26+ messages
-- links below jump to the message on this page --
2018-01-17 13:52 [PATCH 0/4] Last WQE Reached event treatment Max Gurtovoy
2018-01-17 13:52 ` Max Gurtovoy
     [not found] ` <1516197178-26493-1-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2018-01-17 13:52   ` [PATCH 1/4] IB/core: add support for draining Shared receive queues Max Gurtovoy
2018-01-17 13:52     ` Max Gurtovoy
     [not found]     ` <1516197178-26493-2-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2018-01-17 15:35       ` Steve Wise
2018-01-17 15:35         ` Steve Wise
2018-01-18 18:06         ` Bart Van Assche
2018-01-18 18:06           ` Bart Van Assche
2018-01-17 16:11       ` Bart Van Assche
2018-01-17 16:11         ` Bart Van Assche
     [not found]         ` <1516205474.2820.5.camel-Sjgp3cTcYWE@public.gmane.org>
2018-01-18 10:31           ` Max Gurtovoy
2018-01-18 10:31             ` Max Gurtovoy
2018-01-24  6:39       ` Sagi Grimberg
2018-01-24  6:39         ` Sagi Grimberg
     [not found]         ` <11f8c447-065f-ee0c-f88e-ee3a006f8571-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2018-01-24 10:20           ` Max Gurtovoy
2018-01-24 10:20             ` Max Gurtovoy
2018-01-17 13:52   ` [PATCH 2/4] nvmet-rdma: notify QP on Last WQE Reached event arrival Max Gurtovoy
2018-01-17 13:52     ` Max Gurtovoy
     [not found]     ` <1516197178-26493-3-git-send-email-maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
2018-01-17 15:37       ` Steve Wise
2018-01-17 15:37         ` Steve Wise
2018-01-18 18:04         ` Bart Van Assche
2018-01-18 18:04           ` Bart Van Assche
2018-01-17 13:52   ` [PATCH 3/4] iser-target: remove dead code Max Gurtovoy
2018-01-17 13:52     ` Max Gurtovoy
2018-01-17 13:52   ` [PATCH 4/4] IB/srpt: notify QP on Last WQE Reached event arrival Max Gurtovoy
2018-01-17 13:52     ` Max Gurtovoy
