* [PATCH v2 for-next 0/2] Fix crash due to sleepy mutex while holding lock in post_{send|recv|poll}
@ 2019-11-12 12:52 Yixian Liu
  2019-11-12 12:52 ` [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler Yixian Liu
  2019-11-12 12:52 ` [PATCH v2 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue Yixian Liu
  0 siblings, 2 replies; 11+ messages in thread
From: Yixian Liu @ 2019-11-12 12:52 UTC (permalink / raw)
  To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm

Background:
HiP08 RoCE hardware lacks the ability (a known hardware problem) to flush
outstanding WQEs if the QP enters the error state for some reason.
To overcome this hardware problem, as a workaround, the driver needs to
perform the flush itself whenever it detects that the QP is in the error
state in the various data-path legs such as post send and post receive [1].

These data-path legs may be called concurrently from different contexts,
both thread and interrupt (for example, by the NVMe driver), and are
therefore protected with spinlocks. This code already exists in the
driver.

Problem:
Earlier The patch[1] sent to solve the hardware limitation explained
in the background section had a bug in the software flushing leg. It
acquired mutex while modifying QP state to errored state and while
conveying it to the hardware using the mailbox. This caused leg to
sleep while holding spin-lock and caused crash.
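For illustration only (the function name below is made up and the code is
a condensation, not the driver's actual code; the real leg is the hunk
removed in patch 2):

/*
 * Condensed illustration of the bug.  The post-send leg runs under a
 * spinlock (and possibly in IRQ context), but the old flush path called
 * hns_roce_v2_modify_qp(), which goes through the mailbox and takes a
 * mutex, i.e. it may sleep.
 */
static int buggy_post_send_leg(struct ib_qp *ibqp)
{
	struct hns_roce_qp *qp = to_hr_qp(ibqp);
	struct ib_qp_attr attr;
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&qp->sq.lock, flags);		/* atomic context */

	if (qp->state == IB_QPS_ERR) {
		attr.qp_state = IB_QPS_ERR;
		/* mailbox under a mutex: sleeping with a spinlock held */
		ret = hns_roce_v2_modify_qp(ibqp, &attr, IB_QP_STATE,
					    qp->state, IB_QPS_ERR);
	}

	spin_unlock_irqrestore(&qp->sq.lock, flags);
	return ret;
}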

Suggested solution:
In this patch set, we propose to defer the flushing of a QP in the error
state to a workqueue.

We understand that this might have an impact on recovery times, since
scheduling of the workqueue handler depends on the occupancy of the
system. To roughly mitigate this effect, we use a concurrency-managed
workqueue to give the worker thread (and hence the handler) a chance to
run on more than one core; a short sketch of this choice follows.
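The sketch below mirrors the hunk in patch 1, with error handling
simplified; it is illustrative rather than the full change:

/*
 * The previous create_singlethread_workqueue() executes at most one
 * work item at a time, so a handler sleeping in the mailbox path would
 * block every other queued item.  A bound, concurrency-managed
 * workqueue lets the per-CPU worker pool spawn an extra worker when an
 * item blocks, and WQ_MEM_RECLAIM keeps a rescuer thread available
 * under memory pressure.
 */
hr_dev->irq_workq = alloc_workqueue("hns_roce_irq_workqueue",
				    WQ_MEM_RECLAIM, 0);
if (!hr_dev->irq_workq)
	return -ENOMEM;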


[1] https://patchwork.kernel.org/patch/10534271/


This patch set consists of:
[Patch 001] Introduce the workqueue-based WQE flush handler
[Patch 002] Call the WQE flush handler in post_{send|recv|poll}

v2 changes:
1. Remove the newly created workqueue, per Jason's comment.
2. Remove the dynamic allocation of flush_work, per Jason's comment.
3. Change the current single-threaded irq workqueue to a
   concurrency-managed workqueue so that the work is not blocked.

Yixian Liu (2):
  RDMA/hns: Add the workqueue framework for flush cqe handler
  RDMA/hns: Delayed flush cqe process with workqueue

 drivers/infiniband/hw/hns/hns_roce_device.h |  3 +
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 88 +++++++++++++----------------
 drivers/infiniband/hw/hns/hns_roce_qp.c     | 33 +++++++++++
 3 files changed, 76 insertions(+), 48 deletions(-)

-- 
2.7.4



* [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-12 12:52 [PATCH v2 for-next 0/2] Fix crash due to sleepy mutex while holding lock in post_{send|recv|poll} Yixian Liu
@ 2019-11-12 12:52 ` Yixian Liu
  2019-11-15 21:06   ` Jason Gunthorpe
  2019-11-12 12:52 ` [PATCH v2 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue Yixian Liu
  1 sibling, 1 reply; 11+ messages in thread
From: Yixian Liu @ 2019-11-12 12:52 UTC (permalink / raw)
  To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm

HiP08 RoCE hardware lacks the ability (a known hardware problem) to flush
outstanding WQEs if the QP enters the error state for some reason.
To overcome this hardware problem, as a workaround, the driver needs to
perform the flush itself when it detects that the QP is in the error
state in the various data-path legs such as post send and post receive [1].

The earlier patch [1], sent to address the hardware limitation explained
in the cover letter, had a bug in the software flushing leg: it acquired
a mutex while modifying the QP state to error and while conveying it to
the hardware using the mailbox. This caused the leg to sleep while
holding a spinlock and caused a crash.

Suggested solution:
Defer the flushing of a QP in the error state to a workqueue to work
around the hardware limitation.

This patch adds the workqueue framework and the flush handler
function.

[1] https://patchwork.kernel.org/patch/10534271/

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Reviewed-by: Salil Mehta <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  3 +++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  4 ++--
 drivers/infiniband/hw/hns/hns_roce_qp.c     | 33 +++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index a1b712e..42d8a5a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -906,6 +906,7 @@ struct hns_roce_caps {
 struct hns_roce_work {
 	struct hns_roce_dev *hr_dev;
 	struct work_struct work;
+	struct hns_roce_qp *hr_qp;
 	u32 qpn;
 	u32 cqn;
 	int event_type;
@@ -1034,6 +1035,7 @@ struct hns_roce_dev {
 	const struct hns_roce_hw *hw;
 	void			*priv;
 	struct workqueue_struct *irq_workq;
+	struct hns_roce_work flush_work;
 	const struct hns_roce_dfx_hw *dfx;
 };
 
@@ -1226,6 +1228,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd,
 				 struct ib_udata *udata);
 int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 		       int attr_mask, struct ib_udata *udata);
+void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp);
 void *get_recv_wqe(struct hns_roce_qp *hr_qp, int n);
 void *get_send_wqe(struct hns_roce_qp *hr_qp, int n);
 void *get_send_extend_sge(struct hns_roce_qp *hr_qp, int n);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 907c951..ec48e7e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5967,8 +5967,8 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
 		goto err_request_irq_fail;
 	}
 
-	hr_dev->irq_workq =
-		create_singlethread_workqueue("hns_roce_irq_workqueue");
+	hr_dev->irq_workq = alloc_workqueue("hns_roce_irq_workqueue",
+					    WQ_MEM_RECLAIM, 0);
 	if (!hr_dev->irq_workq) {
 		dev_err(dev, "Create irq workqueue failed!\n");
 		ret = -ENOMEM;
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 9442f01..0111f2e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -43,6 +43,39 @@
 
 #define SQP_NUM				(2 * HNS_ROCE_MAX_PORTS)
 
+static void flush_work_handle(struct work_struct *work)
+{
+	struct hns_roce_work *flush_work = container_of(work,
+					struct hns_roce_work, work);
+	struct hns_roce_qp *hr_qp = flush_work->hr_qp;
+	struct device *dev = flush_work->hr_dev->dev;
+	struct ib_qp_attr attr;
+	int attr_mask;
+	int ret;
+
+	attr_mask = IB_QP_STATE;
+	attr.qp_state = IB_QPS_ERR;
+
+	ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL);
+	if (ret)
+		dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n",
+			ret);
+
+	if (atomic_dec_and_test(&hr_qp->refcount))
+		complete(&hr_qp->free);
+}
+
+void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+{
+	struct hns_roce_work *flush_work = &hr_dev->flush_work;
+
+	flush_work->hr_dev = hr_dev;
+	flush_work->hr_qp = hr_qp;
+	INIT_WORK(&flush_work->work, flush_work_handle);
+	atomic_inc(&hr_qp->refcount);
+	queue_work(hr_dev->irq_workq, &flush_work->work);
+}
+
 void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
 {
 	struct device *dev = hr_dev->dev;
-- 
2.7.4



* [PATCH v2 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue
  2019-11-12 12:52 [PATCH v2 for-next 0/2] Fix crash due to sleepy mutex while holding lock in post_{send|recv|poll} Yixian Liu
  2019-11-12 12:52 ` [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler Yixian Liu
@ 2019-11-12 12:52 ` Yixian Liu
  1 sibling, 0 replies; 11+ messages in thread
From: Yixian Liu @ 2019-11-12 12:52 UTC (permalink / raw)
  To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm

HiP08 RoCE hardware lacks the ability (a known hardware problem) to flush
outstanding WQEs if the QP enters the error state for some reason.
To overcome this hardware problem, as a workaround, the driver needs to
perform the flush itself when it detects that the QP is in the error
state in the various data-path legs such as post send and post receive [1].

The earlier patch [1], sent to address the hardware limitation explained
in the cover letter, had a bug in the software flushing leg: it acquired
a mutex while modifying the QP state to error and while conveying it to
the hardware using the mailbox. This caused the leg to sleep while
holding a spinlock and caused a crash.

Suggested solution:
Defer the flushing of a QP in the error state to a workqueue to work
around the hardware limitation.

This patch adds the calls to the flush handler at the places in the
code, such as post_send and post_recv, where the QP is found to be in
the error state.

[1] https://patchwork.kernel.org/patch/10534271/

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Reviewed-by: Salil Mehta <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 84 ++++++++++++++----------------
 1 file changed, 38 insertions(+), 46 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index ec48e7e..bf8a710 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -221,11 +221,6 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 	return 0;
 }
 
-static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
-				 const struct ib_qp_attr *attr,
-				 int attr_mask, enum ib_qp_state cur_state,
-				 enum ib_qp_state new_state);
-
 static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 				 const struct ib_send_wr *wr,
 				 const struct ib_send_wr **bad_wr)
@@ -238,14 +233,12 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 	struct hns_roce_wqe_frmr_seg *fseg;
 	struct device *dev = hr_dev->dev;
 	struct hns_roce_v2_db sq_db;
-	struct ib_qp_attr attr;
 	unsigned int sge_ind;
 	unsigned int owner_bit;
 	unsigned long flags;
 	unsigned int ind;
 	void *wqe = NULL;
 	bool loopback;
-	int attr_mask;
 	u32 tmp_len;
 	int ret = 0;
 	u32 hr_op;
@@ -591,18 +584,17 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 		qp->sq_next_wqe = ind;
 		qp->next_sge = sge_ind;
 
-		if (qp->state == IB_QPS_ERR) {
-			attr_mask = IB_QP_STATE;
-			attr.qp_state = IB_QPS_ERR;
-
-			ret = hns_roce_v2_modify_qp(&qp->ibqp, &attr, attr_mask,
-						    qp->state, IB_QPS_ERR);
-			if (ret) {
-				spin_unlock_irqrestore(&qp->sq.lock, flags);
-				*bad_wr = wr;
-				return ret;
-			}
-		}
+		/*
+		 * Hip08 hardware cannot flush the WQEs in SQ if the QP state
+		 * gets into errored mode. Hence, as a workaround to this
+		 * hardware limitation, driver needs to assist in flushing. But
+		 * the flushing operation uses mailbox to convey the QP state to
+		 * the hardware and which can sleep due to the mutex protection
+		 * around the mailbox calls. Hence, use the deferred flush for
+		 * now.
+		 */
+		if (qp->state == IB_QPS_ERR)
+			init_flush_work(hr_dev, qp);
 	}
 
 	spin_unlock_irqrestore(&qp->sq.lock, flags);
@@ -619,10 +611,8 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
 	struct hns_roce_v2_wqe_data_seg *dseg;
 	struct hns_roce_rinl_sge *sge_list;
 	struct device *dev = hr_dev->dev;
-	struct ib_qp_attr attr;
 	unsigned long flags;
 	void *wqe = NULL;
-	int attr_mask;
 	int ret = 0;
 	int nreq;
 	int ind;
@@ -692,19 +682,17 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
 
 		*hr_qp->rdb.db_record = hr_qp->rq.head & 0xffff;
 
-		if (hr_qp->state == IB_QPS_ERR) {
-			attr_mask = IB_QP_STATE;
-			attr.qp_state = IB_QPS_ERR;
-
-			ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, &attr,
-						    attr_mask, hr_qp->state,
-						    IB_QPS_ERR);
-			if (ret) {
-				spin_unlock_irqrestore(&hr_qp->rq.lock, flags);
-				*bad_wr = wr;
-				return ret;
-			}
-		}
+		/*
+		 * Hip08 hardware cannot flush the WQEs in RQ if the QP state
+		 * gets into errored mode. Hence, as a workaround to this
+		 * hardware limitation, driver needs to assist in flushing. But
+		 * the flushing operation uses mailbox to convey the QP state to
+		 * the hardware and which can sleep due to the mutex protection
+		 * around the mailbox calls. Hence, use the deferred flush for
+		 * now.
+		 */
+		if (hr_qp->state == IB_QPS_ERR)
+			init_flush_work(hr_dev, hr_qp);
 	}
 	spin_unlock_irqrestore(&hr_qp->rq.lock, flags);
 
@@ -2691,13 +2679,11 @@ static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe,
 static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
 				struct hns_roce_qp **cur_qp, struct ib_wc *wc)
 {
+	struct hns_roce_dev *hr_dev = to_hr_dev(hr_cq->ib_cq.device);
 	struct hns_roce_srq *srq = NULL;
-	struct hns_roce_dev *hr_dev;
 	struct hns_roce_v2_cqe *cqe;
 	struct hns_roce_qp *hr_qp;
 	struct hns_roce_wq *wq;
-	struct ib_qp_attr attr;
-	int attr_mask;
 	int is_send;
 	u16 wqe_ctr;
 	u32 opcode;
@@ -2721,7 +2707,6 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
 				V2_CQE_BYTE_16_LCL_QPN_S);
 
 	if (!*cur_qp || (qpn & HNS_ROCE_V2_CQE_QPN_MASK) != (*cur_qp)->qpn) {
-		hr_dev = to_hr_dev(hr_cq->ib_cq.device);
 		hr_qp = __hns_roce_qp_lookup(hr_dev, qpn);
 		if (unlikely(!hr_qp)) {
 			dev_err(hr_dev->dev, "CQ %06lx with entry for unknown QPN %06x\n",
@@ -2815,14 +2800,21 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
 		break;
 	}
 
-	/* flush cqe if wc status is error, excluding flush error */
-	if ((wc->status != IB_WC_SUCCESS) &&
-	    (wc->status != IB_WC_WR_FLUSH_ERR)) {
-		attr_mask = IB_QP_STATE;
-		attr.qp_state = IB_QPS_ERR;
-		return hns_roce_v2_modify_qp(&(*cur_qp)->ibqp,
-					     &attr, attr_mask,
-					     (*cur_qp)->state, IB_QPS_ERR);
+	/*
+	 * Hip08 hardware cannot flush the WQEs in SQ/RQ if the QP state gets
+	 * into errored mode. Hence, as a workaround to this hardware
+	 * limitation, driver needs to assist in flushing. But the flushing
+	 * operation uses mailbox to convey the QP state to the hardware and
+	 * which can sleep due to the mutex protection around the mailbox calls.
+	 * Hence, use the deferred flush for now. Once wc error detected, the
+	 * flushing operation is needed.
+	 */
+	if (wc->status != IB_WC_SUCCESS &&
+	    wc->status != IB_WC_WR_FLUSH_ERR) {
+		dev_err(hr_dev->dev, "error cqe status is: 0x%x\n",
+			status & HNS_ROCE_V2_CQE_STATUS_MASK);
+		init_flush_work(hr_dev, *cur_qp);
+		return 0;
 	}
 
 	if (wc->status == IB_WC_WR_FLUSH_ERR)
-- 
2.7.4



* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-12 12:52 ` [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler Yixian Liu
@ 2019-11-15 21:06   ` Jason Gunthorpe
  2019-11-18 13:50     ` Liuyixian (Eason)
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-11-15 21:06 UTC (permalink / raw)
  To: Yixian Liu; +Cc: dledford, leon, linux-rdma, linuxarm

On Tue, Nov 12, 2019 at 08:52:03PM +0800, Yixian Liu wrote:
> HiP08 RoCE hardware lacks ability(a known hardware problem) to flush
> outstanding WQEs if QP state gets into errored mode for some reason.
> To overcome this hardware problem and as a workaround, when QP is
> detected to be in errored state during various legs like post send,
> post receive etc [1], flush needs to be performed from the driver.
> 
> The earlier patch[1] sent to solve the hardware limitation explained
> in the cover-letter had a bug in the software flushing leg. It
> acquired mutex while modifying QP state to errored state and while
> conveying it to the hardware using the mailbox. This caused leg to
> sleep while holding spin-lock and caused crash.
> 
> Suggested Solution:
> we have proposed to defer the flushing of the QP in the Errored state
> using the workqueue to get around with the limitation of our hardware.
> 
> This patch adds the framework of the workqueue and the flush handler
> function.
> 
> [1] https://patchwork.kernel.org/patch/10534271/
> 
> Signed-off-by: Yixian Liu <liuyixian@huawei.com>
> Reviewed-by: Salil Mehta <salil.mehta@huawei.com>
>  drivers/infiniband/hw/hns/hns_roce_device.h |  3 +++
>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  4 ++--
>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 33 +++++++++++++++++++++++++++++
>  3 files changed, 38 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
> index a1b712e..42d8a5a 100644
> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
> @@ -906,6 +906,7 @@ struct hns_roce_caps {
>  struct hns_roce_work {
>  	struct hns_roce_dev *hr_dev;
>  	struct work_struct work;
> +	struct hns_roce_qp *hr_qp;
>  	u32 qpn;
>  	u32 cqn;
>  	int event_type;
> @@ -1034,6 +1035,7 @@ struct hns_roce_dev {
>  	const struct hns_roce_hw *hw;
>  	void			*priv;
>  	struct workqueue_struct *irq_workq;
> +	struct hns_roce_work flush_work;
>  	const struct hns_roce_dfx_hw *dfx;
>  };
>  
> @@ -1226,6 +1228,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd,
>  				 struct ib_udata *udata);
>  int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>  		       int attr_mask, struct ib_udata *udata);
> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp);
>  void *get_recv_wqe(struct hns_roce_qp *hr_qp, int n);
>  void *get_send_wqe(struct hns_roce_qp *hr_qp, int n);
>  void *get_send_extend_sge(struct hns_roce_qp *hr_qp, int n);
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> index 907c951..ec48e7e 100644
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> @@ -5967,8 +5967,8 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
>  		goto err_request_irq_fail;
>  	}
>  
> -	hr_dev->irq_workq =
> -		create_singlethread_workqueue("hns_roce_irq_workqueue");
> +	hr_dev->irq_workq = alloc_workqueue("hns_roce_irq_workqueue",
> +					    WQ_MEM_RECLAIM, 0);
>  	if (!hr_dev->irq_workq) {
>  		dev_err(dev, "Create irq workqueue failed!\n");
>  		ret = -ENOMEM;
> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
> index 9442f01..0111f2e 100644
> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
> @@ -43,6 +43,39 @@
>  
>  #define SQP_NUM				(2 * HNS_ROCE_MAX_PORTS)
>  
> +static void flush_work_handle(struct work_struct *work)
> +{
> +	struct hns_roce_work *flush_work = container_of(work,
> +					struct hns_roce_work, work);
> +	struct hns_roce_qp *hr_qp = flush_work->hr_qp;
> +	struct device *dev = flush_work->hr_dev->dev;
> +	struct ib_qp_attr attr;
> +	int attr_mask;
> +	int ret;
> +
> +	attr_mask = IB_QP_STATE;
> +	attr.qp_state = IB_QPS_ERR;
> +
> +	ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL);
> +	if (ret)
> +		dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n",
> +			ret);
> +
> +	if (atomic_dec_and_test(&hr_qp->refcount))
> +		complete(&hr_qp->free);
> +}
> +
> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
> +{
> +	struct hns_roce_work *flush_work = &hr_dev->flush_work;
> +
> +	flush_work->hr_dev = hr_dev;
> +	flush_work->hr_qp = hr_qp;
> +	INIT_WORK(&flush_work->work, flush_work_handle);
> +	atomic_inc(&hr_qp->refcount);
> +	queue_work(hr_dev->irq_workq, &flush_work->work);

It kind of looks like this can be called multiple times? It won't work
right unless it is called exactly once

Jason


* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-15 21:06   ` Jason Gunthorpe
@ 2019-11-18 13:50     ` Liuyixian (Eason)
  2019-11-18 17:02       ` Jason Gunthorpe
  0 siblings, 1 reply; 11+ messages in thread
From: Liuyixian (Eason) @ 2019-11-18 13:50 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm



On 2019/11/16 5:06, Jason Gunthorpe wrote:
> On Tue, Nov 12, 2019 at 08:52:03PM +0800, Yixian Liu wrote:
>> HiP08 RoCE hardware lacks ability(a known hardware problem) to flush
>> outstanding WQEs if QP state gets into errored mode for some reason.
>> To overcome this hardware problem and as a workaround, when QP is
>> detected to be in errored state during various legs like post send,
>> post receive etc [1], flush needs to be performed from the driver.
>>
>> The earlier patch[1] sent to solve the hardware limitation explained
>> in the cover-letter had a bug in the software flushing leg. It
>> acquired mutex while modifying QP state to errored state and while
>> conveying it to the hardware using the mailbox. This caused leg to
>> sleep while holding spin-lock and caused crash.
>>
>> Suggested Solution:
>> we have proposed to defer the flushing of the QP in the Errored state
>> using the workqueue to get around with the limitation of our hardware.
>>
>> This patch adds the framework of the workqueue and the flush handler
>> function.
>>
>> [1] https://patchwork.kernel.org/patch/10534271/
>>
>> Signed-off-by: Yixian Liu <liuyixian@huawei.com>
>> Reviewed-by: Salil Mehta <salil.mehta@huawei.com>
>>  drivers/infiniband/hw/hns/hns_roce_device.h |  3 +++
>>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  4 ++--
>>  drivers/infiniband/hw/hns/hns_roce_qp.c     | 33 +++++++++++++++++++++++++++++
>>  3 files changed, 38 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
>> index a1b712e..42d8a5a 100644
>> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
>> @@ -906,6 +906,7 @@ struct hns_roce_caps {
>>  struct hns_roce_work {
>>  	struct hns_roce_dev *hr_dev;
>>  	struct work_struct work;
>> +	struct hns_roce_qp *hr_qp;
>>  	u32 qpn;
>>  	u32 cqn;
>>  	int event_type;
>> @@ -1034,6 +1035,7 @@ struct hns_roce_dev {
>>  	const struct hns_roce_hw *hw;
>>  	void			*priv;
>>  	struct workqueue_struct *irq_workq;
>> +	struct hns_roce_work flush_work;
>>  	const struct hns_roce_dfx_hw *dfx;
>>  };
>>  
>> @@ -1226,6 +1228,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd,
>>  				 struct ib_udata *udata);
>>  int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>>  		       int attr_mask, struct ib_udata *udata);
>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp);
>>  void *get_recv_wqe(struct hns_roce_qp *hr_qp, int n);
>>  void *get_send_wqe(struct hns_roce_qp *hr_qp, int n);
>>  void *get_send_extend_sge(struct hns_roce_qp *hr_qp, int n);
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
>> index 907c951..ec48e7e 100644
>> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
>> @@ -5967,8 +5967,8 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
>>  		goto err_request_irq_fail;
>>  	}
>>  
>> -	hr_dev->irq_workq =
>> -		create_singlethread_workqueue("hns_roce_irq_workqueue");
>> +	hr_dev->irq_workq = alloc_workqueue("hns_roce_irq_workqueue",
>> +					    WQ_MEM_RECLAIM, 0);
>>  	if (!hr_dev->irq_workq) {
>>  		dev_err(dev, "Create irq workqueue failed!\n");
>>  		ret = -ENOMEM;
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> index 9442f01..0111f2e 100644
>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> @@ -43,6 +43,39 @@
>>  
>>  #define SQP_NUM				(2 * HNS_ROCE_MAX_PORTS)
>>  
>> +static void flush_work_handle(struct work_struct *work)
>> +{
>> +	struct hns_roce_work *flush_work = container_of(work,
>> +					struct hns_roce_work, work);
>> +	struct hns_roce_qp *hr_qp = flush_work->hr_qp;
>> +	struct device *dev = flush_work->hr_dev->dev;
>> +	struct ib_qp_attr attr;
>> +	int attr_mask;
>> +	int ret;
>> +
>> +	attr_mask = IB_QP_STATE;
>> +	attr.qp_state = IB_QPS_ERR;
>> +
>> +	ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL);
>> +	if (ret)
>> +		dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n",
>> +			ret);
>> +
>> +	if (atomic_dec_and_test(&hr_qp->refcount))
>> +		complete(&hr_qp->free);
>> +}
>> +
>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
>> +{
>> +	struct hns_roce_work *flush_work = &hr_dev->flush_work;
>> +
>> +	flush_work->hr_dev = hr_dev;
>> +	flush_work->hr_qp = hr_qp;
>> +	INIT_WORK(&flush_work->work, flush_work_handle);
>> +	atomic_inc(&hr_qp->refcount);
>> +	queue_work(hr_dev->irq_workq, &flush_work->work);
> 
> It kind of looks like this can be called multiple times? It won't work
> right unless it is called exactly once
> 
> Jason

Yes, you are right.

So I think the reasonable solution is to allocate it dynamically. The
chance that the allocation fails is very small, and if that happens, the
application will need to stop anyway.

So I will fall back to v1 for this part in the next version:

	flush_work = kzalloc(sizeof(struct hns_roce_flush_work), GFP_ATOMIC);
	if (!flush_work)
		return;

Or, could you give me some advice for it?
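For reference, a hedged sketch of roughly what that per-call allocation
could look like (the struct and function bodies below are illustrative,
not final driver code; each caller gets its own work item and the
handler frees it, so concurrent flush requests no longer share one
embedded work_struct):

struct hns_roce_flush_work {
	struct work_struct work;
	struct hns_roce_dev *hr_dev;
	struct hns_roce_qp *hr_qp;
};

static void flush_work_handle(struct work_struct *work)
{
	struct hns_roce_flush_work *fw =
		container_of(work, struct hns_roce_flush_work, work);
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
	int ret;

	ret = hns_roce_modify_qp(&fw->hr_qp->ibqp, &attr, IB_QP_STATE, NULL);
	if (ret)
		dev_err(fw->hr_dev->dev,
			"Modify QP to error state failed(%d) during CQE flush\n",
			ret);

	if (atomic_dec_and_test(&fw->hr_qp->refcount))
		complete(&fw->hr_qp->free);

	kfree(fw);
}

void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
{
	struct hns_roce_flush_work *fw;

	fw = kzalloc(sizeof(*fw), GFP_ATOMIC);	/* callers may be atomic */
	if (!fw)
		return;

	fw->hr_dev = hr_dev;
	fw->hr_qp = hr_qp;
	INIT_WORK(&fw->work, flush_work_handle);
	atomic_inc(&hr_qp->refcount);
	queue_work(hr_dev->irq_workq, &fw->work);
}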

Thanks.

> 
> .
> 



* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-18 13:50     ` Liuyixian (Eason)
@ 2019-11-18 17:02       ` Jason Gunthorpe
  2019-11-19  8:00         ` Liuyixian (Eason)
  0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-11-18 17:02 UTC (permalink / raw)
  To: Liuyixian (Eason); +Cc: dledford, leon, linux-rdma, linuxarm

On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
> > It kind of looks like this can be called multiple times? It won't work
> > right unless it is called exactly once
> > 
> > Jason
> 
> Yes, you are right.
> 
> So I think the reasonable solution is to allocate it dynamically, and I think
> it is a very very little chance that the allocation will be failed. If this happened,
> I think the application also needs to be over.

Why do you need more than one work in parallel for this? Once you
start to move the HW to error that only has to happen once, surely?

Jason


* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-18 17:02       ` Jason Gunthorpe
@ 2019-11-19  8:00         ` Liuyixian (Eason)
  2019-11-19  9:43           ` Zengtao (B)
  2019-11-19 18:46           ` Jason Gunthorpe
  0 siblings, 2 replies; 11+ messages in thread
From: Liuyixian (Eason) @ 2019-11-19  8:00 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm



On 2019/11/19 1:02, Jason Gunthorpe wrote:
> On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
>>> It kind of looks like this can be called multiple times? It won't work
>>> right unless it is called exactly once
>>>
>>> Jason
>>
>> Yes, you are right.
>>
>> So I think the reasonable solution is to allocate it dynamically, and I think
>> it is a very very little chance that the allocation will be failed. If this happened,
>> I think the application also needs to be over.
> 
> Why do you need more than one work in parallel for this? Once you
> start to move the HW to error that only has to happen once, surely?
> 
> Jason
> 
The flush operation moves the QP, not the HW, to the error state.

For a given QP, process A may be posting a send while another process B
is modifying the QP to error; both of these operations need to initiate
a flush work. That's why it can be called multiple times.

Furthermore, according to the IB specification (section 9.9.2.4.2),
posting WRs to the QP may not stop even after it has transitioned to the
error state. That's also why it needs to be called multiple times.

Once the work has run successfully, we free it, so it is not a big
problem to allocate it dynamically again and again.

Thanks.





* RE: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-19  8:00         ` Liuyixian (Eason)
@ 2019-11-19  9:43           ` Zengtao (B)
  2019-11-19 13:09             ` Liuyixian (Eason)
  2019-11-19 18:46           ` Jason Gunthorpe
  1 sibling, 1 reply; 11+ messages in thread
From: Zengtao (B) @ 2019-11-19  9:43 UTC (permalink / raw)
  To: Liuyixian (Eason), Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, Linuxarm

> -----Original Message-----
> From: linux-rdma-owner@vger.kernel.org
> [mailto:linux-rdma-owner@vger.kernel.org] On Behalf Of Liuyixian (Eason)
> Sent: Tuesday, November 19, 2019 4:00 PM
> To: Jason Gunthorpe
> Cc: dledford@redhat.com; leon@kernel.org; linux-rdma@vger.kernel.org;
> Linuxarm
> Subject: Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue
> framework for flush cqe handler
> 
> 
> 
> On 2019/11/19 1:02, Jason Gunthorpe wrote:
> > On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
> >>> It kind of looks like this can be called multiple times? It won't work
> >>> right unless it is called exactly once
> >>>
> >>> Jason
> >>
> >> Yes, you are right.
> >>
> >> So I think the reasonable solution is to allocate it dynamically, and I think
> >> it is a very very little chance that the allocation will be failed. If this
> happened,
> >> I think the application also needs to be over.
> >
> > Why do you need more than one work in parallel for this? Once you
> > start to move the HW to error that only has to happen once, surely?
> >
> > Jason
> >
> The flush operation moves QP, not the HW to error.
> 
> For the QP, maybe the process A is posting send while the other
> process B is modifying qp to error, both of these two operation
> needs to initialize one flush work. That's why it could be called
> multiple times.
> 
> Furthermore, according to IB protocol 9.9.2.4.2, it may can't stop
> posting wr into the qp even it already transitions to error state.
> That's why it also needs to be called multiple times.
> 
> Once the work is implemented successfully, we will free the work,
> it is not a big problem to allocate it dynamically again and again.
> 

So can I take it that this function is designed to be reentrant?
If so, I suggest introducing a per-dev/QP lock to protect it.

> Thanks.
> 
> 



* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-19  9:43           ` Zengtao (B)
@ 2019-11-19 13:09             ` Liuyixian (Eason)
  0 siblings, 0 replies; 11+ messages in thread
From: Liuyixian (Eason) @ 2019-11-19 13:09 UTC (permalink / raw)
  To: Zengtao (B), Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, Linuxarm



On 2019/11/19 17:43, Zengtao (B) wrote:
>> -----Original Message-----
>> From: linux-rdma-owner@vger.kernel.org
>> [mailto:linux-rdma-owner@vger.kernel.org] On Behalf Of Liuyixian (Eason)
>> Sent: Tuesday, November 19, 2019 4:00 PM
>> To: Jason Gunthorpe
>> Cc: dledford@redhat.com; leon@kernel.org; linux-rdma@vger.kernel.org;
>> Linuxarm
>> Subject: Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue
>> framework for flush cqe handler
>>
>>
>>
>> On 2019/11/19 1:02, Jason Gunthorpe wrote:
>>> On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
>>>>> It kind of looks like this can be called multiple times? It won't work
>>>>> right unless it is called exactly once
>>>>>
>>>>> Jason
>>>>
>>>> Yes, you are right.
>>>>
>>>> So I think the reasonable solution is to allocate it dynamically, and I think
>>>> it is a very very little chance that the allocation will be failed. If this
>> happened,
>>>> I think the application also needs to be over.
>>>
>>> Why do you need more than one work in parallel for this? Once you
>>> start to move the HW to error that only has to happen once, surely?
>>>
>>> Jason
>>>
>> The flush operation moves QP, not the HW to error.
>>
>> For the QP, maybe the process A is posting send while the other
>> process B is modifying qp to error, both of these two operation
>> needs to initialize one flush work. That's why it could be called
>> multiple times.
>>
>> Furthermore, according to IB protocol 9.9.2.4.2, it may can't stop
>> posting wr into the qp even it already transitions to error state.
>> That's why it also needs to be called multiple times.
>>
>> Once the work is implemented successfully, we will free the work,
>> it is not a big problem to allocate it dynamically again and again.
>>
> 
> So can i understand that this function is designed to be reentrant?
> If so, I suggest to introduce a per dev/qp lock to protect.

Yes, currently we use the following per-QP spinlock to protect the
verbs, which can be reentrant:

	spin_lock_irqsave(&qp->sq.lock, flags);

> 
>> Thanks.
>>
>>
> 



* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-19  8:00         ` Liuyixian (Eason)
  2019-11-19  9:43           ` Zengtao (B)
@ 2019-11-19 18:46           ` Jason Gunthorpe
  2019-11-20 11:00             ` Liuyixian (Eason)
  1 sibling, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2019-11-19 18:46 UTC (permalink / raw)
  To: Liuyixian (Eason); +Cc: dledford, leon, linux-rdma, linuxarm

On Tue, Nov 19, 2019 at 04:00:00PM +0800, Liuyixian (Eason) wrote:
> 
> 
> On 2019/11/19 1:02, Jason Gunthorpe wrote:
> > On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
> >>> It kind of looks like this can be called multiple times? It won't work
> >>> right unless it is called exactly once
> >>>
> >>> Jason
> >>
> >> Yes, you are right.
> >>
> >> So I think the reasonable solution is to allocate it dynamically, and I think
> >> it is a very very little chance that the allocation will be failed. If this happened,
> >> I think the application also needs to be over.
> > 
> > Why do you need more than one work in parallel for this? Once you
> > start to move the HW to error that only has to happen once, surely?
> > 
> > Jason
>
> The flush operation moves QP, not the HW to error.
> 
> For the QP, maybe the process A is posting send while the other
> process B is modifying qp to error, both of these two operation
> needs to initialize one flush work. That's why it could be called
> multiple times.

The work function does something that looks like it only has to happen
once per QP.

Or do you need to keep re-queuing this thing every time the user posts
a WR?

Jason


* Re: [PATCH v2 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler
  2019-11-19 18:46           ` Jason Gunthorpe
@ 2019-11-20 11:00             ` Liuyixian (Eason)
  0 siblings, 0 replies; 11+ messages in thread
From: Liuyixian (Eason) @ 2019-11-20 11:00 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm



On 2019/11/20 2:46, Jason Gunthorpe wrote:
> On Tue, Nov 19, 2019 at 04:00:00PM +0800, Liuyixian (Eason) wrote:
>>
>>
>> On 2019/11/19 1:02, Jason Gunthorpe wrote:
>>> On Mon, Nov 18, 2019 at 09:50:24PM +0800, Liuyixian (Eason) wrote:
>>>>> It kind of looks like this can be called multiple times? It won't work
>>>>> right unless it is called exactly once
>>>>>
>>>>> Jason
>>>>
>>>> Yes, you are right.
>>>>
>>>> So I think the reasonable solution is to allocate it dynamically, and I think
>>>> it is a very very little chance that the allocation will be failed. If this happened,
>>>> I think the application also needs to be over.
>>>
>>> Why do you need more than one work in parallel for this? Once you
>>> start to move the HW to error that only has to happen once, surely?
>>>
>>> Jason
>>
>> The flush operation moves QP, not the HW to error.
>>
>> For the QP, maybe the process A is posting send while the other
>> process B is modifying qp to error, both of these two operation
>> needs to initialize one flush work. That's why it could be called
>> multiple times.
> 
> The work function does something that looks like it only has to happen
> once per QP.
No, the work has to be re-queued every time the producer index of the
QP needs to be updated.

> 
> One do you need to keep re-queing this thing every time the user posts
> a WR?

Once a WR is posted, the producer index (PI) of the QP changes; the
updated PI then needs to be delivered to the HW as part of the flush
operation, so that the HW can generate the corresponding CQEs. That's
why modify QP is called inside the flush work: it not only moves the QP
to error but also transfers the PI to the HW.

In short, the flush operation includes two parts (sketched below):
1. Change the state of the QP to error.
2. Deliver the latest PI of the QP to the HW.
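As a hedged sketch (the function name below is made up; in the posted
patches both steps are carried out by the single modify-QP call issued
from the flush work handler):

/*
 * Illustration only: one flush pass, mapped onto the two steps above.
 * The modify-QP call reaches the hardware through the sleepable
 * mailbox, which is why it must run from the workqueue.
 */
static void flush_one_pass(struct hns_roce_qp *hr_qp)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };

	/* 1. move the QP to the Error state, and
	 * 2. convey the latest SQ/RQ producer indices so the HW can
	 *    generate the corresponding flush CQEs up to that point.
	 */
	if (hns_roce_modify_qp(&hr_qp->ibqp, &attr, IB_QP_STATE, NULL))
		pr_err("flush: modify QP to error failed\n");
}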

Thanks.

> 
> Jason
> 
> 


