* [PATCH v7 for-next 0/2] RDMA/hns: Add the workqueue framework for flush cqe handler
From: Yixian Liu @ 2020-01-15  9:49 UTC
To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm

Background: The HiP08 RoCE hardware lacks the ability (a known hardware problem) to
flush outstanding WQEs when a QP enters the error state for some reason. To overcome
this hardware problem, and as a workaround, when a QP is detected to be in the error
state during the various data-path legs such as post send, post receive etc. [1], the
flush needs to be performed from the driver. These data-path legs might get called
concurrently from various contexts, such as thread and interrupt (as in the NVMe
driver); hence they are protected with spin-locks for concurrency. This code exists
within the driver.

Problem: The earlier patch [1] sent to solve the hardware limitation explained in the
background section had a bug in the software flushing leg. It acquired a mutex while
modifying the QP state to the error state and while conveying it to the hardware using
the mailbox. This caused the leg to sleep while holding a spin-lock and caused a crash.

Suggested solution: In this patch set we propose to defer the flushing of a QP in the
error state using a workqueue. We do understand that this might have an impact on the
recovery times, as the scheduling of the workqueue handler depends upon the occupancy
of the system. Therefore, to roughly mitigate this effect, we use a Concurrency Managed
workqueue to give the worker thread (and hence the handler) a chance to run on more
than one core.

[1] https://patchwork.kernel.org/patch/10534271/

This patch set consists of:
[Patch 001] Introduce workqueue based WQE Flush Handler
[Patch 002] Call WQE flush handler in post {send|receive|poll}

v7 changes:
  1. Delete the unnecessary allocation of flush_work and do the flush operation in the
     flush handler function as needed, according to Jason's suggestion.

v6 changes:
  1. Hold the lock when updating or referencing the flag being_push, according to
     Jason's comment, i.e., fix the lock holding in hns_roce_v2_modify_qp and
     hns_roce_v2_poll_one.

v5 changes:
  1. Remove the WQ_MEM_RECLAIM flag according to Leon's suggestion.
  2. Change to an ordered workqueue to meet the requirement of the flush work.

v4 changes:
  1. Add a flag for "PI is being pushed" according to Jason's suggestion, to reduce
     unnecessary work submitted to the workqueue.

v3 changes:
  1. Fall back to dynamically allocating flush_work.

v2 changes:
  1. Remove the newly created workqueue according to Jason's comment.
  2. Remove the dynamic allocation of flush_work according to Jason's comment.
  3. Change the current irq singlethread workqueue to a concurrency-managed workqueue
     to ensure the work is not blocked.

Yixian Liu (2):
  RDMA/hns: Add the workqueue framework for flush cqe handler
  RDMA/hns: Delayed flush cqe process with workqueue

 drivers/infiniband/hw/hns/hns_roce_device.h |   4 ++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 103 ++++++++++++++++------------
 drivers/infiniband/hw/hns/hns_roce_qp.c     |  44 ++++++++++++
 3 files changed, 109 insertions(+), 42 deletions(-)

-- 
2.7.4
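For readers less familiar with this deferral pattern, the core idea reduces to a small
amount of code: the data-path leg only queues a work item (which never sleeps), and the
work handler, running later in process context, performs the sleeping mailbox
operation. The sketch below is purely illustrative; the structure, helper names and QP
type are made up and are not the hns driver's own code.

#include <linux/workqueue.h>

/*
 * Illustrative sketch only: defer a sleeping operation (here, a mailbox-based
 * modify-QP) out of spinlocked post-send/post-receive context by handing it
 * to a workqueue.
 */
struct flush_work_ctx {
	struct work_struct work;
	struct my_qp *qp;			/* hypothetical QP type */
};

static void flush_handler(struct work_struct *work)
{
	struct flush_work_ctx *ctx =
		container_of(work, struct flush_work_ctx, work);

	/* Runs in workqueue (process) context: sleeping is allowed here. */
	my_modify_qp_to_error(ctx->qp);		/* hypothetical helper */
}

static void schedule_flush(struct workqueue_struct *wq,
			   struct flush_work_ctx *ctx)
{
	INIT_WORK(&ctx->work, flush_handler);
	/* queue_work() does not sleep, so it is safe under a spin-lock. */
	queue_work(wq, &ctx->work);
}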
* [PATCH v7 for-next 1/2] RDMA/hns: Add the workqueue framework for flush cqe handler 2020-01-15 9:49 [PATCH v7 for-next 0/2] RDMA/hns: Add the workqueue framework for flush cqe handler Yixian Liu @ 2020-01-15 9:49 ` Yixian Liu 2020-01-15 9:49 ` [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue Yixian Liu 1 sibling, 0 replies; 9+ messages in thread From: Yixian Liu @ 2020-01-15 9:49 UTC (permalink / raw) To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm HiP08 RoCE hardware lacks ability(a known hardware problem) to flush outstanding WQEs if QP state gets into errored mode for some reason. To overcome this hardware problem and as a workaround, when QP is detected to be in errored state during various legs like post send, post receive etc [1], flush needs to be performed from the driver. The earlier patch[1] sent to solve the hardware limitation explained in the cover-letter had a bug in the software flushing leg. It acquired mutex while modifying QP state to errored state and while conveying it to the hardware using the mailbox. This caused leg to sleep while holding spin-lock and caused crash. Suggested Solution: we have proposed to defer the flushing of the QP in the Errored state using the workqueue to get around with the limitation of our hardware. This patch adds the framework of the workqueue and the flush handler function. [1] https://patchwork.kernel.org/patch/10534271/ Signed-off-by: Yixian Liu <liuyixian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> --- drivers/infiniband/hw/hns/hns_roce_device.h | 3 +++ drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 3 +-- drivers/infiniband/hw/hns/hns_roce_qp.c | 37 +++++++++++++++++++++++++++++ 3 files changed, 41 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 5617434..70f0b73 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -900,6 +900,7 @@ struct hns_roce_caps { struct hns_roce_work { struct hns_roce_dev *hr_dev; struct work_struct work; + struct hns_roce_qp *hr_qp; u32 qpn; u32 cqn; int event_type; @@ -1028,6 +1029,7 @@ struct hns_roce_dev { const struct hns_roce_hw *hw; void *priv; struct workqueue_struct *irq_workq; + struct hns_roce_work flush_work; const struct hns_roce_dfx_hw *dfx; }; @@ -1220,6 +1222,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd, struct ib_udata *udata); int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask, struct ib_udata *udata); +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp); void *get_recv_wqe(struct hns_roce_qp *hr_qp, int n); void *get_send_wqe(struct hns_roce_qp *hr_qp, int n); void *get_send_extend_sge(struct hns_roce_qp *hr_qp, int n); diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 1026ac6..2afcedd 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -5966,8 +5966,7 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev) goto err_request_irq_fail; } - hr_dev->irq_workq = - create_singlethread_workqueue("hns_roce_irq_workqueue"); + hr_dev->irq_workq = alloc_ordered_workqueue("hns_roce_irq_workq", 0); if (!hr_dev->irq_workq) { dev_err(dev, "Create irq workqueue failed!\n"); ret = -ENOMEM; diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index a6565b6..fa38582 100644 --- 
a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -43,6 +43,43 @@ #define SQP_NUM (2 * HNS_ROCE_MAX_PORTS) +static void flush_work_handle(struct work_struct *work) +{ + struct hns_roce_work *flush_work = container_of(work, + struct hns_roce_work, work); + struct hns_roce_qp *hr_qp = flush_work->hr_qp; + struct device *dev = flush_work->hr_dev->dev; + struct ib_qp_attr attr; + int attr_mask; + int ret; + + attr_mask = IB_QP_STATE; + attr.qp_state = IB_QPS_ERR; + + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); + if (ret) + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", + ret); + + /* + * make sure we signal QP destroy leg that flush QP was completed + * so that it can safely proceed ahead now and destroy QP + */ + if (atomic_dec_and_test(&hr_qp->refcount)) + complete(&hr_qp->free); +} + +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) +{ + struct hns_roce_work *flush_work = &hr_dev->flush_work; + + flush_work->hr_dev = hr_dev; + flush_work->hr_qp = hr_qp; + INIT_WORK(&flush_work->work, flush_work_handle); + atomic_inc(&hr_qp->refcount); + queue_work(hr_dev->irq_workq, &flush_work->work); +} + void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type) { struct device *dev = hr_dev->dev; -- 2.7.4 ^ permalink raw reply related [flat|nested] 9+ messages in thread
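The atomic_inc() in init_flush_work() and the atomic_dec_and_test()/complete() pair in
the handler follow the driver's existing reference-counting convention: the destroy
path must not free the QP while a queued flush work still uses it. A simplified sketch
of the destroy side of that convention is shown below (condensed for illustration; the
real destroy path does more work and the helper name is made up).

/*
 * Simplified sketch of the destroy-side pairing: drop the caller's own
 * reference, then wait until every other holder (e.g. a queued flush work)
 * has dropped theirs and signalled the completion.
 */
static void wait_for_qp_unused(struct hns_roce_qp *hr_qp)
{
	if (atomic_dec_and_test(&hr_qp->refcount))
		complete(&hr_qp->free);

	wait_for_completion(&hr_qp->free);
	/* now it is safe to free the QP resources */
}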
* [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-01-15 9:49 [PATCH v7 for-next 0/2] RDMA/hns: Add the workqueue framework for flush cqe handler Yixian Liu 2020-01-15 9:49 ` [PATCH v7 for-next 1/2] " Yixian Liu @ 2020-01-15 9:49 ` Yixian Liu 2020-01-28 19:56 ` Jason Gunthorpe 2020-01-28 20:05 ` Jason Gunthorpe 1 sibling, 2 replies; 9+ messages in thread From: Yixian Liu @ 2020-01-15 9:49 UTC (permalink / raw) To: dledford, jgg, leon; +Cc: linux-rdma, linuxarm HiP08 RoCE hardware lacks ability(a known hardware problem) to flush outstanding WQEs if QP state gets into errored mode for some reason. To overcome this hardware problem and as a workaround, when QP is detected to be in errored state during various legs like post send, post receive etc[1], flush needs to be performed from the driver. The earlier patch[1] sent to solve the hardware limitation explained in the cover-letter had a bug in the software flushing leg. It acquired mutex while modifying QP state to errored state and while conveying it to the hardware using the mailbox. This caused leg to sleep while holding spin-lock and caused crash. Suggested Solution: we have proposed to defer the flushing of the QP in the Errored state using the workqueue to get around with the limitation of our hardware. This patch specifically adds the calls to the flush handler from where parts of the code like post_send/post_recv etc. when the QP state gets into the errored mode. [1] https://patchwork.kernel.org/patch/10534271/ Signed-off-by: Yixian Liu <liuyixian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> --- drivers/infiniband/hw/hns/hns_roce_device.h | 1 + drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 100 +++++++++++++++++----------- drivers/infiniband/hw/hns/hns_roce_qp.c | 15 +++-- 3 files changed, 72 insertions(+), 44 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 70f0b73..2422a11 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -676,6 +676,7 @@ struct hns_roce_qp { unsigned long qpn; atomic_t refcount; + atomic_t flush_cnt; struct completion free; struct hns_roce_sge sge; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 2afcedd..a7d10a9 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -221,11 +221,6 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr, return 0; } -static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, - const struct ib_qp_attr *attr, - int attr_mask, enum ib_qp_state cur_state, - enum ib_qp_state new_state); - static int hns_roce_v2_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, const struct ib_send_wr **bad_wr) @@ -238,14 +233,12 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct hns_roce_wqe_frmr_seg *fseg; struct device *dev = hr_dev->dev; struct hns_roce_v2_db sq_db; - struct ib_qp_attr attr; unsigned int sge_ind; unsigned int owner_bit; unsigned long flags; unsigned int ind; void *wqe = NULL; bool loopback; - int attr_mask; u32 tmp_len; int ret = 0; u32 hr_op; @@ -591,16 +584,21 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, qp->sq_next_wqe = ind; qp->next_sge = sge_ind; + /* + * Hip08 hardware cannot flush the WQEs in SQ if the QP state + * gets into errored mode. Hence, as a workaround to this + * hardware limitation, driver needs to assist in flushing. 
But + * the flushing operation uses mailbox to convey the QP state to + * the hardware and which can sleep due to the mutex protection + * around the mailbox calls. Hence, use the deferred flush for + * now. + */ if (qp->state == IB_QPS_ERR) { - attr_mask = IB_QP_STATE; - attr.qp_state = IB_QPS_ERR; - - ret = hns_roce_v2_modify_qp(&qp->ibqp, &attr, attr_mask, - qp->state, IB_QPS_ERR); - if (ret) { - spin_unlock_irqrestore(&qp->sq.lock, flags); - *bad_wr = wr; - return ret; + if (atomic_read(&qp->flush_cnt) == 0) { + atomic_set(&qp->flush_cnt, 1); + init_flush_work(hr_dev, qp); + } else { + atomic_inc(&qp->flush_cnt); } } } @@ -619,10 +617,8 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp, struct hns_roce_v2_wqe_data_seg *dseg; struct hns_roce_rinl_sge *sge_list; struct device *dev = hr_dev->dev; - struct ib_qp_attr attr; unsigned long flags; void *wqe = NULL; - int attr_mask; int ret = 0; int nreq; int ind; @@ -692,17 +688,21 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp, *hr_qp->rdb.db_record = hr_qp->rq.head & 0xffff; + /* + * Hip08 hardware cannot flush the WQEs in RQ if the QP state + * gets into errored mode. Hence, as a workaround to this + * hardware limitation, driver needs to assist in flushing. But + * the flushing operation uses mailbox to convey the QP state to + * the hardware and which can sleep due to the mutex protection + * around the mailbox calls. Hence, use the deferred flush for + * now. + */ if (hr_qp->state == IB_QPS_ERR) { - attr_mask = IB_QP_STATE; - attr.qp_state = IB_QPS_ERR; - - ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, &attr, - attr_mask, hr_qp->state, - IB_QPS_ERR); - if (ret) { - spin_unlock_irqrestore(&hr_qp->rq.lock, flags); - *bad_wr = wr; - return ret; + if (atomic_read(&hr_qp->flush_cnt) == 0) { + atomic_set(&hr_qp->flush_cnt, 1); + init_flush_work(hr_dev, hr_qp); + } else { + atomic_inc(&hr_qp->flush_cnt); } } } @@ -2690,13 +2690,11 @@ static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe, static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, struct hns_roce_qp **cur_qp, struct ib_wc *wc) { + struct hns_roce_dev *hr_dev = to_hr_dev(hr_cq->ib_cq.device); struct hns_roce_srq *srq = NULL; - struct hns_roce_dev *hr_dev; struct hns_roce_v2_cqe *cqe; struct hns_roce_qp *hr_qp; struct hns_roce_wq *wq; - struct ib_qp_attr attr; - int attr_mask; int is_send; u16 wqe_ctr; u32 opcode; @@ -2720,7 +2718,6 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, V2_CQE_BYTE_16_LCL_QPN_S); if (!*cur_qp || (qpn & HNS_ROCE_V2_CQE_QPN_MASK) != (*cur_qp)->qpn) { - hr_dev = to_hr_dev(hr_cq->ib_cq.device); hr_qp = __hns_roce_qp_lookup(hr_dev, qpn); if (unlikely(!hr_qp)) { dev_err(hr_dev->dev, "CQ %06lx with entry for unknown QPN %06x\n", @@ -2730,6 +2727,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, *cur_qp = hr_qp; } + hr_qp = *cur_qp; wc->qp = &(*cur_qp)->ibqp; wc->vendor_err = 0; @@ -2814,14 +2812,27 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq, break; } - /* flush cqe if wc status is error, excluding flush error */ - if ((wc->status != IB_WC_SUCCESS) && - (wc->status != IB_WC_WR_FLUSH_ERR)) { - attr_mask = IB_QP_STATE; - attr.qp_state = IB_QPS_ERR; - return hns_roce_v2_modify_qp(&(*cur_qp)->ibqp, - &attr, attr_mask, - (*cur_qp)->state, IB_QPS_ERR); + /* + * Hip08 hardware cannot flush the WQEs in SQ/RQ if the QP state gets + * into errored mode. Hence, as a workaround to this hardware + * limitation, driver needs to assist in flushing. 
But the flushing + * operation uses mailbox to convey the QP state to the hardware and + * which can sleep due to the mutex protection around the mailbox calls. + * Hence, use the deferred flush for now. Once wc error detected, the + * flushing operation is needed. + */ + if (wc->status != IB_WC_SUCCESS && + wc->status != IB_WC_WR_FLUSH_ERR) { + dev_err(hr_dev->dev, "error cqe status is: 0x%x\n", + status & HNS_ROCE_V2_CQE_STATUS_MASK); + + if (atomic_read(&hr_qp->flush_cnt) == 0) { + atomic_set(&hr_qp->flush_cnt, 1); + init_flush_work(hr_dev, hr_qp); + } else { + atomic_inc(&hr_qp->flush_cnt); + } + return 0; } if (wc->status == IB_WC_WR_FLUSH_ERR) @@ -4389,6 +4400,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, struct hns_roce_v2_qp_context *context = ctx; struct hns_roce_v2_qp_context *qpc_mask = ctx + 1; struct device *dev = hr_dev->dev; + unsigned long sq_flag = 0; + unsigned long rq_flag = 0; int ret; /* @@ -4406,6 +4419,9 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, /* When QP state is err, SQ and RQ WQE should be flushed */ if (new_state == IB_QPS_ERR) { + spin_lock_irqsave(&hr_qp->sq.lock, sq_flag); + spin_lock_irqsave(&hr_qp->rq.lock, rq_flag); + hr_qp->state = IB_QPS_ERR; roce_set_field(context->byte_160_sq_ci_pi, V2_QPC_BYTE_160_SQ_PRODUCER_IDX_M, V2_QPC_BYTE_160_SQ_PRODUCER_IDX_S, @@ -4423,6 +4439,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M, V2_QPC_BYTE_84_RQ_PRODUCER_IDX_S, 0); } + spin_unlock_irqrestore(&hr_qp->rq.lock, rq_flag); + spin_unlock_irqrestore(&hr_qp->sq.lock, sq_flag); } /* Configure the optional fields */ @@ -4468,6 +4486,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, hr_qp->next_sge = 0; if (hr_qp->rq.wqe_cnt) *hr_qp->rdb.db_record = 0; + + atomic_set(&hr_qp->flush_cnt, 0); } out: diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index fa38582..ad7ed07 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -56,10 +56,16 @@ static void flush_work_handle(struct work_struct *work) attr_mask = IB_QP_STATE; attr.qp_state = IB_QPS_ERR; - ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); - if (ret) - dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", - ret); + while (atomic_read(&hr_qp->flush_cnt)) { + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); + if (ret) + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", + ret); + + /* If flush_cnt larger than 1, only need one more time flush */ + if (atomic_dec_and_test(&hr_qp->flush_cnt)) + atomic_set(&hr_qp->flush_cnt, 1); + } /* * make sure we signal QP destroy leg that flush QP was completed @@ -742,6 +748,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, spin_lock_init(&hr_qp->rq.lock); hr_qp->state = IB_QPS_RESET; + atomic_set(&hr_qp->flush_cnt, 0); hr_qp->ibqp.qp_type = init_attr->qp_type; -- 2.7.4 ^ permalink raw reply related [flat|nested] 9+ messages in thread
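One detail of this patch worth calling out: hns_roce_v2_modify_qp() now takes both the
SQ and RQ locks while it stamps the producer indices into the QPC, so the snapshot seen
by the hardware is consistent with the WQEs already posted. Reduced to its essentials,
the added locking looks roughly like the sketch below (field updates abbreviated; not a
literal copy of the diff, and the helper name is made up).

static void capture_pi_for_flush(struct hns_roce_qp *hr_qp,
				 struct hns_roce_v2_qp_context *context)
{
	unsigned long sq_flag;
	unsigned long rq_flag;

	/* Hold both queue locks so post_send()/post_recv() cannot advance the
	 * producer indices while they are being captured into the QPC.
	 */
	spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
	spin_lock_irqsave(&hr_qp->rq.lock, rq_flag);

	hr_qp->state = IB_QPS_ERR;
	/* write hr_qp->sq.head / hr_qp->rq.head into the QPC fields here */

	spin_unlock_irqrestore(&hr_qp->rq.lock, rq_flag);
	spin_unlock_irqrestore(&hr_qp->sq.lock, sq_flag);
}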
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-01-15 9:49 ` [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue Yixian Liu @ 2020-01-28 19:56 ` Jason Gunthorpe 2020-02-04 7:41 ` Liuyixian (Eason) 2020-01-28 20:05 ` Jason Gunthorpe 1 sibling, 1 reply; 9+ messages in thread From: Jason Gunthorpe @ 2020-01-28 19:56 UTC (permalink / raw) To: Yixian Liu; +Cc: dledford, leon, linux-rdma, linuxarm On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: > - if (ret) { > - spin_unlock_irqrestore(&qp->sq.lock, flags); > - *bad_wr = wr; > - return ret; > + if (atomic_read(&qp->flush_cnt) == 0) { > + atomic_set(&qp->flush_cnt, 1); > + init_flush_work(hr_dev, qp); > + } else { > + atomic_inc(&qp->flush_cnt); > } Surely this should be written using atomic_add_return ?? Jason ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-01-28 19:56 ` Jason Gunthorpe @ 2020-02-04 7:41 ` Liuyixian (Eason) 0 siblings, 0 replies; 9+ messages in thread From: Liuyixian (Eason) @ 2020-02-04 7:41 UTC (permalink / raw) To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm On 2020/1/29 3:56, Jason Gunthorpe wrote: > On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: >> - if (ret) { >> - spin_unlock_irqrestore(&qp->sq.lock, flags); >> - *bad_wr = wr; >> - return ret; >> + if (atomic_read(&qp->flush_cnt) == 0) { >> + atomic_set(&qp->flush_cnt, 1); >> + init_flush_work(hr_dev, qp); >> + } else { >> + atomic_inc(&qp->flush_cnt); >> } > > Surely this should be written using atomic_add_return ?? > > Jason Hi Jason, Thanks very much for your good suggestion! The code then can be simplified as: if (atomic_add_return(1, &qp->flush_cnt) == 1) init_flush_work(hr_dev, qp); > > ^ permalink raw reply [flat|nested] 9+ messages in thread
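Wrapped into a helper, the simplification agreed above amounts to the following
(illustrative; the helper name is made up): the caller whose increment takes the
counter from 0 to 1 queues the work, and every later caller merely records that another
flush round is wanted.

static void maybe_queue_flush(struct hns_roce_dev *hr_dev,
			      struct hns_roce_qp *qp)
{
	/* atomic_add_return() returns the new value after the increment. */
	if (atomic_add_return(1, &qp->flush_cnt) == 1)
		init_flush_work(hr_dev, qp);
}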
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-01-15 9:49 ` [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue Yixian Liu 2020-01-28 19:56 ` Jason Gunthorpe @ 2020-01-28 20:05 ` Jason Gunthorpe 2020-02-04 8:47 ` Liuyixian (Eason) 1 sibling, 1 reply; 9+ messages in thread From: Jason Gunthorpe @ 2020-01-28 20:05 UTC (permalink / raw) To: Yixian Liu; +Cc: dledford, leon, linux-rdma, linuxarm On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: > diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c > index fa38582..ad7ed07 100644 > +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c > @@ -56,10 +56,16 @@ static void flush_work_handle(struct work_struct *work) > attr_mask = IB_QP_STATE; > attr.qp_state = IB_QPS_ERR; > > - ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); > - if (ret) > - dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", > - ret); > + while (atomic_read(&hr_qp->flush_cnt)) { > + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); > + if (ret) > + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", > + ret); > + > + /* If flush_cnt larger than 1, only need one more time flush */ > + if (atomic_dec_and_test(&hr_qp->flush_cnt)) > + atomic_set(&hr_qp->flush_cnt, 1); > + } And this while loop is just if (atomic_xchg(&hr_qp->flush_cnt, 0)) { [..] } I'm not even sure this needs to be a counter, all you need is set_bit() and test_and_clear() Jason ^ permalink raw reply [flat|nested] 9+ messages in thread
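To make the atomic_xchg() suggestion concrete, one possible reading is a handler that
atomically grabs and clears the pending count, does one flush pass per batch, and then
re-checks for requests that arrived while the mailbox command was in flight. This is a
sketch only, not the final code, and the function name is made up.

static void flush_handler_xchg(struct hns_roce_qp *hr_qp)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
	int ret;

	/* Grab-and-clear: anything queued after this point triggers another
	 * pass on the next loop iteration.
	 */
	while (atomic_xchg(&hr_qp->flush_cnt, 0)) {
		ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, IB_QP_STATE,
					 NULL);
		if (ret)
			pr_err("Modify QP to error state failed (%d)\n", ret);
	}
}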
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-01-28 20:05 ` Jason Gunthorpe @ 2020-02-04 8:47 ` Liuyixian (Eason) 2020-02-05 20:30 ` Jason Gunthorpe 0 siblings, 1 reply; 9+ messages in thread From: Liuyixian (Eason) @ 2020-02-04 8:47 UTC (permalink / raw) To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm On 2020/1/29 4:05, Jason Gunthorpe wrote: > On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: >> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c >> index fa38582..ad7ed07 100644 >> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c >> @@ -56,10 +56,16 @@ static void flush_work_handle(struct work_struct *work) >> attr_mask = IB_QP_STATE; >> attr.qp_state = IB_QPS_ERR; >> >> - ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); >> - if (ret) >> - dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", >> - ret); >> + while (atomic_read(&hr_qp->flush_cnt)) { >> + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); >> + if (ret) >> + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", >> + ret); >> + >> + /* If flush_cnt larger than 1, only need one more time flush */ >> + if (atomic_dec_and_test(&hr_qp->flush_cnt)) >> + atomic_set(&hr_qp->flush_cnt, 1); >> + } > > And this while loop is just There is a bug here, the code should be: if (!atomic_dec_and_test(&hr_qp->flush_cnt)) atomic_set(&hr_qp->flush_cnt, 1); It merges all further flush operation requirements into only one more time flush, that is, do the loop once again if flush_cnt larger than 1. > > if (atomic_xchg(&hr_qp->flush_cnt, 0)) { > [..] > } I think we can't use if instead of while loop. Our current solution can merge all further flush requirements after the inflection point (the place of reading PI of SQ and RQ in hns_roce_modify_qp) into only one more time flush. That is, the flush_cnt can be changed again by post send/recv at any place of the implementation of hns_roce_modify_qp. We need one more time flush to update the PI of SQ and RQ. With your solution, when user posts a new wr during the implementation of [...] in if condition, it will re-queue a new init_flush_work, which will lead to a multiple call problem as we discussed in v2. > > I'm not even sure this needs to be a counter, all you need is set_bit() > and test_and_clear() We need the value of flush_cnt large than 1 to record further flush requirements, that's why flush_cnt can be defined as a flag or bit value. ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-02-04 8:47 ` Liuyixian (Eason) @ 2020-02-05 20:30 ` Jason Gunthorpe 2020-02-06 9:14 ` Liuyixian (Eason) 0 siblings, 1 reply; 9+ messages in thread From: Jason Gunthorpe @ 2020-02-05 20:30 UTC (permalink / raw) To: Liuyixian (Eason); +Cc: dledford, leon, linux-rdma, linuxarm On Tue, Feb 04, 2020 at 04:47:38PM +0800, Liuyixian (Eason) wrote: > > > On 2020/1/29 4:05, Jason Gunthorpe wrote: > > On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: > >> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c > >> index fa38582..ad7ed07 100644 > >> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c > >> @@ -56,10 +56,16 @@ static void flush_work_handle(struct work_struct *work) > >> attr_mask = IB_QP_STATE; > >> attr.qp_state = IB_QPS_ERR; > >> > >> - ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); > >> - if (ret) > >> - dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", > >> - ret); > >> + while (atomic_read(&hr_qp->flush_cnt)) { > >> + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); > >> + if (ret) > >> + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", > >> + ret); > >> + > >> + /* If flush_cnt larger than 1, only need one more time flush */ > >> + if (atomic_dec_and_test(&hr_qp->flush_cnt)) > >> + atomic_set(&hr_qp->flush_cnt, 1); > >> + } > > > > And this while loop is just > > There is a bug here, the code should be: > if (!atomic_dec_and_test(&hr_qp->flush_cnt)) > atomic_set(&hr_qp->flush_cnt, 1); > > It merges all further flush operation requirements into only one more time flush, > that is, do the loop once again if flush_cnt larger than 1. > > > > > if (atomic_xchg(&hr_qp->flush_cnt, 0)) { > > [..] > > } > > I think we can't use if instead of while loop. Well, you can't do two operations and still have an atomic, so you have to fix it somehow. Possibly this needs a spinlock approach instead. > With your solution, when user posts a new wr during the > implementation of [...] in if condition, it will re-queue a new > init_flush_work, which will lead to a multiple call problem as we > discussed in v2. queue_work can be called while a work is still running, it just makes sure it will run again. > > I'm not even sure this needs to be a counter, all you need is set_bit() > > and test_and_clear() > > We need the value of flush_cnt large than 1 to record further flush > requirements, that's why flush_cnt can be defined as a flag or bit > value. This explanation doesn't make sense, the counter isn't being used to count anything, it is just a flag. Jason ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v7 for-next 2/2] RDMA/hns: Delayed flush cqe process with workqueue 2020-02-05 20:30 ` Jason Gunthorpe @ 2020-02-06 9:14 ` Liuyixian (Eason) 0 siblings, 0 replies; 9+ messages in thread From: Liuyixian (Eason) @ 2020-02-06 9:14 UTC (permalink / raw) To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, linuxarm On 2020/2/6 4:30, Jason Gunthorpe wrote: > On Tue, Feb 04, 2020 at 04:47:38PM +0800, Liuyixian (Eason) wrote: >> >> >> On 2020/1/29 4:05, Jason Gunthorpe wrote: >>> On Wed, Jan 15, 2020 at 05:49:13PM +0800, Yixian Liu wrote: >>>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c >>>> index fa38582..ad7ed07 100644 >>>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c >>>> @@ -56,10 +56,16 @@ static void flush_work_handle(struct work_struct *work) >>>> attr_mask = IB_QP_STATE; >>>> attr.qp_state = IB_QPS_ERR; >>>> >>>> - ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); >>>> - if (ret) >>>> - dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", >>>> - ret); >>>> + while (atomic_read(&hr_qp->flush_cnt)) { >>>> + ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL); >>>> + if (ret) >>>> + dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n", >>>> + ret); >>>> + >>>> + /* If flush_cnt larger than 1, only need one more time flush */ >>>> + if (atomic_dec_and_test(&hr_qp->flush_cnt)) >>>> + atomic_set(&hr_qp->flush_cnt, 1); >>>> + } >>> >>> And this while loop is just >> >> There is a bug here, the code should be: >> if (!atomic_dec_and_test(&hr_qp->flush_cnt)) >> atomic_set(&hr_qp->flush_cnt, 1); >> >> It merges all further flush operation requirements into only one more time flush, >> that is, do the loop once again if flush_cnt larger than 1. >> >>> >>> if (atomic_xchg(&hr_qp->flush_cnt, 0)) { >>> [..] >>> } >> >> I think we can't use if instead of while loop. > > Well, you can't do two operations and still have an atomic, so you > have to fix it somehow. Possibly this needs a spinlock approach > instead. Agree. > >> With your solution, when user posts a new wr during the >> implementation of [...] in if condition, it will re-queue a new >> init_flush_work, which will lead to a multiple call problem as we >> discussed in v2. > > queue_work can be called while a work is still running, it just makes > sure it will run again. Agree. > >>> I'm not even sure this needs to be a counter, all you need is set_bit() >>> and test_and_clear() >> >> We need the value of flush_cnt large than 1 to record further flush >> requirements, that's why flush_cnt can be defined as a flag or bit >> value. > > This explanation doesn't make sense, the counter isn't being used to > count anything, it is just a flag. Yes, you are right. I have reconsidered the solution with your suggestion, flag is enough for whole solution. Will fix it in v8 with flag idea. Thanks a lot. > > Jason > > ^ permalink raw reply [flat|nested] 9+ messages in thread
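For completeness, a plausible shape of the flag-based rework that the discussion
converges on (hypothetical here; the actual v8 patch may differ in names and details):
a single pending bit replaces the counter, the posting paths queue the work only on the
0 -> 1 transition, and the handler clears the bit before issuing the modify-QP mailbox
command, so a request that races with the mailbox simply re-queues the work.

#define HNS_ROCE_FLUSH_FLAG	0	/* illustrative bit number */

/* flush_flag is assumed to be an "unsigned long" member of hns_roce_qp. */
static void request_flush(struct hns_roce_dev *hr_dev,
			  struct hns_roce_qp *hr_qp)
{
	/* Only the caller that sets the bit (old value 0) queues the work. */
	if (!test_and_set_bit(HNS_ROCE_FLUSH_FLAG, &hr_qp->flush_flag))
		init_flush_work(hr_dev, hr_qp);
}

static void flush_work_handle(struct work_struct *work)
{
	struct hns_roce_work *flush_work =
		container_of(work, struct hns_roce_work, work);
	struct hns_roce_qp *hr_qp = flush_work->hr_qp;
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };

	/* Clear the bit before the mailbox call: a new request arriving while
	 * the command runs sets the bit again and re-queues the work.
	 */
	if (test_and_clear_bit(HNS_ROCE_FLUSH_FLAG, &hr_qp->flush_flag))
		hns_roce_modify_qp(&hr_qp->ibqp, &attr, IB_QP_STATE, NULL);

	if (atomic_dec_and_test(&hr_qp->refcount))
		complete(&hr_qp->free);
}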