linux-rdma.vger.kernel.org archive mirror
* [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8
@ 2020-05-20 13:53 Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally Weihang Li
                   ` (9 more replies)
  0 siblings, 10 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

These are some cleanups, including removing dead code, modifying
variable/field types, and optimizing some functions.

Lang Cheng (3):
  RDMA/hns: Let software PI/CI grow naturally
  RDMA/hns: Add CQ flag instead of independent enable flag
  RDMA/hns: Optimize post and poll process

Weihang Li (2):
  RDMA/hns: Change all page_shift to unsigned
  RDMA/hns: Change variables representing quantity to unsigned

Xi Wang (3):
  RDMA/hns: Rename QP buffer related function
  RDMA/hns: Refactor the QP context filling process related to WQE
    buffer configure
  RDMA/hns: Optimize the usage of MTR

Yangyang Li (1):
  RDMA/hns: Remove unused code about assert

 drivers/infiniband/hw/hns/hns_roce_alloc.c  |   2 +-
 drivers/infiniband/hw/hns/hns_roce_common.h |   4 -
 drivers/infiniband/hw/hns/hns_roce_cq.c     |  10 +-
 drivers/infiniband/hw/hns/hns_roce_device.h |  38 ++--
 drivers/infiniband/hw/hns/hns_roce_hem.c    |   2 +-
 drivers/infiniband/hw/hns/hns_roce_hem.h    |   2 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 323 +++++++++++++++-------------
 drivers/infiniband/hw/hns/hns_roce_main.c   |   1 -
 drivers/infiniband/hw/hns/hns_roce_mr.c     |  72 ++++---
 drivers/infiniband/hw/hns/hns_roce_qp.c     |   8 +-
 10 files changed, 249 insertions(+), 213 deletions(-)

-- 
2.8.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag Weihang Li
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

The hardware can truncate the PI/CI when posting or polling, so the driver
does not need to do the truncation. Therefore, keep the software PI/CI
consistent with the one in the hardware.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index ebe570a..7d0556ef 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -500,8 +500,7 @@ static inline void update_sq_db(struct hns_roce_dev *hr_dev,
 		roce_set_field(sq_db.byte_4, V2_DB_BYTE_4_CMD_M,
 			       V2_DB_BYTE_4_CMD_S, HNS_ROCE_V2_SQ_DB);
 		roce_set_field(sq_db.parameter, V2_DB_PARAMETER_IDX_M,
-			       V2_DB_PARAMETER_IDX_S,
-			       qp->sq.head & ((qp->sq.wqe_cnt << 1) - 1));
+			       V2_DB_PARAMETER_IDX_S, qp->sq.head);
 		roce_set_field(sq_db.parameter, V2_DB_PARAMETER_SL_M,
 			       V2_DB_PARAMETER_SL_S, qp->sl);
 
@@ -806,7 +805,8 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 		srq_db.byte_4 =
 			cpu_to_le32(HNS_ROCE_V2_SRQ_DB << V2_DB_BYTE_4_CMD_S |
 				    (srq->srqn & V2_DB_BYTE_4_TAG_M));
-		srq_db.parameter = cpu_to_le32(srq->head);
+		srq_db.parameter =
+			cpu_to_le32(srq->head & V2_DB_PARAMETER_IDX_M);
 
 		hns_roce_write64(hr_dev, (__le32 *)&srq_db, srq->db_reg_l);
 	}
@@ -2947,8 +2947,7 @@ static int hns_roce_v2_req_notify_cq(struct ib_cq *ibcq,
 	roce_set_field(doorbell[0], V2_CQ_DB_BYTE_4_CMD_M, V2_DB_BYTE_4_CMD_S,
 		       HNS_ROCE_V2_CQ_DB_NTR);
 	roce_set_field(doorbell[1], V2_CQ_DB_PARAMETER_CONS_IDX_M,
-		       V2_CQ_DB_PARAMETER_CONS_IDX_S,
-		       hr_cq->cons_index & ((hr_cq->cq_depth << 1) - 1));
+		       V2_CQ_DB_PARAMETER_CONS_IDX_S, hr_cq->cons_index);
 	roce_set_field(doorbell[1], V2_CQ_DB_PARAMETER_CMD_SN_M,
 		       V2_CQ_DB_PARAMETER_CMD_SN_S, hr_cq->arm_sn & 0x3);
 	roce_set_bit(doorbell[1], V2_CQ_DB_PARAMETER_NOTIFY_S,
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-25 17:06   ` Jason Gunthorpe
  2020-05-20 13:53 ` [PATCH for-next 3/9] RDMA/hns: Optimize post and poll process Weihang Li
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

It's easier to understand and maintain the CQ's enable flags as a single u32
field than to define a separate field for every flag in struct hns_roce_cq,
and new feature flags can be added more conveniently in the future.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_cq.c     | 10 +++++-----
 drivers/infiniband/hw/hns/hns_roce_device.h |  6 +++---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  6 +++---
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index d2d7074..925fb77 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -186,8 +186,8 @@ static int alloc_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
 						   &hr_cq->db);
 			if (err)
 				return err;
-			hr_cq->db_en = 1;
-			resp->cap_flags |= HNS_ROCE_SUPPORT_CQ_RECORD_DB;
+			hr_cq->flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
+			resp->cap_flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
 		}
 	} else {
 		if (has_db) {
@@ -196,7 +196,7 @@ static int alloc_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
 				return err;
 			hr_cq->set_ci_db = hr_cq->db.db_record;
 			*hr_cq->set_ci_db = 0;
-			hr_cq->db_en = 1;
+			hr_cq->flags |= HNS_ROCE_CQ_FLAG_RECORD_DB;
 		}
 		hr_cq->cq_db_l = hr_dev->reg_base + hr_dev->odb_offset +
 				 DB_REG_OFFSET * hr_dev->priv_uar.index;
@@ -210,10 +210,10 @@ static void free_cq_db(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq,
 {
 	struct hns_roce_ucontext *uctx;
 
-	if (!hr_cq->db_en)
+	if (!(hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB))
 		return;
 
-	hr_cq->db_en = 0;
+	hr_cq->flags &= ~HNS_ROCE_CQ_FLAG_RECORD_DB;
 	if (udata) {
 		uctx = rdma_udata_to_drv_context(udata,
 						 struct hns_roce_ucontext,
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 4fcd608e..06bafa1 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -135,8 +135,8 @@ enum {
 	HNS_ROCE_QP_CAP_SQ_RECORD_DB = BIT(1),
 };
 
-enum {
-	HNS_ROCE_SUPPORT_CQ_RECORD_DB = 1 << 0,
+enum hns_roce_cq_flags {
+	HNS_ROCE_CQ_FLAG_RECORD_DB = BIT(0),
 };
 
 enum hns_roce_qp_state {
@@ -454,7 +454,7 @@ struct hns_roce_cq {
 	struct ib_cq			ib_cq;
 	struct hns_roce_mtr		mtr;
 	struct hns_roce_db		db;
-	u8				db_en;
+	u32				flags;
 	spinlock_t			lock;
 	u32				cq_depth;
 	u32				cons_index;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 7d0556ef..36a9871 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2905,9 +2905,9 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
 	roce_set_field(cq_context->byte_40_cqe_ba, V2_CQC_BYTE_40_CQE_BA_M,
 		       V2_CQC_BYTE_40_CQE_BA_S, (dma_handle >> (32 + 3)));
 
-	if (hr_cq->db_en)
-		roce_set_bit(cq_context->byte_44_db_record,
-			     V2_CQC_BYTE_44_DB_RECORD_EN_S, 1);
+	roce_set_bit(cq_context->byte_44_db_record,
+		     V2_CQC_BYTE_44_DB_RECORD_EN_S,
+		     (hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB) ? 1 : 0);
 
 	roce_set_field(cq_context->byte_44_db_record,
 		       V2_CQC_BYTE_44_DB_RECORD_ADDR_M,
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 3/9] RDMA/hns: Optimize post and poll process
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 4/9] RDMA/hns: Remove unused code about assert Weihang Li
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

Add unlikely() and likely() annotations to optimize the main I/O process code.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 36a9871..1d5fdf6 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -187,15 +187,15 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 	int i;
 
 	if (wr->send_flags & IB_SEND_INLINE && valid_num_sge) {
-		if (le32_to_cpu(rc_sq_wqe->msg_len) >
-		    hr_dev->caps.max_sq_inline) {
+		if (unlikely(le32_to_cpu(rc_sq_wqe->msg_len) >
+			     hr_dev->caps.max_sq_inline)) {
 			ibdev_err(ibdev, "inline len(1-%d)=%d, illegal",
 				  rc_sq_wqe->msg_len,
 				  hr_dev->caps.max_sq_inline);
 			return -EINVAL;
 		}
 
-		if (wr->opcode == IB_WR_RDMA_READ) {
+		if (unlikely(wr->opcode == IB_WR_RDMA_READ)) {
 			ibdev_err(ibdev, "Not support inline data!\n");
 			return -EINVAL;
 		}
@@ -526,7 +526,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 	spin_lock_irqsave(&qp->sq.lock, flags);
 
 	ret = check_send_valid(hr_dev, qp);
-	if (ret) {
+	if (unlikely(ret)) {
 		*bad_wr = wr;
 		nreq = 0;
 		goto out;
@@ -562,7 +562,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 		else if (ibqp->qp_type == IB_QPT_RC)
 			ret = set_rc_wqe(qp, wr, wqe, &sge_idx, owner_bit);
 
-		if (ret) {
+		if (unlikely(ret)) {
 			*bad_wr = wr;
 			goto out;
 		}
@@ -612,15 +612,15 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
 	spin_lock_irqsave(&hr_qp->rq.lock, flags);
 
 	ret = check_recv_valid(hr_dev, hr_qp);
-	if (ret) {
+	if (unlikely(ret)) {
 		*bad_wr = wr;
 		nreq = 0;
 		goto out;
 	}
 
 	for (nreq = 0; wr; ++nreq, wr = wr->next) {
-		if (hns_roce_wq_overflow(&hr_qp->rq, nreq,
-			hr_qp->ibqp.recv_cq)) {
+		if (unlikely(hns_roce_wq_overflow(&hr_qp->rq, nreq,
+						  hr_qp->ibqp.recv_cq))) {
 			ret = -ENOMEM;
 			*bad_wr = wr;
 			goto out;
@@ -765,7 +765,7 @@ static int hns_roce_v2_post_srq_recv(struct ib_srq *ibsrq,
 		}
 
 		wqe_idx = find_empty_entry(&srq->idx_que, srq->wqe_cnt);
-		if (wqe_idx < 0) {
+		if (unlikely(wqe_idx < 0)) {
 			ret = -ENOMEM;
 			*bad_wr = wr;
 			break;
@@ -2984,7 +2984,7 @@ static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe,
 		wqe_buf += size;
 	}
 
-	if (data_len) {
+	if (unlikely(data_len)) {
 		wc->status = IB_WC_LOC_LEN_ERR;
 		return -EAGAIN;
 	}
@@ -3076,7 +3076,8 @@ static void get_cqe_status(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
 			break;
 		}
 
-	if (wc->status == IB_WC_SUCCESS || wc->status == IB_WC_WR_FLUSH_ERR)
+	if (likely(wc->status == IB_WC_SUCCESS ||
+		   wc->status == IB_WC_WR_FLUSH_ERR))
 		return;
 
 	ibdev_err(&hr_dev->ib_dev, "error cqe status 0x%x:\n", cqe_status);
@@ -3171,7 +3172,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
 	}
 
 	get_cqe_status(hr_dev, *cur_qp, cqe, wc);
-	if (wc->status != IB_WC_SUCCESS)
+	if (unlikely(wc->status != IB_WC_SUCCESS))
 		return 0;
 
 	if (is_send) {
@@ -3270,7 +3271,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
 		    opcode == HNS_ROCE_V2_OPCODE_SEND_WITH_INV) &&
 		    (roce_get_bit(cqe->byte_4, V2_CQE_BYTE_4_RQ_INLINE_S))) {
 			ret = hns_roce_handle_recv_inl_wqe(cqe, cur_qp, wc);
-			if (ret)
+			if (unlikely(ret))
 				return -EAGAIN;
 		}
 
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 4/9] RDMA/hns: Remove unused code about assert
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (2 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 3/9] RDMA/hns: Optimize post and poll process Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 5/9] RDMA/hns: Rename QP buffer related function Weihang Li
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Yangyang Li <liyangyang20@huawei.com>

The code related to assert() is no longer used, so delete it.

Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_common.h | 4 ----
 drivers/infiniband/hw/hns/hns_roce_main.c   | 1 -
 2 files changed, 5 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h
index 8e95a1a..f5669ff 100644
--- a/drivers/infiniband/hw/hns/hns_roce_common.h
+++ b/drivers/infiniband/hw/hns/hns_roce_common.h
@@ -33,10 +33,6 @@
 #ifndef _HNS_ROCE_COMMON_H
 #define _HNS_ROCE_COMMON_H
 
-#ifndef assert
-#define assert(cond)
-#endif
-
 #define roce_write(dev, reg, val)	writel((val), (dev)->reg_base + (reg))
 #define roce_read(dev, reg)		readl((dev)->reg_base + (reg))
 #define roce_raw_write(value, addr) \
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index fd3581e..50763cf 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -233,7 +233,6 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u8 port_num,
 	enum ib_mtu mtu;
 	u8 port;
 
-	assert(port_num > 0);
 	port = port_num - 1;
 
 	/* props being zeroed by the caller, avoid zeroing it here */
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 5/9] RDMA/hns: Rename QP buffer related function
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (3 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 4/9] RDMA/hns: Remove unused code about assert Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 6/9] RDMA/hns: Change all page_shift to unsigned Weihang Li
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Xi Wang <wangxi11@huawei.com>

Rename the function related to the QP buffer to make the code more readable.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_qp.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index dca979d..ae7495f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -501,9 +501,9 @@ static int set_user_sq_size(struct hns_roce_dev *hr_dev,
 	return 0;
 }
 
-static int split_wqe_buf_region(struct hns_roce_dev *hr_dev,
-				struct hns_roce_qp *hr_qp,
-				struct hns_roce_buf_attr *buf_attr)
+static int set_wqe_buf_attr(struct hns_roce_dev *hr_dev,
+			    struct hns_roce_qp *hr_qp,
+			    struct hns_roce_buf_attr *buf_attr)
 {
 	int buf_size;
 	int idx = 0;
@@ -675,7 +675,7 @@ static int alloc_qp_buf(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
 		hr_qp->rq_inl_buf.wqe_list = NULL;
 	}
 
-	ret = split_wqe_buf_region(hr_dev, hr_qp, &buf_attr);
+	ret = set_wqe_buf_attr(hr_dev, hr_qp, &buf_attr);
 	if (ret) {
 		ibdev_err(ibdev, "failed to split WQE buf, ret = %d.\n", ret);
 		goto err_inline;
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 6/9] RDMA/hns: Change all page_shift to unsigned
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (4 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 5/9] RDMA/hns: Rename QP buffer related function Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 7/9] RDMA/hns: Change variables representing quantity " Weihang Li
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

page_shift is used to calculate the page size; it is always non-negative and
should be declared as unsigned.

Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_alloc.c  |  2 +-
 drivers/infiniband/hw/hns/hns_roce_device.h | 25 +++++++++++++------------
 drivers/infiniband/hw/hns/hns_roce_hem.c    |  2 +-
 drivers/infiniband/hw/hns/hns_roce_hem.h    |  2 +-
 drivers/infiniband/hw/hns/hns_roce_mr.c     | 20 +++++++++++---------
 5 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
index 365e7db..9bb3f30 100644
--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
+++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
@@ -254,7 +254,7 @@ int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
 
 int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
 			   int buf_cnt, int start, struct ib_umem *umem,
-			   int page_shift)
+			   unsigned int page_shift)
 {
 	struct ib_block_iter biter;
 	int total = 0;
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 06bafa1..e7622bf 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -338,7 +338,7 @@ struct hns_roce_buf_attr {
 		int	hopnum; /* multi-hop addressing hop num */
 	} region[HNS_ROCE_MAX_BT_REGION];
 	int region_count; /* valid region count */
-	int page_shift;  /* buffer page shift */
+	unsigned int page_shift;  /* buffer page shift */
 	bool fixed_page; /* decide page shift is fixed-size or maximum size */
 	int user_access; /* umem access flag */
 	bool mtt_only; /* only alloc buffer-required MTT memory */
@@ -347,14 +347,14 @@ struct hns_roce_buf_attr {
 /* memory translate region */
 struct hns_roce_mtr {
 	struct hns_roce_hem_list hem_list; /* multi-hop addressing resource */
-	struct ib_umem		 *umem; /* user space buffer */
-	struct hns_roce_buf	 *kmem; /* kernel space buffer */
+	struct ib_umem		*umem; /* user space buffer */
+	struct hns_roce_buf	*kmem; /* kernel space buffer */
 	struct {
-		dma_addr_t	 root_ba; /* root BA table's address */
-		bool		 is_direct; /* addressing without BA table */
-		int		 ba_pg_shift; /* BA table page shift */
-		int		 buf_pg_shift; /* buffer page shift */
-		int		 buf_pg_count;  /* buffer page count */
+		dma_addr_t	root_ba; /* root BA table's address */
+		bool		is_direct; /* addressing without BA table */
+		unsigned int	ba_pg_shift; /* BA table page shift */
+		unsigned int	buf_pg_shift; /* buffer page shift */
+		int		buf_pg_count;  /* buffer page count */
 	} hem_cfg; /* config for hardware addressing */
 };
 
@@ -419,7 +419,7 @@ struct hns_roce_buf {
 	struct hns_roce_buf_list	*page_list;
 	u32				npages;
 	u32				size;
-	int				page_shift;
+	unsigned int			page_shift;
 };
 
 struct hns_roce_db_pgdir {
@@ -1132,8 +1132,9 @@ void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev);
 int hns_roce_mtr_find(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 		      int offset, u64 *mtt_buf, int mtt_max, u64 *base_addr);
 int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
-			struct hns_roce_buf_attr *buf_attr, int page_shift,
-			struct ib_udata *udata, unsigned long user_addr);
+			struct hns_roce_buf_attr *buf_attr,
+			unsigned int page_shift, struct ib_udata *udata,
+			unsigned long user_addr);
 void hns_roce_mtr_destroy(struct hns_roce_dev *hr_dev,
 			  struct hns_roce_mtr *mtr);
 int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
@@ -1203,7 +1204,7 @@ int hns_roce_get_kmem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
 			   int buf_cnt, int start, struct hns_roce_buf *buf);
 int hns_roce_get_umem_bufs(struct hns_roce_dev *hr_dev, dma_addr_t *bufs,
 			   int buf_cnt, int start, struct ib_umem *umem,
-			   int page_shift);
+			   unsigned int page_shift);
 
 int hns_roce_create_srq(struct ib_srq *srq,
 			struct ib_srq_init_attr *srq_init_attr,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 37d101e..c8db6f8 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -1400,7 +1400,7 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
 int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
 			      struct hns_roce_hem_list *hem_list,
 			      const struct hns_roce_buf_region *regions,
-			      int region_cnt, int bt_pg_shift)
+			      int region_cnt, unsigned int bt_pg_shift)
 {
 	const struct hns_roce_buf_region *r;
 	int ofs, end;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h
index 1fa0bdc..b34c940 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.h
@@ -133,7 +133,7 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
 int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
 			      struct hns_roce_hem_list *hem_list,
 			      const struct hns_roce_buf_region *regions,
-			      int region_cnt, int bt_pg_shift);
+			      int region_cnt, unsigned int bt_pg_shift);
 void hns_roce_hem_list_release(struct hns_roce_dev *hr_dev,
 			       struct hns_roce_hem_list *hem_list);
 void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev,
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index ecd7675..e0f5f55 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -704,7 +704,8 @@ static inline size_t mtr_bufs_size(struct hns_roce_buf_attr *attr)
 	return size;
 }
 
-static inline int mtr_umem_page_count(struct ib_umem *umem, int page_shift)
+static inline int mtr_umem_page_count(struct ib_umem *umem,
+				      unsigned int page_shift)
 {
 	int count = ib_umem_page_count(umem);
 
@@ -717,7 +718,7 @@ static inline int mtr_umem_page_count(struct ib_umem *umem, int page_shift)
 }
 
 static inline size_t mtr_kmem_direct_size(bool is_direct, size_t alloc_size,
-					  int page_shift)
+					  unsigned int page_shift)
 {
 	if (is_direct)
 		return ALIGN(alloc_size, 1 << page_shift);
@@ -730,7 +731,7 @@ static inline size_t mtr_kmem_direct_size(bool is_direct, size_t alloc_size,
  * Returns 0 on success, or the error page num.
  */
 static inline int mtr_check_direct_pages(dma_addr_t *pages, int page_count,
-					 int page_shift)
+					 unsigned int page_shift)
 {
 	size_t page_size = 1 << page_shift;
 	int i;
@@ -763,8 +764,8 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 			  struct ib_udata *udata, unsigned long user_addr)
 {
 	struct ib_device *ibdev = &hr_dev->ib_dev;
-	int max_pg_shift = buf_attr->page_shift;
-	int best_pg_shift = 0;
+	unsigned int max_pg_shift = buf_attr->page_shift;
+	unsigned int best_pg_shift = 0;
 	int all_pg_count = 0;
 	size_t direct_size;
 	size_t total_size;
@@ -834,7 +835,7 @@ static int mtr_alloc_bufs(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 }
 
 static int mtr_get_pages(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
-			 dma_addr_t *pages, int count, int page_shift)
+			 dma_addr_t *pages, int count, unsigned int page_shift)
 {
 	struct ib_device *ibdev = &hr_dev->ib_dev;
 	int npage;
@@ -944,7 +945,7 @@ int hns_roce_mtr_find(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 /* convert buffer size to page index and page count */
 static int mtr_init_region(struct hns_roce_buf_attr *attr, int page_cnt,
 			   struct hns_roce_buf_region *regions, int region_cnt,
-			   int page_shift)
+			   unsigned int page_shift)
 {
 	unsigned int page_size = 1 << page_shift;
 	int max_region = attr->region_count;
@@ -975,8 +976,9 @@ static int mtr_init_region(struct hns_roce_buf_attr *attr, int page_cnt,
  * @buf_alloced: mtr has private buffer, true means need to alloc
  */
 int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
-			struct hns_roce_buf_attr *buf_attr, int page_shift,
-			struct ib_udata *udata, unsigned long user_addr)
+			struct hns_roce_buf_attr *buf_attr,
+			unsigned int page_shift, struct ib_udata *udata,
+			unsigned long user_addr)
 {
 	struct hns_roce_buf_region regions[HNS_ROCE_MAX_BT_REGION] = {};
 	struct ib_device *ibdev = &hr_dev->ib_dev;
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 7/9] RDMA/hns: Change variables representing quantity to unsigned
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (5 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 6/9] RDMA/hns: Change all page_shift to unsigned Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 8/9] RDMA/hns: Refactor the QP context filling process related to WQE buffer configure Weihang Li
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

The number of SGEs/EQEs is always non-negative, so these variables should be
defined as unsigned.

Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  4 ++--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 17 +++++++++--------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index e7622bf..e6eb85c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -404,7 +404,7 @@ struct hns_roce_wq {
 };
 
 struct hns_roce_sge {
-	int		sge_cnt;	/* SGE num */
+	unsigned int	sge_cnt;	/* SGE num */
 	int		offset;
 	int		sge_shift;	/* SGE size */
 };
@@ -730,7 +730,7 @@ struct hns_roce_eq {
 	int				arm_st;
 	int				hop_num;
 	struct hns_roce_mtr		mtr;
-	int				eq_max_cnt;
+	u16				eq_max_cnt;
 	int				eq_period;
 	int				shift;
 	int				event_type;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 1d5fdf6..3d5ba9a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -130,7 +130,7 @@ static void set_frmr_seg(struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
 
 static void set_atomic_seg(const struct ib_send_wr *wr, void *wqe,
 			   struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
-			   int valid_num_sge)
+			   unsigned int valid_num_sge)
 {
 	struct hns_roce_wqe_atomic_seg *aseg;
 
@@ -151,12 +151,12 @@ static void set_atomic_seg(const struct ib_send_wr *wr, void *wqe,
 }
 
 static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
-			   unsigned int *sge_ind, int valid_num_sge)
+			   unsigned int *sge_ind, unsigned int valid_num_sge)
 {
 	struct hns_roce_v2_wqe_data_seg *dseg;
+	unsigned int cnt = valid_num_sge;
 	struct ib_sge *sge = wr->sg_list;
 	unsigned int idx = *sge_ind;
-	int cnt = valid_num_sge;
 
 	if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC) {
 		cnt -= HNS_ROCE_SGE_IN_WQE;
@@ -177,7 +177,7 @@ static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
 static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 			     struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
 			     void *wqe, unsigned int *sge_ind,
-			     int valid_num_sge)
+			     unsigned int valid_num_sge)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_v2_wqe_data_seg *dseg = wqe;
@@ -269,10 +269,11 @@ static int check_send_valid(struct hns_roce_dev *hr_dev,
 	return 0;
 }
 
-static inline int calc_wr_sge_num(const struct ib_send_wr *wr, u32 *sge_len)
+static unsigned int calc_wr_sge_num(const struct ib_send_wr *wr,
+				    unsigned int *sge_len)
 {
-	int valid_num = 0;
-	u32 len = 0;
+	unsigned int valid_num = 0;
+	unsigned int len = 0;
 	int i;
 
 	for (i = 0; i < wr->num_sge; i++) {
@@ -403,7 +404,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
 {
 	struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe;
 	unsigned int curr_idx = *sge_idx;
-	int valid_num_sge;
+	unsigned int valid_num_sge;
 	u32 msg_len = 0;
 	int ret = 0;
 
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 8/9] RDMA/hns: Refactor the QP context filling process related to WQE buffer configure
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (6 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 7/9] RDMA/hns: Change variables representing quantity " Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-20 13:53 ` [PATCH for-next 9/9] RDMA/hns: Optimize the usage of MTR Weihang Li
  2020-05-25 17:11 ` [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Jason Gunthorpe
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Xi Wang <wangxi11@huawei.com>

Split the code related to WQE buffer configuration out of the QPC filling
process into two functions, config_qp_sq_buf() and config_qp_rq_buf(), to
make the code more readable.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 264 ++++++++++++++++-------------
 1 file changed, 149 insertions(+), 115 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 3d5ba9a..db9fd6c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -3782,27 +3782,16 @@ static bool check_wqe_rq_mtt_count(struct hns_roce_dev *hr_dev,
 	return true;
 }
 
-static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
-				 const struct ib_qp_attr *attr, int attr_mask,
-				 struct hns_roce_v2_qp_context *context,
-				 struct hns_roce_v2_qp_context *qpc_mask)
+static int config_qp_rq_buf(struct hns_roce_dev *hr_dev,
+			    struct hns_roce_qp *hr_qp,
+			    struct hns_roce_v2_qp_context *context,
+			    struct hns_roce_v2_qp_context *qpc_mask)
 {
-	const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
-	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
-	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
-	struct ib_device *ibdev = &hr_dev->ib_dev;
+	struct ib_qp *ibqp = &hr_qp->ibqp;
 	u64 mtts[MTT_MIN_COUNT] = { 0 };
-	dma_addr_t dma_handle_3;
-	dma_addr_t dma_handle_2;
 	u64 wqe_sge_ba;
 	u32 page_size;
-	u8 port_num;
-	u64 *mtts_3;
-	u64 *mtts_2;
 	int count;
-	u8 *dmac;
-	u8 *smac;
-	int port;
 
 	/* Search qp buf's mtts */
 	page_size = 1 << hr_qp->mtr.hem_cfg.buf_pg_shift;
@@ -3813,29 +3802,6 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 		if (!check_wqe_rq_mtt_count(hr_dev, hr_qp, count, page_size))
 			return -EINVAL;
 
-	/* Search IRRL's mtts */
-	mtts_2 = hns_roce_table_find(hr_dev, &hr_dev->qp_table.irrl_table,
-				     hr_qp->qpn, &dma_handle_2);
-	if (!mtts_2) {
-		ibdev_err(ibdev, "failed to find QP irrl_table\n");
-		return -EINVAL;
-	}
-
-	/* Search TRRL's mtts */
-	mtts_3 = hns_roce_table_find(hr_dev, &hr_dev->qp_table.trrl_table,
-				     hr_qp->qpn, &dma_handle_3);
-	if (!mtts_3) {
-		ibdev_err(ibdev, "failed to find QP trrl_table\n");
-		return -EINVAL;
-	}
-
-	if (attr_mask & IB_QP_ALT_PATH) {
-		ibdev_err(ibdev, "INIT2RTR attr_mask (0x%x) error\n",
-			  attr_mask);
-		return -EINVAL;
-	}
-
-	dmac = (u8 *)attr->ah_attr.roce.dmac;
 	context->wqe_sge_ba = cpu_to_le32(wqe_sge_ba >> 3);
 	qpc_mask->wqe_sge_ba = 0;
 
@@ -3914,23 +3880,154 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 		       V2_QPC_BYTE_104_RQ_NXT_BLK_ADDR_M,
 		       V2_QPC_BYTE_104_RQ_NXT_BLK_ADDR_S, 0);
 
+	roce_set_field(context->byte_84_rq_ci_pi,
+		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M,
+		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_S, hr_qp->rq.head);
+	roce_set_field(qpc_mask->byte_84_rq_ci_pi,
+		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M,
+		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_S, 0);
+
+	roce_set_field(qpc_mask->byte_84_rq_ci_pi,
+		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_M,
+		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_S, 0);
+
+	return 0;
+}
+
+static int config_qp_sq_buf(struct hns_roce_dev *hr_dev,
+			    struct hns_roce_qp *hr_qp,
+			    struct hns_roce_v2_qp_context *context,
+			    struct hns_roce_v2_qp_context *qpc_mask)
+{
+	struct ib_device *ibdev = &hr_dev->ib_dev;
+	u64 sge_cur_blk = 0;
+	u64 sq_cur_blk = 0;
+	u32 page_size;
+	int count;
+
+	/* search qp buf's mtts */
+	count = hns_roce_mtr_find(hr_dev, &hr_qp->mtr, 0, &sq_cur_blk, 1, NULL);
+	if (count < 1) {
+		ibdev_err(ibdev, "failed to find QP(0x%lx) SQ buf.\n",
+			  hr_qp->qpn);
+		return -EINVAL;
+	}
+	if (hr_qp->sge.sge_cnt > 0) {
+		page_size = 1 << hr_qp->mtr.hem_cfg.buf_pg_shift;
+		count = hns_roce_mtr_find(hr_dev, &hr_qp->mtr,
+					  hr_qp->sge.offset / page_size,
+					  &sge_cur_blk, 1, NULL);
+		if (count < 1) {
+			ibdev_err(ibdev, "failed to find QP(0x%lx) SGE buf.\n",
+				  hr_qp->qpn);
+			return -EINVAL;
+		}
+	}
+
+	/*
+	 * In v2 engine, software pass context and context mask to hardware
+	 * when modifying qp. If software need modify some fields in context,
+	 * we should set all bits of the relevant fields in context mask to
+	 * 0 at the same time, else set them to 0x1.
+	 */
+	context->sq_cur_blk_addr = cpu_to_le32(to_hr_hw_page_addr(sq_cur_blk));
+	roce_set_field(context->byte_168_irrl_idx,
+		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_M,
+		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_S,
+		       upper_32_bits(to_hr_hw_page_addr(sq_cur_blk)));
+	qpc_mask->sq_cur_blk_addr = 0;
+	roce_set_field(qpc_mask->byte_168_irrl_idx,
+		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_M,
+		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_S, 0);
+
+	context->sq_cur_sge_blk_addr =
+		cpu_to_le32(to_hr_hw_page_addr(sge_cur_blk));
+	roce_set_field(context->byte_184_irrl_idx,
+		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_M,
+		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_S,
+		       upper_32_bits(to_hr_hw_page_addr(sge_cur_blk)));
+	qpc_mask->sq_cur_sge_blk_addr = 0;
+	roce_set_field(qpc_mask->byte_184_irrl_idx,
+		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_M,
+		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_S, 0);
+
+	context->rx_sq_cur_blk_addr =
+		cpu_to_le32(to_hr_hw_page_addr(sq_cur_blk));
+	roce_set_field(context->byte_232_irrl_sge,
+		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_M,
+		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_S,
+		       upper_32_bits(to_hr_hw_page_addr(sq_cur_blk)));
+	qpc_mask->rx_sq_cur_blk_addr = 0;
+	roce_set_field(qpc_mask->byte_232_irrl_sge,
+		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_M,
+		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_S, 0);
+
+	return 0;
+}
+
+static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+				 const struct ib_qp_attr *attr, int attr_mask,
+				 struct hns_roce_v2_qp_context *context,
+				 struct hns_roce_v2_qp_context *qpc_mask)
+{
+	const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
+	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+	struct ib_device *ibdev = &hr_dev->ib_dev;
+	dma_addr_t trrl_ba;
+	dma_addr_t irrl_ba;
+	u8 port_num;
+	u64 *mtts;
+	u8 *dmac;
+	u8 *smac;
+	int port;
+	int ret;
+
+	ret = config_qp_rq_buf(hr_dev, hr_qp, context, qpc_mask);
+	if (ret) {
+		ibdev_err(ibdev, "failed to config rq buf, ret = %d.\n", ret);
+		return ret;
+	}
+
+	/* Search IRRL's mtts */
+	mtts = hns_roce_table_find(hr_dev, &hr_dev->qp_table.irrl_table,
+				   hr_qp->qpn, &irrl_ba);
+	if (!mtts) {
+		ibdev_err(ibdev, "failed to find qp irrl_table.\n");
+		return -EINVAL;
+	}
+
+	/* Search TRRL's mtts */
+	mtts = hns_roce_table_find(hr_dev, &hr_dev->qp_table.trrl_table,
+				   hr_qp->qpn, &trrl_ba);
+	if (!mtts) {
+		ibdev_err(ibdev, "failed to find qp trrl_table.\n");
+		return -EINVAL;
+	}
+
+	if (attr_mask & IB_QP_ALT_PATH) {
+		ibdev_err(ibdev, "INIT2RTR attr_mask (0x%x) error.\n",
+			  attr_mask);
+		return -EINVAL;
+	}
+
 	roce_set_field(context->byte_132_trrl, V2_QPC_BYTE_132_TRRL_BA_M,
-		       V2_QPC_BYTE_132_TRRL_BA_S, dma_handle_3 >> 4);
+		       V2_QPC_BYTE_132_TRRL_BA_S, trrl_ba >> 4);
 	roce_set_field(qpc_mask->byte_132_trrl, V2_QPC_BYTE_132_TRRL_BA_M,
 		       V2_QPC_BYTE_132_TRRL_BA_S, 0);
-	context->trrl_ba = cpu_to_le32(dma_handle_3 >> (16 + 4));
+	context->trrl_ba = cpu_to_le32(trrl_ba >> (16 + 4));
 	qpc_mask->trrl_ba = 0;
 	roce_set_field(context->byte_140_raq, V2_QPC_BYTE_140_TRRL_BA_M,
 		       V2_QPC_BYTE_140_TRRL_BA_S,
-		       (u32)(dma_handle_3 >> (32 + 16 + 4)));
+		       (u32)(trrl_ba >> (32 + 16 + 4)));
 	roce_set_field(qpc_mask->byte_140_raq, V2_QPC_BYTE_140_TRRL_BA_M,
 		       V2_QPC_BYTE_140_TRRL_BA_S, 0);
 
-	context->irrl_ba = cpu_to_le32(dma_handle_2 >> 6);
+	context->irrl_ba = cpu_to_le32(irrl_ba >> 6);
 	qpc_mask->irrl_ba = 0;
 	roce_set_field(context->byte_208_irrl, V2_QPC_BYTE_208_IRRL_BA_M,
 		       V2_QPC_BYTE_208_IRRL_BA_S,
-		       dma_handle_2 >> (32 + 6));
+		       irrl_ba >> (32 + 6));
 	roce_set_field(qpc_mask->byte_208_irrl, V2_QPC_BYTE_208_IRRL_BA_M,
 		       V2_QPC_BYTE_208_IRRL_BA_S, 0);
 
@@ -3967,6 +4064,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 					 grh->sgid_index));
 	roce_set_field(qpc_mask->byte_20_smac_sgid_idx,
 		       V2_QPC_BYTE_20_SGID_IDX_M, V2_QPC_BYTE_20_SGID_IDX_S, 0);
+
+	dmac = (u8 *)attr->ah_attr.roce.dmac;
 	memcpy(&(context->dmac), dmac, sizeof(u32));
 	roce_set_field(context->byte_52_udpspn_dmac, V2_QPC_BYTE_52_DMAC_M,
 		       V2_QPC_BYTE_52_DMAC_S, *((u16 *)(&dmac[4])));
@@ -3991,16 +4090,6 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
 		       V2_QPC_BYTE_24_MTU_S, 0);
 
-	roce_set_field(context->byte_84_rq_ci_pi,
-		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M,
-		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_S, hr_qp->rq.head);
-	roce_set_field(qpc_mask->byte_84_rq_ci_pi,
-		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_M,
-		       V2_QPC_BYTE_84_RQ_PRODUCER_IDX_S, 0);
-
-	roce_set_field(qpc_mask->byte_84_rq_ci_pi,
-		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_M,
-		       V2_QPC_BYTE_84_RQ_CONSUMER_IDX_S, 0);
 	roce_set_bit(qpc_mask->byte_108_rx_reqepsn,
 		     V2_QPC_BYTE_108_RX_REQ_PSN_ERR_S, 0);
 	roce_set_field(qpc_mask->byte_96_rx_reqmsn, V2_QPC_BYTE_96_RX_REQ_MSN_M,
@@ -4036,30 +4125,7 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
 	struct ib_device *ibdev = &hr_dev->ib_dev;
-	u64 sge_cur_blk = 0;
-	u64 sq_cur_blk = 0;
-	u32 page_size;
-	int count;
-
-	/* Search qp buf's mtts */
-	count = hns_roce_mtr_find(hr_dev, &hr_qp->mtr, 0, &sq_cur_blk, 1, NULL);
-	if (count < 1) {
-		ibdev_err(ibdev, "failed to find QP(0x%lx) SQ buf\n",
-			  hr_qp->qpn);
-		return -EINVAL;
-	}
-
-	if (hr_qp->sge.sge_cnt > 0) {
-		page_size = 1 << hr_qp->mtr.hem_cfg.buf_pg_shift;
-		count = hns_roce_mtr_find(hr_dev, &hr_qp->mtr,
-					  hr_qp->sge.offset / page_size,
-					  &sge_cur_blk, 1, NULL);
-		if (count < 1) {
-			ibdev_err(ibdev, "failed to find QP(0x%lx) SGE buf\n",
-				  hr_qp->qpn);
-			return -EINVAL;
-		}
-	}
+	int ret;
 
 	/* Not support alternate path and path migration */
 	if (attr_mask & (IB_QP_ALT_PATH | IB_QP_PATH_MIG_STATE)) {
@@ -4067,43 +4133,11 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
 		return -EINVAL;
 	}
 
-	/*
-	 * In v2 engine, software pass context and context mask to hardware
-	 * when modifying qp. If software need modify some fields in context,
-	 * we should set all bits of the relevant fields in context mask to
-	 * 0 at the same time, else set them to 0x1.
-	 */
-	context->sq_cur_blk_addr = cpu_to_le32(to_hr_hw_page_addr(sq_cur_blk));
-	roce_set_field(context->byte_168_irrl_idx,
-		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_M,
-		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_S,
-		       upper_32_bits(to_hr_hw_page_addr(sq_cur_blk)));
-	qpc_mask->sq_cur_blk_addr = 0;
-	roce_set_field(qpc_mask->byte_168_irrl_idx,
-		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_M,
-		       V2_QPC_BYTE_168_SQ_CUR_BLK_ADDR_S, 0);
-
-	context->sq_cur_sge_blk_addr =
-		cpu_to_le32(to_hr_hw_page_addr(sge_cur_blk));
-	roce_set_field(context->byte_184_irrl_idx,
-		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_M,
-		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_S,
-		       upper_32_bits(to_hr_hw_page_addr(sge_cur_blk)));
-	qpc_mask->sq_cur_sge_blk_addr = 0;
-	roce_set_field(qpc_mask->byte_184_irrl_idx,
-		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_M,
-		       V2_QPC_BYTE_184_SQ_CUR_SGE_BLK_ADDR_S, 0);
-
-	context->rx_sq_cur_blk_addr =
-		cpu_to_le32(to_hr_hw_page_addr(sq_cur_blk));
-	roce_set_field(context->byte_232_irrl_sge,
-		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_M,
-		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_S,
-		       upper_32_bits(to_hr_hw_page_addr(sq_cur_blk)));
-	qpc_mask->rx_sq_cur_blk_addr = 0;
-	roce_set_field(qpc_mask->byte_232_irrl_sge,
-		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_M,
-		       V2_QPC_BYTE_232_RX_SQ_CUR_BLK_ADDR_S, 0);
+	ret = config_qp_sq_buf(hr_dev, hr_qp, context, qpc_mask);
+	if (ret) {
+		ibdev_err(ibdev, "failed to config sq buf, ret %d\n", ret);
+		return ret;
+	}
 
 	/*
 	 * Set some fields in context to zero, Because the default values
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH for-next 9/9] RDMA/hns: Optimize the usage of MTR
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (7 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 8/9] RDMA/hns: Refactor the QP context filling process related to WQE buffer configure Weihang Li
@ 2020-05-20 13:53 ` Weihang Li
  2020-05-25 17:11 ` [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Jason Gunthorpe
  9 siblings, 0 replies; 17+ messages in thread
From: Weihang Li @ 2020-05-20 13:53 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Xi Wang <wangxi11@huawei.com>

Currently, the MTR region is configured before hns_roce_mtr_map() is invoked,
but in some scenarios the region is configured at MTR creation, and the caller
needs to store this configuration and call hns_roce_mtr_map() later. So
optimize the usage by wrapping the MTR region configuration into the MTR
itself.

Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  3 +-
 drivers/infiniband/hw/hns/hns_roce_mr.c     | 54 +++++++++++++++--------------
 2 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index e6eb85c..e33ece5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -355,6 +355,8 @@ struct hns_roce_mtr {
 		unsigned int	ba_pg_shift; /* BA table page shift */
 		unsigned int	buf_pg_shift; /* buffer page shift */
 		int		buf_pg_count;  /* buffer page count */
+		struct hns_roce_buf_region region[HNS_ROCE_MAX_BT_REGION];
+		unsigned int	region_count;
 	} hem_cfg; /* config for hardware addressing */
 };
 
@@ -1138,7 +1140,6 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 void hns_roce_mtr_destroy(struct hns_roce_dev *hr_dev,
 			  struct hns_roce_mtr *mtr);
 int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
-		     struct hns_roce_buf_region *regions, int region_cnt,
 		     dma_addr_t *pages, int page_cnt);
 
 int hns_roce_init_pd_table(struct hns_roce_dev *hr_dev);
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index e0f5f55..a62da7a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -481,7 +481,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
 	struct ib_device *ibdev = &hr_dev->ib_dev;
 	struct hns_roce_mr *mr = to_hr_mr(ibmr);
-	struct hns_roce_buf_region region = {};
+	struct hns_roce_mtr *mtr = &mr->pbl_mtr;
 	int ret = 0;
 
 	mr->npages = 0;
@@ -497,11 +497,11 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 		goto err_page_list;
 	}
 
-	region.offset = 0;
-	region.count = mr->npages;
-	region.hopnum = mr->pbl_hop_num;
-	ret = hns_roce_mtr_map(hr_dev, &mr->pbl_mtr, &region, 1, mr->page_list,
-			       mr->npages);
+	mtr->hem_cfg.region[0].offset = 0;
+	mtr->hem_cfg.region[0].count = mr->npages;
+	mtr->hem_cfg.region[0].hopnum = mr->pbl_hop_num;
+	mtr->hem_cfg.region_count = 1;
+	ret = hns_roce_mtr_map(hr_dev, mtr, mr->page_list, mr->npages);
 	if (ret) {
 		ibdev_err(ibdev, "failed to map sg mtr, ret = %d.\n", ret);
 		ret = 0;
@@ -861,7 +861,6 @@ static int mtr_get_pages(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 }
 
 int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
-		     struct hns_roce_buf_region *regions, int region_cnt,
 		     dma_addr_t *pages, int page_cnt)
 {
 	struct ib_device *ibdev = &hr_dev->ib_dev;
@@ -869,8 +868,8 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 	int err;
 	int i;
 
-	for (i = 0; i < region_cnt; i++) {
-		r = &regions[i];
+	for (i = 0; i < mtr->hem_cfg.region_count; i++) {
+		r = &mtr->hem_cfg.region[i];
 		if (r->offset + r->count > page_cnt) {
 			err = -EINVAL;
 			ibdev_err(ibdev,
@@ -943,15 +942,16 @@ int hns_roce_mtr_find(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 }
 
 /* convert buffer size to page index and page count */
-static int mtr_init_region(struct hns_roce_buf_attr *attr, int page_cnt,
-			   struct hns_roce_buf_region *regions, int region_cnt,
-			   unsigned int page_shift)
+static unsigned int mtr_init_region(struct hns_roce_buf_attr *attr,
+				    int page_cnt,
+				    struct hns_roce_buf_region *regions,
+				    int region_cnt, unsigned int page_shift)
 {
 	unsigned int page_size = 1 << page_shift;
 	int max_region = attr->region_count;
 	struct hns_roce_buf_region *r;
+	unsigned int i = 0;
 	int page_idx = 0;
-	int i = 0;
 
 	for (; i < region_cnt && i < max_region && page_idx < page_cnt; i++) {
 		r = &regions[i];
@@ -980,7 +980,6 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 			unsigned int page_shift, struct ib_udata *udata,
 			unsigned long user_addr)
 {
-	struct hns_roce_buf_region regions[HNS_ROCE_MAX_BT_REGION] = {};
 	struct ib_device *ibdev = &hr_dev->ib_dev;
 	dma_addr_t *pages = NULL;
 	int region_cnt = 0;
@@ -1012,18 +1011,22 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 	hns_roce_hem_list_init(&mtr->hem_list);
 	mtr->hem_cfg.is_direct = !has_mtt;
 	mtr->hem_cfg.ba_pg_shift = page_shift;
+	mtr->hem_cfg.region_count = 0;
+	region_cnt = mtr_init_region(buf_attr, all_pg_cnt,
+				     mtr->hem_cfg.region,
+				     ARRAY_SIZE(mtr->hem_cfg.region),
+				     mtr->hem_cfg.buf_pg_shift);
+	if (region_cnt < 1) {
+		err = -ENOBUFS;
+		ibdev_err(ibdev, "failed to init mtr region %d\n", region_cnt);
+		goto err_alloc_bufs;
+	}
+
+	mtr->hem_cfg.region_count = region_cnt;
+
 	if (has_mtt) {
-		region_cnt = mtr_init_region(buf_attr, all_pg_cnt,
-					     regions, ARRAY_SIZE(regions),
-					     mtr->hem_cfg.buf_pg_shift);
-		if (region_cnt < 1) {
-			err = -ENOBUFS;
-			ibdev_err(ibdev, "Failed to init mtr region %d\n",
-				  region_cnt);
-			goto err_alloc_bufs;
-		}
 		err = hns_roce_hem_list_request(hr_dev, &mtr->hem_list,
-						regions, region_cnt,
+						mtr->hem_cfg.region, region_cnt,
 						page_shift);
 		if (err) {
 			ibdev_err(ibdev, "Failed to request mtr hem, err %d\n",
@@ -1059,8 +1062,7 @@ int hns_roce_mtr_create(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 		mtr->hem_cfg.root_ba = pages[0];
 	} else {
 		/* write buffer's dma address to BA table */
-		err = hns_roce_mtr_map(hr_dev, mtr, regions, region_cnt, pages,
-				       all_pg_cnt);
+		err = hns_roce_mtr_map(hr_dev, mtr, pages, all_pg_cnt);
 		if (err) {
 			ibdev_err(ibdev, "Failed to map mtr pages, err %d\n",
 				  err);
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
  2020-05-20 13:53 ` [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag Weihang Li
@ 2020-05-25 17:06   ` Jason Gunthorpe
  2020-05-26  2:57     ` liweihang
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-05-25 17:06 UTC (permalink / raw)
  To: Weihang Li; +Cc: dledford, leon, linux-rdma, linuxarm

On Wed, May 20, 2020 at 09:53:12PM +0800, Weihang Li wrote:
> +	roce_set_bit(cq_context->byte_44_db_record,
> +		     V2_CQC_BYTE_44_DB_RECORD_EN_S,
> +		     (hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB) ? 1 : 0);

It seems like the if expression should be inside the roce_set_bit
macro (just cast to bool) as something called 'bit' should have that
safety built in.

Also, if someone wants a project, all this endless stuff should be
using genmask and field_prep instead of this home grown stuff.
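
Something like this sketch, say (the GENMASK() layout below is made up
purely for illustration and is not the driver's actual CQC layout):

	#include <linux/types.h>
	#include <linux/bits.h>
	#include <linux/bitfield.h>

	/* 1) hide the boolean conversion inside the bit helper itself,
	 *    reusing the driver's existing roce_set_field()
	 */
	#define roce_set_bit(origin, shift, val) \
		roce_set_field((origin), 1ul << (shift), (shift), !!(val))

	/* 2) describe fields with GENMASK() and build them with FIELD_PREP() */
	#define CQC_DB_RECORD_EN	GENMASK(0, 0)	/* hypothetical field */
	#define CQC_DB_RECORD_ADDR	GENMASK(31, 1)	/* hypothetical field */

	static u32 build_cqc_byte_44(bool db_en, u32 db_addr)
	{
		return FIELD_PREP(CQC_DB_RECORD_EN, db_en) |
		       FIELD_PREP(CQC_DB_RECORD_ADDR, db_addr);
	}

As a bonus, FIELD_PREP() gives compile-time range checking of constant
values against the mask.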

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8
  2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
                   ` (8 preceding siblings ...)
  2020-05-20 13:53 ` [PATCH for-next 9/9] RDMA/hns: Optimize the usage of MTR Weihang Li
@ 2020-05-25 17:11 ` Jason Gunthorpe
  2020-05-25 17:36   ` Leon Romanovsky
  9 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-05-25 17:11 UTC (permalink / raw)
  To: Weihang Li; +Cc: dledford, leon, linux-rdma, linuxarm

On Wed, May 20, 2020 at 09:53:10PM +0800, Weihang Li wrote:
> These are some cleanups, including removing dead code, modifying
> variable/field types, and optimizing some functions.
> 
> Lang Cheng (3):
>   RDMA/hns: Let software PI/CI grow naturally
>   RDMA/hns: Add CQ flag instead of independent enable flag
>   RDMA/hns: Optimize post and poll process
> 
> Weihang Li (2):
>   RDMA/hns: Change all page_shift to unsigned
>   RDMA/hns: Change variables representing quantity to unsigned
> 
> Xi Wang (3):
>   RDMA/hns: Rename QP buffer related function
>   RDMA/hns: Refactor the QP context filling process related to WQE
>     buffer configure
>   RDMA/hns: Optimize the usage of MTR
> 
> Yangyang Li (1):
>   RDMA/hns: Remove unused code about assert

I'm going to take these anyhow, the field macros could be improved
someday if someone wanted. Applied to for-next

I have to say the patches coming lately from hns have been following
the kernel style and protocols much better. I'm also having a much
easier time understanding the commit message and what the patch is
trying to do. Keep it up!

Thanks,
Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8
  2020-05-25 17:11 ` [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Jason Gunthorpe
@ 2020-05-25 17:36   ` Leon Romanovsky
  2020-05-26  3:13     ` liweihang
  0 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2020-05-25 17:36 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Weihang Li, dledford, linux-rdma, linuxarm

On Mon, May 25, 2020 at 02:11:16PM -0300, Jason Gunthorpe wrote:
> On Wed, May 20, 2020 at 09:53:10PM +0800, Weihang Li wrote:
> > These are some cleanups, including removing dead code, modifying
> > variable/field types, and optimizing some functions.
> >
> > Lang Cheng (3):
> >   RDMA/hns: Let software PI/CI grow naturally
> >   RDMA/hns: Add CQ flag instead of independent enable flag
> >   RDMA/hns: Optimize post and poll process
> >
> > Weihang Li (2):
> >   RDMA/hns: Change all page_shift to unsigned
> >   RDMA/hns: Change variables representing quantity to unsigned
> >
> > Xi Wang (3):
> >   RDMA/hns: Rename QP buffer related function
> >   RDMA/hns: Refactor the QP context filling process related to WQE
> >     buffer configure
> >   RDMA/hns: Optimize the usage of MTR
> >
> > Yangyang Li (1):
> >   RDMA/hns: Remove unused code about assert
>
> I'm going to take these anyhow, the field macros could be improved
> someday if someone wanted. Applied to for-next
>
> I have to say the patches coming lately from hns have been following
> the kernel style and protocols much better. I'm also having a much
> easier time understanding the commit message and what the patch is
> trying to do. Keep it up!

+1, I have same feeling.

Thanks

>
> Thanks,
> Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
  2020-05-25 17:06   ` Jason Gunthorpe
@ 2020-05-26  2:57     ` liweihang
  2020-05-26 12:08       ` Jason Gunthorpe
  0 siblings, 1 reply; 17+ messages in thread
From: liweihang @ 2020-05-26  2:57 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, Linuxarm

On 2020/5/26 1:06, Jason Gunthorpe wrote:
> On Wed, May 20, 2020 at 09:53:12PM +0800, Weihang Li wrote:
>> +	roce_set_bit(cq_context->byte_44_db_record,
>> +		     V2_CQC_BYTE_44_DB_RECORD_EN_S,
>> +		     (hr_cq->flags & HNS_ROCE_CQ_FLAG_RECORD_DB) ? 1 : 0);
> 
> It seems like the if expression should be inside the roce_set_bit
> macro (just cast to bool) as something called 'bit' should have that
> safety built in.
> 

Hi Jason

Thanks for your comments, will prepare a patch to modify it.


> Also, if someone wants a project, all this endless stuff should be
> using genmask and field_prep instead of this home grown stuff.
> 
> Jason
> 

I took a look at this macro; FIELD_PREP() can indeed simplify a lot of
similar code in the hns driver. I will give it a try and maybe prepare a
patch/series to use it in v5.9.
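
For example, a single call site might go from something like

	roce_set_field(cq_context->byte_44_db_record,
		       V2_CQC_BYTE_44_DB_RECORD_ADDR_M,
		       V2_CQC_BYTE_44_DB_RECORD_ADDR_S, addr);

to (sketch only, assuming the _M/_S pair were replaced by a hypothetical
single define such as CQC_DB_RECORD_ADDR built with GENMASK()):

	cq_context->byte_44_db_record &= ~cpu_to_le32(CQC_DB_RECORD_ADDR);
	cq_context->byte_44_db_record |=
		cpu_to_le32(FIELD_PREP(CQC_DB_RECORD_ADDR, addr));

with 'addr' standing in for whatever value is written there today.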

Weihang

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8
  2020-05-25 17:36   ` Leon Romanovsky
@ 2020-05-26  3:13     ` liweihang
  0 siblings, 0 replies; 17+ messages in thread
From: liweihang @ 2020-05-26  3:13 UTC (permalink / raw)
  To: Leon Romanovsky, Jason Gunthorpe; +Cc: dledford, linux-rdma, Linuxarm

On 2020/5/26 1:36, Leon Romanovsky wrote:
> On Mon, May 25, 2020 at 02:11:16PM -0300, Jason Gunthorpe wrote:
>> On Wed, May 20, 2020 at 09:53:10PM +0800, Weihang Li wrote:
>>> These are some cleanups, including removing dead code, modifying
>>> variable/field types, and optimizing some functions.
>>>
>>> Lang Cheng (3):
>>>   RDMA/hns: Let software PI/CI grow naturally
>>>   RDMA/hns: Add CQ flag instead of independent enable flag
>>>   RDMA/hns: Optimize post and poll process
>>>
>>> Weihang Li (2):
>>>   RDMA/hns: Change all page_shift to unsigned
>>>   RDMA/hns: Change variables representing quantity to unsigned
>>>
>>> Xi Wang (3):
>>>   RDMA/hns: Rename QP buffer related function
>>>   RDMA/hns: Refactor the QP context filling process related to WQE
>>>     buffer configure
>>>   RDMA/hns: Optimize the usage of MTR
>>>
>>> Yangyang Li (1):
>>>   RDMA/hns: Remove unused code about assert
>>
>> I'm going to take these anyhow, the field macros could be improved
>> someday if someone wanted. Applied to for-next
>>
>> I have to say the patches coming lately from hns have been following
>> the kernel style and protocols much better. I'm also having a much
>> easier time understanding the commit message and what the patch is
>> trying to do. Keep it up!
> 
> +1, I have same feeling.
> 
> Thanks
> 
>>
>> Thanks,
>> Jason
> 

Thanks for the encouragement and all your helpful suggestions.
We will do our best to make it better :)

Weihang

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
  2020-05-26  2:57     ` liweihang
@ 2020-05-26 12:08       ` Jason Gunthorpe
  2020-05-28  1:15         ` liweihang
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2020-05-26 12:08 UTC (permalink / raw)
  To: liweihang; +Cc: dledford, leon, linux-rdma, Linuxarm

On Tue, May 26, 2020 at 02:57:39AM +0000, liweihang wrote:
> > Also, if someone wants a project, all this endless stuff should be
> > using genmask and field_prep instead of this home grown stuff.
> > 
> I took a look at this macro; FIELD_PREP() can indeed simplify a lot of
> similar code in the hns driver. I will give it a try and maybe prepare a
> patch/series to use it in v5.9.

If you look in the git history you can find some Coccinelle spatch
stuff that makes work like this not too hard

eg 91b60a7128d96244794beb9b324eb39273872da2 did something similar for
the IBA CM MADs

Jason

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag
  2020-05-26 12:08       ` Jason Gunthorpe
@ 2020-05-28  1:15         ` liweihang
  0 siblings, 0 replies; 17+ messages in thread
From: liweihang @ 2020-05-28  1:15 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: dledford, leon, linux-rdma, Linuxarm

On 2020/5/26 20:08, Jason Gunthorpe wrote:
> On Tue, May 26, 2020 at 02:57:39AM +0000, liweihang wrote:
>>> Also, if someone wants a project, all this endless stuff should be
>>> using genmask and field_prep instead of this home grown stuff.
>>>
>> I took a look at this macro; FIELD_PREP() can indeed simplify a lot of
>> similar code in the hns driver. I will give it a try and maybe prepare a
>> patch/series to use it in v5.9.
> 
> If you look in the git history you can find some Coccinelle spatch
> stuff that makes work like this not too hard
> 
> eg 91b60a7128d96244794beb9b324eb39273872da2 did something similar for
> the IBA CM MADs
> 
> Jason
> 

I see, thanks for the commit-id.

Weihang

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-05-28  1:15 UTC | newest]

Thread overview: 17+ messages
2020-05-20 13:53 [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Weihang Li
2020-05-20 13:53 ` [PATCH for-next 1/9] RDMA/hns: Let software PI/CI grow naturally Weihang Li
2020-05-20 13:53 ` [PATCH for-next 2/9] RDMA/hns: Add CQ flag instead of independent enable flag Weihang Li
2020-05-25 17:06   ` Jason Gunthorpe
2020-05-26  2:57     ` liweihang
2020-05-26 12:08       ` Jason Gunthorpe
2020-05-28  1:15         ` liweihang
2020-05-20 13:53 ` [PATCH for-next 3/9] RDMA/hns: Optimize post and poll process Weihang Li
2020-05-20 13:53 ` [PATCH for-next 4/9] RDMA/hns: Remove unused code about assert Weihang Li
2020-05-20 13:53 ` [PATCH for-next 5/9] RDMA/hns: Rename QP buffer related function Weihang Li
2020-05-20 13:53 ` [PATCH for-next 6/9] RDMA/hns: Change all page_shift to unsigned Weihang Li
2020-05-20 13:53 ` [PATCH for-next 7/9] RDMA/hns: Change variables representing quantity " Weihang Li
2020-05-20 13:53 ` [PATCH for-next 8/9] RDMA/hns: Refactor the QP context filling process related to WQE buffer configure Weihang Li
2020-05-20 13:53 ` [PATCH for-next 9/9] RDMA/hns: Optimize the usage of MTR Weihang Li
2020-05-25 17:11 ` [PATCH for-next 0/9] RDMA/hns: Cleanups for 5.8 Jason Gunthorpe
2020-05-25 17:36   ` Leon Romanovsky
2020-05-26  3:13     ` liweihang
