* [PATCH for-next 00/13] Updates for 5.3-rc2
@ 2019-07-30  8:56 Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode Lijun Ou
                   ` (13 more replies)
  0 siblings, 14 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

Here are some updates for the hns driver based on 5.3-rc2, mainly
covering code optimization and comment style cleanup.

Lang Cheng (6):
  RDMA/hns: Clean up unnecessary initial assignment
  RDMA/hns: Update some comments style
  RDMA/hns: Handling the error return value of hem function
  RDMA/hns: Split bool statement and assign statement
  RDMA/hns: Refactor irq request code
  RDMA/hns: Remove unnecessary kzalloc

Lijun Ou (2):
  RDMA/hns: Encapsulate some lines for setting sq size in user mode
  RDMA/hns: Optimize hns_roce_modify_qp function

Weihang Li (2):
  RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int()
  RDMA/hns: Disable alw_lcl_lpbk of SSU

Yangyang Li (1):
  RDMA/hns: Refactor hns_roce_v2_set_hem for hip08

Yixian Liu (2):
  RDMA/hns: Update the prompt message for creating and destroying qp
  RDMA/hns: Remove unnecessary init for cmq reg

 drivers/infiniband/hw/hns/hns_roce_device.h |  65 +++++----
 drivers/infiniband/hw/hns/hns_roce_hem.c    |  15 +-
 drivers/infiniband/hw/hns/hns_roce_hem.h    |   6 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 210 ++++++++++++++--------------
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |   2 -
 drivers/infiniband/hw/hns/hns_roce_mr.c     |   1 -
 drivers/infiniband/hw/hns/hns_roce_qp.c     | 150 +++++++++++++-------
 7 files changed, 244 insertions(+), 205 deletions(-)

-- 
1.9.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30 11:16   ` Gal Pressman
  2019-07-30  8:56 ` [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function Lijun Ou
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

The SQ size needs to be checked for integrity when the related SQ
parameters are configured. Move the related code into a separate
function.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_qp.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 9c272c2..35ef7e2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -324,16 +324,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
 	return 0;
 }
 
-static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
-				     struct ib_qp_cap *cap,
-				     struct hns_roce_qp *hr_qp,
-				     struct hns_roce_ib_create_qp *ucmd)
+static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
+					struct ib_qp_cap *cap,
+					struct hns_roce_ib_create_qp *ucmd)
 {
 	u32 roundup_sq_stride = roundup_pow_of_two(hr_dev->caps.max_sq_desc_sz);
 	u8 max_sq_stride = ilog2(roundup_sq_stride);
-	u32 ex_sge_num;
-	u32 page_size;
-	u32 max_cnt;
 
 	/* Sanity check SQ size before proceeding */
 	if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes ||
@@ -349,6 +345,25 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
 		return -EINVAL;
 	}
 
+	return 0;
+}
+
+static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
+				     struct ib_qp_cap *cap,
+				     struct hns_roce_qp *hr_qp,
+				     struct hns_roce_ib_create_qp *ucmd)
+{
+	u32 ex_sge_num;
+	u32 page_size;
+	u32 max_cnt;
+	int ret;
+
+	ret = check_sq_size_with_integrity(hr_dev, cap, ucmd);
+	if (ret) {
+		dev_err(hr_dev->dev, "Sanity check sq size fail\n");
+		return ret;
+	}
+
 	hr_qp->sq.wqe_cnt = 1 << ucmd->log_sq_bb_count;
 	hr_qp->sq.wqe_shift = ucmd->log_sq_stride;
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30 11:19   ` Gal Pressman
  2019-07-30  8:56 ` [PATCH for-next 03/13] RDMA/hns: Update the prompt message for creating and destroying qp Lijun Ou
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

Package some of the code into new functions in order to reduce code
complexity.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_qp.c | 118 +++++++++++++++++++-------------
 1 file changed, 72 insertions(+), 46 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 35ef7e2..8b2d10f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1070,6 +1070,76 @@ int to_hr_qp_type(int qp_type)
 	return transport_type;
 }
 
+static int check_mtu_validate(struct hns_roce_dev *hr_dev,
+                             struct hns_roce_qp *hr_qp,
+                             struct ib_qp_attr *attr, int attr_mask)
+{
+       struct device *dev = hr_dev->dev;
+       enum ib_mtu active_mtu;
+       int p;
+
+       p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
+           active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
+
+       if ((hr_dev->caps.max_mtu >= IB_MTU_2048 &&
+            attr->path_mtu > hr_dev->caps.max_mtu) ||
+            attr->path_mtu < IB_MTU_256 || attr->path_mtu > active_mtu) {
+               dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
+                       attr->path_mtu);
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+                                 int attr_mask)
+{
+       struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+       struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+       struct device *dev = hr_dev->dev;
+       int ret = 0;
+       int p;
+
+       if ((attr_mask & IB_QP_PORT) &&
+           (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
+               dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
+                       attr->port_num);
+               return -EINVAL;
+       }
+
+       if (attr_mask & IB_QP_PKEY_INDEX) {
+               p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
+               if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
+                       dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
+                               attr->pkey_index);
+                       return -EINVAL;
+               }
+       }
+
+       if (attr_mask & IB_QP_PATH_MTU) {
+               ret = check_mtu_validate(hr_dev, hr_qp, attr, attr_mask);
+               if (ret)
+                       return ret;
+       }
+
+       if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
+           attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
+               dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
+                       attr->max_rd_atomic);
+               return -EINVAL;
+       }
+
+       if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
+           attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
+               dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
+                       attr->max_dest_rd_atomic);
+               return -EINVAL;
+       }
+
+       return ret;
+}
+
 int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 		       int attr_mask, struct ib_udata *udata)
 {
@@ -1078,8 +1148,6 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 	enum ib_qp_state cur_state, new_state;
 	struct device *dev = hr_dev->dev;
 	int ret = -EINVAL;
-	int p;
-	enum ib_mtu active_mtu;
 
 	mutex_lock(&hr_qp->mutex);
 
@@ -1107,51 +1175,9 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 		goto out;
 	}
 
-	if ((attr_mask & IB_QP_PORT) &&
-	    (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
-		dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
-			attr->port_num);
-		goto out;
-	}
-
-	if (attr_mask & IB_QP_PKEY_INDEX) {
-		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
-		if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
-			dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
-				attr->pkey_index);
-			goto out;
-		}
-	}
-
-	if (attr_mask & IB_QP_PATH_MTU) {
-		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
-		active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
-
-		if ((hr_dev->caps.max_mtu == IB_MTU_4096 &&
-		    attr->path_mtu > IB_MTU_4096) ||
-		    (hr_dev->caps.max_mtu == IB_MTU_2048 &&
-		    attr->path_mtu > IB_MTU_2048) ||
-		    attr->path_mtu < IB_MTU_256 ||
-		    attr->path_mtu > active_mtu) {
-			dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
-				attr->path_mtu);
-			goto out;
-		}
-	}
-
-	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
-	    attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
-		dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
-			attr->max_rd_atomic);
-		goto out;
-	}
-
-	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
-	    attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
-		dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
-			attr->max_dest_rd_atomic);
+	ret = hns_roce_check_qp_attr(ibqp, attr, attr_mask);
+	if (ret)
 		goto out;
-	}
 
 	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
 		if (hr_dev->caps.min_wqes) {
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 03/13] RDMA/hns: Update the prompt message for creating and destroying qp
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 04/13] RDMA/hns: Remove unnecessary init for cmq reg Lijun Ou
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Yixian Liu <liuyixian@huawei.com>

The current prompt message is incorrect when destroying a qp. Also add
qpn information to the message printed when creating a qp.

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 +++---
 drivers/infiniband/hw/hns/hns_roce_qp.c    | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 83c58be..064e56c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4571,8 +4571,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
 		ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
 					    hr_qp->state, IB_QPS_RESET);
 		if (ret) {
-			dev_err(dev, "modify QP %06lx to ERR failed.\n",
-				hr_qp->qpn);
+			dev_err(dev, "modify QP to Reset failed.\n");
 			return ret;
 		}
 	}
@@ -4641,7 +4640,8 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 
 	ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
 	if (ret) {
-		dev_err(hr_dev->dev, "Destroy qp failed(%d)\n", ret);
+		dev_err(hr_dev->dev, "Destroy qp 0x%06lx failed(%d)\n",
+			hr_qp->qpn, ret);
 		return ret;
 	}
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 8b2d10f..c88a42d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1002,7 +1002,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
 		ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata, 0,
 						hr_qp);
 		if (ret) {
-			dev_err(dev, "Create RC QP failed\n");
+			dev_err(dev, "Create RC QP 0x%06lx failed(%d)\n",
+				hr_qp->qpn, ret);
 			kfree(hr_qp);
 			return ERR_PTR(ret);
 		}
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 04/13] RDMA/hns: Remove unnecessary init for cmq reg
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (2 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 03/13] RDMA/hns: Update the prompt message for creating and destroying qp Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 05/13] RDMA/hns: Clean up unnecessary initial assignment Lijun Ou
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Yixian Liu <liuyixian@huawei.com>

There is no need to initialize the enable bit of the cmq.

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 ++----
 drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 2 --
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 064e56c..b13c68e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -887,8 +887,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type)
 		roce_write(hr_dev, ROCEE_TX_CMQ_BASEADDR_H_REG,
 			   upper_32_bits(dma));
 		roce_write(hr_dev, ROCEE_TX_CMQ_DEPTH_REG,
-			  (ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S) |
-			   HNS_ROCE_CMQ_ENABLE);
+			   ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
 		roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0);
 		roce_write(hr_dev, ROCEE_TX_CMQ_TAIL_REG, 0);
 	} else {
@@ -896,8 +895,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type)
 		roce_write(hr_dev, ROCEE_RX_CMQ_BASEADDR_H_REG,
 			   upper_32_bits(dma));
 		roce_write(hr_dev, ROCEE_RX_CMQ_DEPTH_REG,
-			  (ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S) |
-			   HNS_ROCE_CMQ_ENABLE);
+			   ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S);
 		roce_write(hr_dev, ROCEE_RX_CMQ_HEAD_REG, 0);
 		roce_write(hr_dev, ROCEE_RX_CMQ_TAIL_REG, 0);
 	}
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index 478f5a5..58931b5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -126,8 +126,6 @@
 #define HNS_ROCE_CMD_FLAG_ERR_INTR	BIT(HNS_ROCE_CMD_FLAG_ERR_INTR_SHIFT)
 
 #define HNS_ROCE_CMQ_DESC_NUM_S		3
-#define HNS_ROCE_CMQ_EN_B		16
-#define HNS_ROCE_CMQ_ENABLE		BIT(HNS_ROCE_CMQ_EN_B)
 
 #define HNS_ROCE_CMQ_SCC_CLR_DONE_CNT		5
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 05/13] RDMA/hns: Clean up unnecessary initial assignment
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (3 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 04/13] RDMA/hns: Remove unnecessary init for cmq reg Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 06/13] RDMA/hns: Update some comments style Lijun Ou
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

Remove some unnecessary initialization of variables.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index b13c68e..d20b92b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -239,7 +239,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 	struct device *dev = hr_dev->dev;
 	struct hns_roce_v2_db sq_db;
 	struct ib_qp_attr attr;
-	unsigned int sge_ind = 0;
+	unsigned int sge_ind;
 	unsigned int owner_bit;
 	unsigned long flags;
 	unsigned int ind;
@@ -4291,7 +4291,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 	struct hns_roce_v2_qp_context *context;
 	struct hns_roce_v2_qp_context *qpc_mask;
 	struct device *dev = hr_dev->dev;
-	int ret = -EINVAL;
+	int ret;
 
 	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
 	if (!context)
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 06/13] RDMA/hns: Update some comments style
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (4 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 05/13] RDMA/hns: Clean up unnecessary initial assignment Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 07/13] RDMA/hns: Handling the error return value of hem function Lijun Ou
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

Remove some useless comments and add the missing spaces to other
comments.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h | 65 ++++++++++++++---------------
 drivers/infiniband/hw/hns/hns_roce_hem.h    |  6 +--
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  9 ++--
 drivers/infiniband/hw/hns/hns_roce_mr.c     |  1 -
 4 files changed, 38 insertions(+), 43 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index b39497a..8c4b120 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -84,7 +84,6 @@
 #define HNS_ROCE_CEQ_ENTRY_SIZE			0x4
 #define HNS_ROCE_AEQ_ENTRY_SIZE			0x10
 
-/* 4G/4K = 1M */
 #define HNS_ROCE_SL_SHIFT			28
 #define HNS_ROCE_TCLASS_SHIFT			20
 #define HNS_ROCE_FLOW_LABEL_MASK		0xfffff
@@ -322,7 +321,7 @@ struct hns_roce_hem_table {
 	unsigned long	num_hem;
 	/* HEM entry record obj total num */
 	unsigned long	num_obj;
-	/*Single obj size */
+	/* Single obj size */
 	unsigned long	obj_size;
 	unsigned long	table_chunk_size;
 	int		lowmem;
@@ -343,7 +342,7 @@ struct hns_roce_mtt {
 
 struct hns_roce_buf_region {
 	int offset; /* page offset */
-	u32 count; /* page count*/
+	u32 count; /* page count */
 	int hopnum; /* addressing hop num */
 };
 
@@ -384,25 +383,25 @@ struct hns_roce_mr {
 	u64			size; /* Address range of MR */
 	u32			key; /* Key of MR */
 	u32			pd;   /* PD num of MR */
-	u32			access;/* Access permission of MR */
+	u32			access;	/* Access permission of MR */
 	u32			npages;
 	int			enabled; /* MR's active status */
 	int			type;	/* MR's register type */
-	u64			*pbl_buf;/* MR's PBL space */
+	u64			*pbl_buf;	/* MR's PBL space */
 	dma_addr_t		pbl_dma_addr;	/* MR's PBL space PA */
-	u32			pbl_size;/* PA number in the PBL */
-	u64			pbl_ba;/* page table address */
-	u32			l0_chunk_last_num;/* L0 last number */
-	u32			l1_chunk_last_num;/* L1 last number */
-	u64			**pbl_bt_l2;/* PBL BT L2 */
-	u64			**pbl_bt_l1;/* PBL BT L1 */
-	u64			*pbl_bt_l0;/* PBL BT L0 */
-	dma_addr_t		*pbl_l2_dma_addr;/* PBL BT L2 dma addr */
-	dma_addr_t		*pbl_l1_dma_addr;/* PBL BT L1 dma addr */
-	dma_addr_t		pbl_l0_dma_addr;/* PBL BT L0 dma addr */
-	u32			pbl_ba_pg_sz;/* BT chunk page size */
-	u32			pbl_buf_pg_sz;/* buf chunk page size */
-	u32			pbl_hop_num;/* multi-hop number */
+	u32			pbl_size;	/* PA number in the PBL */
+	u64			pbl_ba;		/* page table address */
+	u32			l0_chunk_last_num;	/* L0 last number */
+	u32			l1_chunk_last_num;	/* L1 last number */
+	u64			**pbl_bt_l2;	/* PBL BT L2 */
+	u64			**pbl_bt_l1;	/* PBL BT L1 */
+	u64			*pbl_bt_l0;	/* PBL BT L0 */
+	dma_addr_t		*pbl_l2_dma_addr;	/* PBL BT L2 dma addr */
+	dma_addr_t		*pbl_l1_dma_addr;	/* PBL BT L1 dma addr */
+	dma_addr_t		pbl_l0_dma_addr;	/* PBL BT L0 dma addr */
+	u32			pbl_ba_pg_sz;	/* BT chunk page size */
+	u32			pbl_buf_pg_sz;	/* buf chunk page size */
+	u32			pbl_hop_num;	/* multi-hop number */
 };
 
 struct hns_roce_mr_table {
@@ -425,16 +424,16 @@ struct hns_roce_wq {
 	u32		max_post;
 	int		max_gs;
 	int		offset;
-	int		wqe_shift;/* WQE size */
+	int		wqe_shift;	/* WQE size */
 	u32		head;
 	u32		tail;
 	void __iomem	*db_reg_l;
 };
 
 struct hns_roce_sge {
-	int		sge_cnt;  /* SGE num */
+	int		sge_cnt;	/* SGE num */
 	int		offset;
-	int		sge_shift;/* SGE size */
+	int		sge_shift;	/* SGE size */
 };
 
 struct hns_roce_buf_list {
@@ -750,7 +749,7 @@ struct hns_roce_eq {
 	struct hns_roce_dev		*hr_dev;
 	void __iomem			*doorbell;
 
-	int				type_flag;/* Aeq:1 ceq:0 */
+	int				type_flag; /* Aeq:1 ceq:0 */
 	int				eqn;
 	u32				entries;
 	int				log_entries;
@@ -796,22 +795,22 @@ struct hns_roce_caps {
 	int		local_ca_ack_delay;
 	int		num_uars;
 	u32		phy_num_uars;
-	u32		max_sq_sg;	/* 2 */
-	u32		max_sq_inline;	/* 32 */
-	u32		max_rq_sg;	/* 2 */
+	u32		max_sq_sg;
+	u32		max_sq_inline;
+	u32		max_rq_sg;
 	u32		max_extend_sg;
-	int		num_qps;	/* 256k */
+	int		num_qps;
 	int             reserved_qps;
 	int		num_qpc_timer;
 	int		num_cqc_timer;
 	u32		max_srq_sg;
 	int		num_srqs;
-	u32		max_wqes;	/* 16k */
+	u32		max_wqes;
 	u32		max_srqs;
 	u32		max_srq_wrs;
 	u32		max_srq_sges;
-	u32		max_sq_desc_sz;	/* 64 */
-	u32		max_rq_desc_sz;	/* 64 */
+	u32		max_sq_desc_sz;
+	u32		max_rq_desc_sz;
 	u32		max_srq_desc_sz;
 	int		max_qp_init_rdma;
 	int		max_qp_dest_rdma;
@@ -822,7 +821,7 @@ struct hns_roce_caps {
 	int		reserved_cqs;
 	int		reserved_srqs;
 	u32		max_srqwqes;
-	int		num_aeq_vectors;	/* 1 */
+	int		num_aeq_vectors;
 	int		num_comp_vectors;
 	int		num_other_vectors;
 	int		num_mtpts;
@@ -903,7 +902,7 @@ struct hns_roce_caps {
 	u32		sl_num;
 	u32		tsq_buf_pg_sz;
 	u32		tpq_buf_pg_sz;
-	u32		chunk_sz;	/* chunk size in non multihop mode*/
+	u32		chunk_sz;	/* chunk size in non multihop mode */
 	u64		flags;
 };
 
@@ -1043,8 +1042,8 @@ struct hns_roce_dev {
 	int			loop_idc;
 	u32			sdb_offset;
 	u32			odb_offset;
-	dma_addr_t		tptr_dma_addr; /*only for hw v1*/
-	u32			tptr_size; /*only for hw v1*/
+	dma_addr_t		tptr_dma_addr;	/* only for hw v1 */
+	u32			tptr_size;	/* only for hw v1 */
 	const struct hns_roce_hw *hw;
 	void			*priv;
 	struct workqueue_struct *irq_workq;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h
index f1ccb8f..8678327 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.h
@@ -102,9 +102,9 @@ struct hns_roce_hem_mhop {
 	u32	buf_chunk_size;
 	u32	bt_chunk_size;
 	u32	ba_l0_num;
-	u32	l0_idx;/* level 0 base address table index */
-	u32	l1_idx;/* level 1 base address table index */
-	u32	l2_idx;/* level 2 base address table index */
+	u32	l0_idx; /* level 0 base address table index */
+	u32	l1_idx; /* level 1 base address table index */
+	u32	l2_idx; /* level 2 base address table index */
 };
 
 void hns_roce_free_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem *hem);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index d20b92b..fc4c1d6 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1886,7 +1886,7 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
 		goto err_tpq_init_failed;
 	}
 
-	/* Alloc memory for QPC Timer buffer space chunk*/
+	/* Alloc memory for QPC Timer buffer space chunk */
 	for (qpc_count = 0; qpc_count < hr_dev->caps.qpc_timer_bt_num;
 	     qpc_count++) {
 		ret = hns_roce_table_get(hr_dev, &hr_dev->qpc_timer_table,
@@ -1897,7 +1897,7 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
 		}
 	}
 
-	/* Alloc memory for CQC Timer buffer space chunk*/
+	/* Alloc memory for CQC Timer buffer space chunk */
 	for (cqc_count = 0; cqc_count < hr_dev->caps.cqc_timer_bt_num;
 	     cqc_count++) {
 		ret = hns_roce_table_get(hr_dev, &hr_dev->cqc_timer_table,
@@ -5236,14 +5236,12 @@ static void hns_roce_mhop_free_eq(struct hns_roce_dev *hr_dev,
 	buf_chk_sz = 1 << (hr_dev->caps.eqe_buf_pg_sz + PAGE_SHIFT);
 	bt_chk_sz = 1 << (hr_dev->caps.eqe_ba_pg_sz + PAGE_SHIFT);
 
-	/* hop_num = 0 */
 	if (mhop_num == HNS_ROCE_HOP_NUM_0) {
 		dma_free_coherent(dev, (unsigned int)(eq->entries *
 				  eq->eqe_size), eq->bt_l0, eq->l0_dma);
 		return;
 	}
 
-	/* hop_num = 1 or hop = 2 */
 	dma_free_coherent(dev, bt_chk_sz, eq->bt_l0, eq->l0_dma);
 	if (mhop_num == 1) {
 		for (i = 0; i < eq->l0_last_num; i++) {
@@ -5483,7 +5481,6 @@ static int hns_roce_mhop_alloc_eq(struct hns_roce_dev *hr_dev,
 			      buf_chk_sz);
 	bt_num = DIV_ROUND_UP(ba_num, bt_chk_sz / BA_BYTE_LEN);
 
-	/* hop_num = 0 */
 	if (mhop_num == HNS_ROCE_HOP_NUM_0) {
 		if (eq->entries > buf_chk_sz / eq->eqe_size) {
 			dev_err(dev, "eq entries %d is larger than buf_pg_sz!",
@@ -5749,7 +5746,7 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num,
 		}
 	}
 
-	/* irq contains: abnormal + AEQ + CEQ*/
+	/* irq contains: abnormal + AEQ + CEQ */
 	for (j = 0; j < irq_num; j++)
 		if (j < other_num)
 			snprintf((char *)hr_dev->irq_names[j],
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 0cfa946..8157679 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -517,7 +517,6 @@ static int hns_roce_mhop_alloc(struct hns_roce_dev *hr_dev, int npages,
 	if (mhop_num == HNS_ROCE_HOP_NUM_0)
 		return 0;
 
-	/* hop_num = 1 */
 	if (mhop_num == 1)
 		return pbl_1hop_alloc(hr_dev, npages, mr, pbl_bt_sz);
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 07/13] RDMA/hns: Handling the error return value of hem function
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (5 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 06/13] RDMA/hns: Update some comments style Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 08/13] RDMA/hns: Split bool statement and assign statement Lijun Ou
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

Handle the error return value of hns_roce_calc_hem_mhop().

Signed-off-by: Lang Cheng <chenglang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hem.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index d3e72a0..0268c7a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -830,7 +830,8 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
 	} else {
 		u32 seg_size = 64; /* 8 bytes per BA and 8 BA per segment */
 
-		hns_roce_calc_hem_mhop(hr_dev, table, &mhop_obj, &mhop);
+		if (hns_roce_calc_hem_mhop(hr_dev, table, &mhop_obj, &mhop))
+			goto out;
 		/* mtt mhop */
 		i = mhop.l0_idx;
 		j = mhop.l1_idx;
@@ -879,11 +880,13 @@ int hns_roce_table_get_range(struct hns_roce_dev *hr_dev,
 {
 	struct hns_roce_hem_mhop mhop;
 	unsigned long inc = table->table_chunk_size / table->obj_size;
-	unsigned long i;
+	unsigned long i = 0;
 	int ret;
 
 	if (hns_roce_check_whether_mhop(hr_dev, table->type)) {
-		hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop);
+		ret = hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop);
+		if (ret)
+			goto fail;
 		inc = mhop.bt_chunk_size / table->obj_size;
 	}
 
@@ -913,7 +916,8 @@ void hns_roce_table_put_range(struct hns_roce_dev *hr_dev,
 	unsigned long i;
 
 	if (hns_roce_check_whether_mhop(hr_dev, table->type)) {
-		hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop);
+		if (hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop))
+			return;
 		inc = mhop.bt_chunk_size / table->obj_size;
 	}
 
@@ -1035,7 +1039,8 @@ static void hns_roce_cleanup_mhop_hem_table(struct hns_roce_dev *hr_dev,
 	int i;
 	u64 obj;
 
-	hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop);
+	if (hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop))
+		return;
 	buf_chunk_size = table->type < HEM_TYPE_MTT ? mhop.buf_chunk_size :
 					mhop.bt_chunk_size;
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 08/13] RDMA/hns: Split bool statement and assign statement
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (6 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 07/13] RDMA/hns: Handling the error return value of hem function Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 09/13] RDMA/hns: Refactor irq request code Lijun Ou
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

An assignment should not be embedded in a bool expression or a
function parameter, so split it into a separate statement.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index fc4c1d6..85e0687a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4938,7 +4938,7 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
 			       struct hns_roce_eq *eq)
 {
 	struct device *dev = hr_dev->dev;
-	struct hns_roce_aeqe *aeqe;
+	struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq);
 	int aeqe_found = 0;
 	int event_type;
 	int sub_type;
@@ -4946,8 +4946,7 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
 	u32 qpn;
 	u32 cqn;
 
-	while ((aeqe = next_aeqe_sw_v2(eq))) {
-
+	while (aeqe) {
 		/* Make sure we read AEQ entry after we have checked the
 		 * ownership bit
 		 */
@@ -5016,6 +5015,8 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
 			eq->cons_index = 0;
 		}
 		hns_roce_v2_init_irq_work(hr_dev, eq, qpn, cqn);
+
+		aeqe = next_aeqe_sw_v2(eq);
 	}
 
 	set_eq_cons_index_v2(eq);
@@ -5068,12 +5069,11 @@ static int hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev,
 			       struct hns_roce_eq *eq)
 {
 	struct device *dev = hr_dev->dev;
-	struct hns_roce_ceqe *ceqe;
+	struct hns_roce_ceqe *ceqe = next_ceqe_sw_v2(eq);
 	int ceqe_found = 0;
 	u32 cqn;
 
-	while ((ceqe = next_ceqe_sw_v2(eq))) {
-
+	while (ceqe) {
 		/* Make sure we read CEQ entry after we have checked the
 		 * ownership bit
 		 */
@@ -5092,6 +5092,8 @@ static int hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev,
 			dev_warn(dev, "cons_index overflow, set back to 0.\n");
 			eq->cons_index = 0;
 		}
+
+		ceqe = next_ceqe_sw_v2(eq);
 	}
 
 	set_eq_cons_index_v2(eq);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 09/13] RDMA/hns: Refactor irq request code
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (7 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 08/13] RDMA/hns: Split bool statement and assign statement Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc Lijun Ou
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

Remove the unnecessary if...else chain to make the code simpler.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 85e0687a..1186e34 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5749,18 +5749,19 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num,
 	}
 
 	/* irq contains: abnormal + AEQ + CEQ */
-	for (j = 0; j < irq_num; j++)
-		if (j < other_num)
-			snprintf((char *)hr_dev->irq_names[j],
-				 HNS_ROCE_INT_NAME_LEN, "hns-abn-%d", j);
-		else if (j < (other_num + aeq_num))
-			snprintf((char *)hr_dev->irq_names[j],
-				 HNS_ROCE_INT_NAME_LEN, "hns-aeq-%d",
-				 j - other_num);
-		else
-			snprintf((char *)hr_dev->irq_names[j],
-				 HNS_ROCE_INT_NAME_LEN, "hns-ceq-%d",
-				 j - other_num - aeq_num);
+	for (j = 0; j < other_num; j++)
+		snprintf((char *)hr_dev->irq_names[j],
+			 HNS_ROCE_INT_NAME_LEN, "hns-abn-%d", j);
+
+	for (j = other_num; j < (other_num + aeq_num); j++)
+		snprintf((char *)hr_dev->irq_names[j],
+			 HNS_ROCE_INT_NAME_LEN, "hns-aeq-%d",
+			 j - other_num);
+
+	for (j = (other_num + aeq_num); j < irq_num; j++)
+		snprintf((char *)hr_dev->irq_names[j],
+			 HNS_ROCE_INT_NAME_LEN, "hns-ceq-%d",
+			 j - other_num - aeq_num);
 
 	for (j = 0; j < irq_num; j++) {
 		if (j < other_num)
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (8 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 09/13] RDMA/hns: Refactor irq request code Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30 13:40   ` Leon Romanovsky
  2019-07-30  8:56 ` [PATCH for-next 11/13] RDMA/hns: Refactor hns_roce_v2_set_hem for hip08 Lijun Ou
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Lang Cheng <chenglang@huawei.com>

In hns_roce_v2_query_qp() and hns_roce_v2_modify_qp(), the qp context
data can be built on the stack instead of being allocated with
kzalloc(). This makes the code simpler.

Signed-off-by: Lang Cheng <chenglang@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
 1 file changed, 27 insertions(+), 37 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 1186e34..07ddfae 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
-	struct hns_roce_v2_qp_context *context;
-	struct hns_roce_v2_qp_context *qpc_mask;
+	struct hns_roce_v2_qp_context ctx[2];
+	struct hns_roce_v2_qp_context *context = ctx;
+	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
 	struct device *dev = hr_dev->dev;
 	int ret;
 
-	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
-	if (!context)
-		return -ENOMEM;
-
-	qpc_mask = context + 1;
 	/*
 	 * In v2 engine, software pass context and context mask to hardware
 	 * when modifying qp. If software need modify some fields in context,
 	 * we should set all bits of the relevant fields in context mask to
 	 * 0 at the same time, else set them to 0x1.
 	 */
+	memset(context, 0, sizeof(*context));
 	memset(qpc_mask, 0xff, sizeof(*qpc_mask));
 	ret = hns_roce_v2_set_abs_fields(ibqp, attr, attr_mask, cur_state,
 					 new_state, context, qpc_mask);
@@ -4349,8 +4346,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 		       V2_QPC_BYTE_60_QP_ST_S, 0);
 
 	/* SW pass context to HW */
-	ret = hns_roce_v2_qp_modify(hr_dev, cur_state, new_state,
-				    context, hr_qp);
+	ret = hns_roce_v2_qp_modify(hr_dev, cur_state, new_state, ctx, hr_qp);
 	if (ret) {
 		dev_err(dev, "hns_roce_qp_modify failed(%d)\n", ret);
 		goto out;
@@ -4378,7 +4374,6 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 	}
 
 out:
-	kfree(context);
 	return ret;
 }
 
@@ -4429,16 +4424,12 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
-	struct hns_roce_v2_qp_context *context;
+	struct hns_roce_v2_qp_context context = {};
 	struct device *dev = hr_dev->dev;
 	int tmp_qp_state;
 	int state;
 	int ret;
 
-	context = kzalloc(sizeof(*context), GFP_KERNEL);
-	if (!context)
-		return -ENOMEM;
-
 	memset(qp_attr, 0, sizeof(*qp_attr));
 	memset(qp_init_attr, 0, sizeof(*qp_init_attr));
 
@@ -4450,14 +4441,14 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 		goto done;
 	}
 
-	ret = hns_roce_v2_query_qpc(hr_dev, hr_qp, context);
+	ret = hns_roce_v2_query_qpc(hr_dev, hr_qp, &context);
 	if (ret) {
 		dev_err(dev, "query qpc error\n");
 		ret = -EINVAL;
 		goto out;
 	}
 
-	state = roce_get_field(context->byte_60_qpst_tempid,
+	state = roce_get_field(context.byte_60_qpst_tempid,
 			       V2_QPC_BYTE_60_QP_ST_M, V2_QPC_BYTE_60_QP_ST_S);
 	tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
 	if (tmp_qp_state == -1) {
@@ -4467,7 +4458,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	}
 	hr_qp->state = (u8)tmp_qp_state;
 	qp_attr->qp_state = (enum ib_qp_state)hr_qp->state;
-	qp_attr->path_mtu = (enum ib_mtu)roce_get_field(context->byte_24_mtu_tc,
+	qp_attr->path_mtu = (enum ib_mtu)roce_get_field(context.byte_24_mtu_tc,
 							V2_QPC_BYTE_24_MTU_M,
 							V2_QPC_BYTE_24_MTU_S);
 	qp_attr->path_mig_state = IB_MIG_ARMED;
@@ -4475,20 +4466,20 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	if (hr_qp->ibqp.qp_type == IB_QPT_UD)
 		qp_attr->qkey = V2_QKEY_VAL;
 
-	qp_attr->rq_psn = roce_get_field(context->byte_108_rx_reqepsn,
+	qp_attr->rq_psn = roce_get_field(context.byte_108_rx_reqepsn,
 					 V2_QPC_BYTE_108_RX_REQ_EPSN_M,
 					 V2_QPC_BYTE_108_RX_REQ_EPSN_S);
-	qp_attr->sq_psn = (u32)roce_get_field(context->byte_172_sq_psn,
+	qp_attr->sq_psn = (u32)roce_get_field(context.byte_172_sq_psn,
 					      V2_QPC_BYTE_172_SQ_CUR_PSN_M,
 					      V2_QPC_BYTE_172_SQ_CUR_PSN_S);
-	qp_attr->dest_qp_num = (u8)roce_get_field(context->byte_56_dqpn_err,
+	qp_attr->dest_qp_num = (u8)roce_get_field(context.byte_56_dqpn_err,
 						  V2_QPC_BYTE_56_DQPN_M,
 						  V2_QPC_BYTE_56_DQPN_S);
-	qp_attr->qp_access_flags = ((roce_get_bit(context->byte_76_srqn_op_en,
+	qp_attr->qp_access_flags = ((roce_get_bit(context.byte_76_srqn_op_en,
 				    V2_QPC_BYTE_76_RRE_S)) << V2_QP_RWE_S) |
-				    ((roce_get_bit(context->byte_76_srqn_op_en,
+				    ((roce_get_bit(context.byte_76_srqn_op_en,
 				    V2_QPC_BYTE_76_RWE_S)) << V2_QP_RRE_S) |
-				    ((roce_get_bit(context->byte_76_srqn_op_en,
+				    ((roce_get_bit(context.byte_76_srqn_op_en,
 				    V2_QPC_BYTE_76_ATE_S)) << V2_QP_ATE_S);
 
 	if (hr_qp->ibqp.qp_type == IB_QPT_RC ||
@@ -4497,43 +4488,43 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 				rdma_ah_retrieve_grh(&qp_attr->ah_attr);
 
 		rdma_ah_set_sl(&qp_attr->ah_attr,
-			       roce_get_field(context->byte_28_at_fl,
+			       roce_get_field(context.byte_28_at_fl,
 					      V2_QPC_BYTE_28_SL_M,
 					      V2_QPC_BYTE_28_SL_S));
-		grh->flow_label = roce_get_field(context->byte_28_at_fl,
+		grh->flow_label = roce_get_field(context.byte_28_at_fl,
 						 V2_QPC_BYTE_28_FL_M,
 						 V2_QPC_BYTE_28_FL_S);
-		grh->sgid_index = roce_get_field(context->byte_20_smac_sgid_idx,
+		grh->sgid_index = roce_get_field(context.byte_20_smac_sgid_idx,
 						 V2_QPC_BYTE_20_SGID_IDX_M,
 						 V2_QPC_BYTE_20_SGID_IDX_S);
-		grh->hop_limit = roce_get_field(context->byte_24_mtu_tc,
+		grh->hop_limit = roce_get_field(context.byte_24_mtu_tc,
 						V2_QPC_BYTE_24_HOP_LIMIT_M,
 						V2_QPC_BYTE_24_HOP_LIMIT_S);
-		grh->traffic_class = roce_get_field(context->byte_24_mtu_tc,
+		grh->traffic_class = roce_get_field(context.byte_24_mtu_tc,
 						    V2_QPC_BYTE_24_TC_M,
 						    V2_QPC_BYTE_24_TC_S);
 
-		memcpy(grh->dgid.raw, context->dgid, sizeof(grh->dgid.raw));
+		memcpy(grh->dgid.raw, context.dgid, sizeof(grh->dgid.raw));
 	}
 
 	qp_attr->port_num = hr_qp->port + 1;
 	qp_attr->sq_draining = 0;
-	qp_attr->max_rd_atomic = 1 << roce_get_field(context->byte_208_irrl,
+	qp_attr->max_rd_atomic = 1 << roce_get_field(context.byte_208_irrl,
 						     V2_QPC_BYTE_208_SR_MAX_M,
 						     V2_QPC_BYTE_208_SR_MAX_S);
-	qp_attr->max_dest_rd_atomic = 1 << roce_get_field(context->byte_140_raq,
+	qp_attr->max_dest_rd_atomic = 1 << roce_get_field(context.byte_140_raq,
 						     V2_QPC_BYTE_140_RR_MAX_M,
 						     V2_QPC_BYTE_140_RR_MAX_S);
-	qp_attr->min_rnr_timer = (u8)roce_get_field(context->byte_80_rnr_rx_cqn,
+	qp_attr->min_rnr_timer = (u8)roce_get_field(context.byte_80_rnr_rx_cqn,
 						 V2_QPC_BYTE_80_MIN_RNR_TIME_M,
 						 V2_QPC_BYTE_80_MIN_RNR_TIME_S);
-	qp_attr->timeout = (u8)roce_get_field(context->byte_28_at_fl,
+	qp_attr->timeout = (u8)roce_get_field(context.byte_28_at_fl,
 					      V2_QPC_BYTE_28_AT_M,
 					      V2_QPC_BYTE_28_AT_S);
-	qp_attr->retry_cnt = roce_get_field(context->byte_212_lsn,
+	qp_attr->retry_cnt = roce_get_field(context.byte_212_lsn,
 					    V2_QPC_BYTE_212_RETRY_CNT_M,
 					    V2_QPC_BYTE_212_RETRY_CNT_S);
-	qp_attr->rnr_retry = context->rq_rnr_timer;
+	qp_attr->rnr_retry = context.rq_rnr_timer;
 
 done:
 	qp_attr->cur_qp_state = qp_attr->qp_state;
@@ -4552,7 +4543,6 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 
 out:
 	mutex_unlock(&hr_qp->mutex);
-	kfree(context);
 	return ret;
 }
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 11/13] RDMA/hns: Refactor hns_roce_v2_set_hem for hip08
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (9 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 12/13] RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int() Lijun Ou
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Yangyang Li <liyangyang20@huawei.com>

In order to reduce the complexity of hns_roce_v2_set_hem(), extract
the computation of op into a separate function.

Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 75 +++++++++++++++++-------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 07ddfae..8f4430b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -2903,11 +2903,49 @@ static int hns_roce_v2_poll_cq(struct ib_cq *ibcq, int num_entries,
 	return npolled;
 }
 
+static int get_op_for_set_hem(struct hns_roce_dev *hr_dev, u32 type,
+			      int step_idx)
+{
+	int op;
+
+	if (type == HEM_TYPE_SCCC && step_idx)
+		return -EINVAL;
+
+	switch (type) {
+	case HEM_TYPE_QPC:
+		op = HNS_ROCE_CMD_WRITE_QPC_BT0;
+		break;
+	case HEM_TYPE_MTPT:
+		op = HNS_ROCE_CMD_WRITE_MPT_BT0;
+		break;
+	case HEM_TYPE_CQC:
+		op = HNS_ROCE_CMD_WRITE_CQC_BT0;
+		break;
+	case HEM_TYPE_SRQC:
+		op = HNS_ROCE_CMD_WRITE_SRQC_BT0;
+		break;
+	case HEM_TYPE_SCCC:
+		op = HNS_ROCE_CMD_WRITE_SCCC_BT0;
+		break;
+	case HEM_TYPE_QPC_TIMER:
+		op = HNS_ROCE_CMD_WRITE_QPC_TIMER_BT0;
+		break;
+	case HEM_TYPE_CQC_TIMER:
+		op = HNS_ROCE_CMD_WRITE_CQC_TIMER_BT0;
+		break;
+	default:
+		dev_warn(hr_dev->dev,
+			 "Table %d not to be written by mailbox!\n", type);
+		return -EINVAL;
+	}
+
+	return op + step_idx;
+}
+
 static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev,
 			       struct hns_roce_hem_table *table, int obj,
 			       int step_idx)
 {
-	struct device *dev = hr_dev->dev;
 	struct hns_roce_cmd_mailbox *mailbox;
 	struct hns_roce_hem_iter iter;
 	struct hns_roce_hem_mhop mhop;
@@ -2920,7 +2958,7 @@ static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev,
 	u64 bt_ba = 0;
 	u32 chunk_ba_num;
 	u32 hop_num;
-	u16 op = 0xff;
+	int op;
 
 	if (!hns_roce_check_whether_mhop(hr_dev, table->type))
 		return 0;
@@ -2942,38 +2980,9 @@ static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev,
 		hem_idx = i;
 	}
 
-	switch (table->type) {
-	case HEM_TYPE_QPC:
-		op = HNS_ROCE_CMD_WRITE_QPC_BT0;
-		break;
-	case HEM_TYPE_MTPT:
-		op = HNS_ROCE_CMD_WRITE_MPT_BT0;
-		break;
-	case HEM_TYPE_CQC:
-		op = HNS_ROCE_CMD_WRITE_CQC_BT0;
-		break;
-	case HEM_TYPE_SRQC:
-		op = HNS_ROCE_CMD_WRITE_SRQC_BT0;
-		break;
-	case HEM_TYPE_SCCC:
-		op = HNS_ROCE_CMD_WRITE_SCCC_BT0;
-		break;
-	case HEM_TYPE_QPC_TIMER:
-		op = HNS_ROCE_CMD_WRITE_QPC_TIMER_BT0;
-		break;
-	case HEM_TYPE_CQC_TIMER:
-		op = HNS_ROCE_CMD_WRITE_CQC_TIMER_BT0;
-		break;
-	default:
-		dev_warn(dev, "Table %d not to be written by mailbox!\n",
-			 table->type);
+	op = get_op_for_set_hem(hr_dev, table->type, step_idx);
+	if (op == -EINVAL)
 		return 0;
-	}
-
-	if (table->type == HEM_TYPE_SCCC && step_idx)
-		return 0;
-
-	op += step_idx;
 
 	mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
 	if (IS_ERR(mailbox))
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 12/13] RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int()
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (10 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 11/13] RDMA/hns: Refactor hns_roce_v2_set_hem for hip08 Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30  8:56 ` [PATCH for-next 13/13] RDMA/hns: Disable alw_lcl_lpbk of SSU Lijun Ou
  2019-07-30 12:33 ` [PATCH for-next 00/13] Updates for 5.3-rc2 Dennis Dalessandro
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Weihang Li <liweihang@hisilicon.com>

There is no need to warn users when eq->cons_index overflows; just
set it back to zero.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 8f4430b..699a5b9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5009,10 +5009,9 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
 		++eq->cons_index;
 		aeqe_found = 1;
 
-		if (eq->cons_index > (2 * eq->entries - 1)) {
-			dev_warn(dev, "cons_index overflow, set back to 0.\n");
+		if (eq->cons_index > (2 * eq->entries - 1))
 			eq->cons_index = 0;
-		}
+
 		hns_roce_v2_init_irq_work(hr_dev, eq, qpn, cqn);
 
 		aeqe = next_aeqe_sw_v2(eq);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH for-next 13/13] RDMA/hns: Disable alw_lcl_lpbk of SSU
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (11 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 12/13] RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int() Lijun Ou
@ 2019-07-30  8:56 ` Lijun Ou
  2019-07-30 12:33 ` [PATCH for-next 00/13] Updates for 5.3-rc2 Dennis Dalessandro
  13 siblings, 0 replies; 26+ messages in thread
From: Lijun Ou @ 2019-07-30  8:56 UTC (permalink / raw)
  To: dledford, jgg; +Cc: leon, linux-rdma, linuxarm

From: Weihang Li <liweihang@hisilicon.com>

If alw_lcl_lpbk is enabled in promiscuous mode, a packet whose source
and destination MAC addresses are equal is handled by both the inner
and the outer loopback, which halves RoCE performance in promiscuous
mode.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 699a5b9..a9cf0c9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -1308,7 +1308,7 @@ static int hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev,
 		cpu_to_le16(HNS_ROCE_CMD_FLAG_NO_INTR | HNS_ROCE_CMD_FLAG_IN);
 	desc.flag &= cpu_to_le16(~HNS_ROCE_CMD_FLAG_WR);
 	roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LPBK_S, 1);
-	roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 1);
+	roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 0);
 	roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S, 1);
 
 	return hns_roce_cmq_send(hr_dev, &desc, 1);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode
  2019-07-30  8:56 ` [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode Lijun Ou
@ 2019-07-30 11:16   ` Gal Pressman
  2019-07-30 12:17     ` Jason Gunthorpe
  0 siblings, 1 reply; 26+ messages in thread
From: Gal Pressman @ 2019-07-30 11:16 UTC (permalink / raw)
  To: Lijun Ou, dledford, jgg; +Cc: leon, linux-rdma, linuxarm

On 30/07/2019 11:56, Lijun Ou wrote:
> The SQ size needs to be checked for integrity when the related SQ
> parameters are configured. Move the related code into a separate
> function.
> 
> Signed-off-by: Lijun Ou <oulijun@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_qp.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
> index 9c272c2..35ef7e2 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
> @@ -324,16 +324,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
>  	return 0;
>  }
>  
> -static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
> -				     struct ib_qp_cap *cap,
> -				     struct hns_roce_qp *hr_qp,
> -				     struct hns_roce_ib_create_qp *ucmd)
> +static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
> +					struct ib_qp_cap *cap,
> +					struct hns_roce_ib_create_qp *ucmd)
>  {
>  	u32 roundup_sq_stride = roundup_pow_of_two(hr_dev->caps.max_sq_desc_sz);
>  	u8 max_sq_stride = ilog2(roundup_sq_stride);
> -	u32 ex_sge_num;
> -	u32 page_size;
> -	u32 max_cnt;
>  
>  	/* Sanity check SQ size before proceeding */
>  	if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes ||
> @@ -349,6 +345,25 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
>  		return -EINVAL;
>  	}
>  
> +	return 0;
> +}
> +
> +static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
> +				     struct ib_qp_cap *cap,
> +				     struct hns_roce_qp *hr_qp,
> +				     struct hns_roce_ib_create_qp *ucmd)
> +{
> +	u32 ex_sge_num;
> +	u32 page_size;
> +	u32 max_cnt;
> +	int ret;
> +
> +	ret = check_sq_size_with_integrity(hr_dev, cap, ucmd);
> +	if (ret) {
> +		dev_err(hr_dev->dev, "Sanity check sq size fail\n");

Consider using ibdev_err, same applies for other patches.
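
For reference, ibdev_err() takes the struct ib_device rather than the
backing struct device, so the message is prefixed with the RDMA device
name and is easier to correlate with what userspace tools report. A
minimal sketch of the suggested change, assuming the usual ib_dev
member of struct hns_roce_dev:

	/* current: tagged with the underlying device */
	dev_err(hr_dev->dev, "Sanity check sq size fail\n");

	/* suggested: tagged with the RDMA device name */
	ibdev_err(&hr_dev->ib_dev, "Sanity check sq size fail\n");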

> +		return ret;
> +	}
> +
>  	hr_qp->sq.wqe_cnt = 1 << ucmd->log_sq_bb_count;
>  	hr_qp->sq.wqe_shift = ucmd->log_sq_stride;
>  
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function
  2019-07-30  8:56 ` [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function Lijun Ou
@ 2019-07-30 11:19   ` Gal Pressman
  2019-07-30 13:39     ` oulijun
  0 siblings, 1 reply; 26+ messages in thread
From: Gal Pressman @ 2019-07-30 11:19 UTC (permalink / raw)
  To: Lijun Ou, dledford, jgg; +Cc: leon, linux-rdma, linuxarm

On 30/07/2019 11:56, Lijun Ou wrote:
> Package some of the code into new functions in order to reduce code
> complexity.
> 
> Signed-off-by: Lijun Ou <oulijun@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_qp.c | 118 +++++++++++++++++++-------------
>  1 file changed, 72 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
> index 35ef7e2..8b2d10f 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
> @@ -1070,6 +1070,76 @@ int to_hr_qp_type(int qp_type)
>  	return transport_type;
>  }
>  
> +static int check_mtu_validate(struct hns_roce_dev *hr_dev,
> +                             struct hns_roce_qp *hr_qp,
> +                             struct ib_qp_attr *attr, int attr_mask)
> +{
> +       struct device *dev = hr_dev->dev;
> +       enum ib_mtu active_mtu;
> +       int p;
> +
> +       p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
> +           active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
> +
> +       if ((hr_dev->caps.max_mtu >= IB_MTU_2048 &&
> +            attr->path_mtu > hr_dev->caps.max_mtu) ||
> +            attr->path_mtu < IB_MTU_256 || attr->path_mtu > active_mtu) {
> +               dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
> +                       attr->path_mtu);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
> +                                 int attr_mask)
> +{
> +       struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> +       struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> +       struct device *dev = hr_dev->dev;
> +       int ret = 0;
> +       int p;
> +
> +       if ((attr_mask & IB_QP_PORT) &&
> +           (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
> +               dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
> +                       attr->port_num);
> +               return -EINVAL;
> +       }
> +
> +       if (attr_mask & IB_QP_PKEY_INDEX) {
> +               p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
> +               if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
> +                       dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
> +                               attr->pkey_index);
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       if (attr_mask & IB_QP_PATH_MTU) {
> +               ret = check_mtu_validate(hr_dev, hr_qp, attr, attr_mask);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
> +           attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
> +               dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
> +                       attr->max_rd_atomic);
> +               return -EINVAL;
> +       }
> +
> +       if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
> +           attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
> +               dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
> +                       attr->max_dest_rd_atomic);
> +               return -EINVAL;
> +       }
> +
> +       return ret;
> +}
> +
>  int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>  		       int attr_mask, struct ib_udata *udata)
>  {
> @@ -1078,8 +1148,6 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>  	enum ib_qp_state cur_state, new_state;
>  	struct device *dev = hr_dev->dev;
>  	int ret = -EINVAL;
> -	int p;
> -	enum ib_mtu active_mtu;
>  
>  	mutex_lock(&hr_qp->mutex);
>  
> @@ -1107,51 +1175,9 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>  		goto out;
>  	}
>  
> -	if ((attr_mask & IB_QP_PORT) &&
> -	    (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
> -		dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
> -			attr->port_num);
> -		goto out;
> -	}
> -
> -	if (attr_mask & IB_QP_PKEY_INDEX) {
> -		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
> -		if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
> -			dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
> -				attr->pkey_index);
> -			goto out;
> -		}
> -	}
> -
> -	if (attr_mask & IB_QP_PATH_MTU) {
> -		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
> -		active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
> -
> -		if ((hr_dev->caps.max_mtu == IB_MTU_4096 &&
> -		    attr->path_mtu > IB_MTU_4096) ||
> -		    (hr_dev->caps.max_mtu == IB_MTU_2048 &&
> -		    attr->path_mtu > IB_MTU_2048) ||
> -		    attr->path_mtu < IB_MTU_256 ||
> -		    attr->path_mtu > active_mtu) {
> -			dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
> -				attr->path_mtu);
> -			goto out;
> -		}
> -	}
> -
> -	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
> -	    attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
> -		dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
> -			attr->max_rd_atomic);
> -		goto out;
> -	}
> -
> -	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
> -	    attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
> -		dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
> -			attr->max_dest_rd_atomic);
> +	ret = hns_roce_check_qp_attr(ibqp, attr, attr_mask);
> +	if (ret)
>  		goto out;
> -	}
>  
>  	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
>  		if (hr_dev->caps.min_wqes) {
> 

This patch is formatted with spaces instead of tabs.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode
  2019-07-30 11:16   ` Gal Pressman
@ 2019-07-30 12:17     ` Jason Gunthorpe
  2019-07-30 13:37       ` oulijun
  0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2019-07-30 12:17 UTC (permalink / raw)
  To: Gal Pressman; +Cc: Lijun Ou, dledford, leon, linux-rdma, linuxarm

On Tue, Jul 30, 2019 at 02:16:01PM +0300, Gal Pressman wrote:
> On 30/07/2019 11:56, Lijun Ou wrote:
> > It needs to check the sq size with integrity when configures
> > the relatived parameters of sq. Here moves the relatived code
> > into a special function.
> > 
> > Signed-off-by: Lijun Ou <oulijun@huawei.com>
> >  drivers/infiniband/hw/hns/hns_roce_qp.c | 29 ++++++++++++++++++++++-------
> >  1 file changed, 22 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
> > index 9c272c2..35ef7e2 100644
> > +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
> > @@ -324,16 +324,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
> >  	return 0;
> >  }
> >  
> > -static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
> > -				     struct ib_qp_cap *cap,
> > -				     struct hns_roce_qp *hr_qp,
> > -				     struct hns_roce_ib_create_qp *ucmd)
> > +static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
> > +					struct ib_qp_cap *cap,
> > +					struct hns_roce_ib_create_qp *ucmd)
> >  {
> >  	u32 roundup_sq_stride = roundup_pow_of_two(hr_dev->caps.max_sq_desc_sz);
> >  	u8 max_sq_stride = ilog2(roundup_sq_stride);
> > -	u32 ex_sge_num;
> > -	u32 page_size;
> > -	u32 max_cnt;
> >  
> >  	/* Sanity check SQ size before proceeding */
> >  	if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes ||
> > @@ -349,6 +345,25 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
> >  		return -EINVAL;
> >  	}
> >  
> > +	return 0;
> > +}
> > +
> > +static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
> > +				     struct ib_qp_cap *cap,
> > +				     struct hns_roce_qp *hr_qp,
> > +				     struct hns_roce_ib_create_qp *ucmd)
> > +{
> > +	u32 ex_sge_num;
> > +	u32 page_size;
> > +	u32 max_cnt;
> > +	int ret;
> > +
> > +	ret = check_sq_size_with_integrity(hr_dev, cap, ucmd);
> > +	if (ret) {
> > +		dev_err(hr_dev->dev, "Sanity check sq size fail\n");
> 
> Consider using ibdev_err, same applies for other patches.

It would be good if driver authors would convert their drivers to use
the new interfaces

Jason
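
As a rough sketch, the conversion Gal suggests could look something like this in the hns driver (assuming struct hns_roce_dev embeds its IB device as ->ib_dev, as in mainline; the helper name and message text are made up for illustration):

/* Sketch only: report the SQ size check failure against the RDMA device
 * name rather than the underlying bus device.
 */
static void report_sq_size_error(struct hns_roce_dev *hr_dev)
{
	/* was: dev_err(hr_dev->dev, "Sanity check sq size fail\n"); */
	ibdev_err(&hr_dev->ib_dev, "Sanity check sq size fail\n");
}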

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 00/13] Updates for 5.3-rc2
  2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
                   ` (12 preceding siblings ...)
  2019-07-30  8:56 ` [PATCH for-next 13/13] RDMA/hns: Disable alw_lcl_lpbk of SSU Lijun Ou
@ 2019-07-30 12:33 ` Dennis Dalessandro
  2019-07-30 13:41   ` oulijun
  13 siblings, 1 reply; 26+ messages in thread
From: Dennis Dalessandro @ 2019-07-30 12:33 UTC (permalink / raw)
  To: Lijun Ou, dledford, jgg; +Cc: leon, linux-rdma, linuxarm

On 7/30/2019 4:56 AM, Lijun Ou wrote:
> Here are some updates for hns driver based 5.3-rc2, mainly
> include some codes optimization and comments style modification.
> 
> Lang Cheng (6):
>    RDMA/hns: Clean up unnecessary initial assignment
>    RDMA/hns: Update some comments style
>    RDMA/hns: Handling the error return value of hem function
>    RDMA/hns: Split bool statement and assign statement
>    RDMA/hns: Refactor irq request code
>    RDMA/hns: Remove unnecessary kzalloc
> 
> Lijun Ou (2):
>    RDMA/hns: Encapsulate some lines for setting sq size in user mode
>    RDMA/hns: Optimize hns_roce_modify_qp function
> 
> Weihang Li (2):
>    RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int()
>    RDMA/hns: Disable alw_lcl_lpbk of SSU
> 
> Yangyang Li (1):
>    RDMA/hns: Refactor hns_roce_v2_set_hem for hip08
> 
> Yixian Liu (2):
>    RDMA/hns: Update the prompt message for creating and destroy qp
>    RDMA/hns: Remove unnessary init for cmq reg
> 
>   drivers/infiniband/hw/hns/hns_roce_device.h |  65 +++++----
>   drivers/infiniband/hw/hns/hns_roce_hem.c    |  15 +-
>   drivers/infiniband/hw/hns/hns_roce_hem.h    |   6 +-
>   drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 210 ++++++++++++++--------------
>   drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |   2 -
>   drivers/infiniband/hw/hns/hns_roce_mr.c     |   1 -
>   drivers/infiniband/hw/hns/hns_roce_qp.c     | 150 +++++++++++++-------
>   7 files changed, 244 insertions(+), 205 deletions(-)
> 

A lot of this doesn't seem appropriate for an -rc, does it?

-Denny

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode
  2019-07-30 12:17     ` Jason Gunthorpe
@ 2019-07-30 13:37       ` oulijun
  0 siblings, 0 replies; 26+ messages in thread
From: oulijun @ 2019-07-30 13:37 UTC (permalink / raw)
  To: Jason Gunthorpe, Gal Pressman; +Cc: dledford, leon, linux-rdma, linuxarm

On 2019/7/30 20:17, Jason Gunthorpe wrote:
> On Tue, Jul 30, 2019 at 02:16:01PM +0300, Gal Pressman wrote:
>> On 30/07/2019 11:56, Lijun Ou wrote:
>>> It needs to check the sq size with integrity when configures
>>> the relatived parameters of sq. Here moves the relatived code
>>> into a special function.
>>>
>>> Signed-off-by: Lijun Ou <oulijun@huawei.com>
>>>  drivers/infiniband/hw/hns/hns_roce_qp.c | 29 ++++++++++++++++++++++-------
>>>  1 file changed, 22 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
>>> index 9c272c2..35ef7e2 100644
>>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
>>> @@ -324,16 +324,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
>>>  	return 0;
>>>  }
>>>  
>>> -static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
>>> -				     struct ib_qp_cap *cap,
>>> -				     struct hns_roce_qp *hr_qp,
>>> -				     struct hns_roce_ib_create_qp *ucmd)
>>> +static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev,
>>> +					struct ib_qp_cap *cap,
>>> +					struct hns_roce_ib_create_qp *ucmd)
>>>  {
>>>  	u32 roundup_sq_stride = roundup_pow_of_two(hr_dev->caps.max_sq_desc_sz);
>>>  	u8 max_sq_stride = ilog2(roundup_sq_stride);
>>> -	u32 ex_sge_num;
>>> -	u32 page_size;
>>> -	u32 max_cnt;
>>>  
>>>  	/* Sanity check SQ size before proceeding */
>>>  	if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes ||
>>> @@ -349,6 +345,25 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
>>>  		return -EINVAL;
>>>  	}
>>>  
>>> +	return 0;
>>> +}
>>> +
>>> +static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev,
>>> +				     struct ib_qp_cap *cap,
>>> +				     struct hns_roce_qp *hr_qp,
>>> +				     struct hns_roce_ib_create_qp *ucmd)
>>> +{
>>> +	u32 ex_sge_num;
>>> +	u32 page_size;
>>> +	u32 max_cnt;
>>> +	int ret;
>>> +
>>> +	ret = check_sq_size_with_integrity(hr_dev, cap, ucmd);
>>> +	if (ret) {
>>> +		dev_err(hr_dev->dev, "Sanity check sq size fail\n");
>> Consider using ibdev_err, same applies for other patches.
> It would be good if driver authors would convert their drivers to use
> the new interfaces
>
> Jason
>
> .
Yes, I am testing the effects of both according to the recommendations.
I think we should consider the replacement separately. Do you agree?





^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function
  2019-07-30 11:19   ` Gal Pressman
@ 2019-07-30 13:39     ` oulijun
  0 siblings, 0 replies; 26+ messages in thread
From: oulijun @ 2019-07-30 13:39 UTC (permalink / raw)
  To: Gal Pressman, dledford, jgg; +Cc: leon, linux-rdma, linuxarm

On 2019/7/30 19:19, Gal Pressman wrote:
> On 30/07/2019 11:56, Lijun Ou wrote:
>> Here we mainly package some code into new functions in order to
>> reduce code complexity.
>>
>> Signed-off-by: Lijun Ou <oulijun@huawei.com>
>> ---
>>  drivers/infiniband/hw/hns/hns_roce_qp.c | 118 +++++++++++++++++++-------------
>>  1 file changed, 72 insertions(+), 46 deletions(-)
>>
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> index 35ef7e2..8b2d10f 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_qp.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
>> @@ -1070,6 +1070,76 @@ int to_hr_qp_type(int qp_type)
>>  	return transport_type;
>>  }
>>  
>> +static int check_mtu_validate(struct hns_roce_dev *hr_dev,
>> +                             struct hns_roce_qp *hr_qp,
>> +                             struct ib_qp_attr *attr, int attr_mask)
>> +{
>> +       struct device *dev = hr_dev->dev;
>> +       enum ib_mtu active_mtu;
>> +       int p;
>> +
>> +       p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
>> +           active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
>> +
>> +       if ((hr_dev->caps.max_mtu >= IB_MTU_2048 &&
>> +            attr->path_mtu > hr_dev->caps.max_mtu) ||
>> +            attr->path_mtu < IB_MTU_256 || attr->path_mtu > active_mtu) {
>> +               dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
>> +                       attr->path_mtu);
>> +               return -EINVAL;
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>> +                                 int attr_mask)
>> +{
>> +       struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
>> +       struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
>> +       struct device *dev = hr_dev->dev;
>> +       int ret = 0;
>> +       int p;
>> +
>> +       if ((attr_mask & IB_QP_PORT) &&
>> +           (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
>> +               dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
>> +                       attr->port_num);
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (attr_mask & IB_QP_PKEY_INDEX) {
>> +               p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
>> +               if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
>> +                       dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
>> +                               attr->pkey_index);
>> +                       return -EINVAL;
>> +               }
>> +       }
>> +
>> +       if (attr_mask & IB_QP_PATH_MTU) {
>> +               ret = check_mtu_validate(hr_dev, hr_qp, attr, attr_mask);
>> +               if (ret)
>> +                       return ret;
>> +       }
>> +
>> +       if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
>> +           attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
>> +               dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
>> +                       attr->max_rd_atomic);
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
>> +           attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
>> +               dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
>> +                       attr->max_dest_rd_atomic);
>> +               return -EINVAL;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>>  int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>>  		       int attr_mask, struct ib_udata *udata)
>>  {
>> @@ -1078,8 +1148,6 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>>  	enum ib_qp_state cur_state, new_state;
>>  	struct device *dev = hr_dev->dev;
>>  	int ret = -EINVAL;
>> -	int p;
>> -	enum ib_mtu active_mtu;
>>  
>>  	mutex_lock(&hr_qp->mutex);
>>  
>> @@ -1107,51 +1175,9 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
>>  		goto out;
>>  	}
>>  
>> -	if ((attr_mask & IB_QP_PORT) &&
>> -	    (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) {
>> -		dev_err(dev, "attr port_num invalid.attr->port_num=%d\n",
>> -			attr->port_num);
>> -		goto out;
>> -	}
>> -
>> -	if (attr_mask & IB_QP_PKEY_INDEX) {
>> -		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
>> -		if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) {
>> -			dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n",
>> -				attr->pkey_index);
>> -			goto out;
>> -		}
>> -	}
>> -
>> -	if (attr_mask & IB_QP_PATH_MTU) {
>> -		p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port;
>> -		active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu);
>> -
>> -		if ((hr_dev->caps.max_mtu == IB_MTU_4096 &&
>> -		    attr->path_mtu > IB_MTU_4096) ||
>> -		    (hr_dev->caps.max_mtu == IB_MTU_2048 &&
>> -		    attr->path_mtu > IB_MTU_2048) ||
>> -		    attr->path_mtu < IB_MTU_256 ||
>> -		    attr->path_mtu > active_mtu) {
>> -			dev_err(dev, "attr path_mtu(%d)invalid while modify qp",
>> -				attr->path_mtu);
>> -			goto out;
>> -		}
>> -	}
>> -
>> -	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
>> -	    attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) {
>> -		dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n",
>> -			attr->max_rd_atomic);
>> -		goto out;
>> -	}
>> -
>> -	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
>> -	    attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) {
>> -		dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n",
>> -			attr->max_dest_rd_atomic);
>> +	ret = hns_roce_check_qp_attr(ibqp, attr, attr_mask);
>> +	if (ret)
>>  		goto out;
>> -	}
>>  
>>  	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
>>  		if (hr_dev->caps.min_wqes) {
>>
> This patch is formatted with spaces instead of tabs.
>
> .
Thanks. This was probably carelessness when the patch was generated from my local branch; checkpatch does not seem to have caught it.
I will fix it.
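
For reference, running scripts/checkpatch.pl over the generated patch file usually does flag this kind of whitespace with messages like:

	$ ./scripts/checkpatch.pl 0002-RDMA-hns-Optimize-hns_roce_modify_qp-function.patch
	ERROR: code indent should use tabs where possible
	WARNING: please, no spaces at the start of a line

The file name above is only an example; it may be worth double-checking how the patch was exported from the local branch.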




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
  2019-07-30  8:56 ` [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc Lijun Ou
@ 2019-07-30 13:40   ` Leon Romanovsky
  2019-07-31  2:43     ` oulijun
  0 siblings, 1 reply; 26+ messages in thread
From: Leon Romanovsky @ 2019-07-30 13:40 UTC (permalink / raw)
  To: Lijun Ou; +Cc: dledford, jgg, linux-rdma, linuxarm

On Tue, Jul 30, 2019 at 04:56:47PM +0800, Lijun Ou wrote:
> From: Lang Cheng <chenglang@huawei.com>
>
> For hns_roce_v2_query_qp and hns_roce_v2_modify_qp,
> we can use stack memory to create qp context data.
> Make the code simpler.
>
> Signed-off-by: Lang Cheng <chenglang@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
>  1 file changed, 27 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> index 1186e34..07ddfae 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
>  {
>  	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
>  	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> -	struct hns_roce_v2_qp_context *context;
> -	struct hns_roce_v2_qp_context *qpc_mask;
> +	struct hns_roce_v2_qp_context ctx[2];
> +	struct hns_roce_v2_qp_context *context = ctx;
> +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
>  	struct device *dev = hr_dev->dev;
>  	int ret;
>
> -	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
> -	if (!context)
> -		return -ENOMEM;
> -
> -	qpc_mask = context + 1;
>  	/*
>  	 * In v2 engine, software pass context and context mask to hardware
>  	 * when modifying qp. If software need modify some fields in context,
>  	 * we should set all bits of the relevant fields in context mask to
>  	 * 0 at the same time, else set them to 0x1.
>  	 */
> +	memset(context, 0, sizeof(*context));

"struct hns_roce_v2_qp_context ctx[2] = {};" will do the trick.

Thanks
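
That is, the prologue of hns_roce_v2_modify_qp() could become something like the following (same context as the hunk above; only the initialization changes):

	struct hns_roce_v2_qp_context ctx[2] = {};
	struct hns_roce_v2_qp_context *context = ctx;
	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;

	/* ctx[] is already zeroed by the initializer, so the explicit
	 * memset of the context goes away and only the mask still needs
	 * the all-ones pattern.
	 */
	memset(qpc_mask, 0xff, sizeof(*qpc_mask));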

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 00/13] Updates for 5.3-rc2
  2019-07-30 12:33 ` [PATCH for-next 00/13] Updates for 5.3-rc2 Dennis Dalessandro
@ 2019-07-30 13:41   ` oulijun
  0 siblings, 0 replies; 26+ messages in thread
From: oulijun @ 2019-07-30 13:41 UTC (permalink / raw)
  To: Dennis Dalessandro, dledford, jgg; +Cc: leon, linux-rdma, linuxarm

On 2019/7/30 20:33, Dennis Dalessandro wrote:
> On 7/30/2019 4:56 AM, Lijun Ou wrote:
>> Here are some updates for hns driver based 5.3-rc2, mainly
>> include some codes optimization and comments style modification.
>>
>> Lang Cheng (6):
>>    RDMA/hns: Clean up unnecessary initial assignment
>>    RDMA/hns: Update some comments style
>>    RDMA/hns: Handling the error return value of hem function
>>    RDMA/hns: Split bool statement and assign statement
>>    RDMA/hns: Refactor irq request code
>>    RDMA/hns: Remove unnecessary kzalloc
>>
>> Lijun Ou (2):
>>    RDMA/hns: Encapsulate some lines for setting sq size in user mode
>>    RDMA/hns: Optimize hns_roce_modify_qp function
>>
>> Weihang Li (2):
>>    RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int()
>>    RDMA/hns: Disable alw_lcl_lpbk of SSU
>>
>> Yangyang Li (1):
>>    RDMA/hns: Refactor hns_roce_v2_set_hem for hip08
>>
>> Yixian Liu (2):
>>    RDMA/hns: Update the prompt message for creating and destroy qp
>>    RDMA/hns: Remove unnessary init for cmq reg
>>
>>   drivers/infiniband/hw/hns/hns_roce_device.h |  65 +++++----
>>   drivers/infiniband/hw/hns/hns_roce_hem.c    |  15 +-
>>   drivers/infiniband/hw/hns/hns_roce_hem.h    |   6 +-
>>   drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 210 ++++++++++++++--------------
>>   drivers/infiniband/hw/hns/hns_roce_hw_v2.h  |   2 -
>>   drivers/infiniband/hw/hns/hns_roce_mr.c     |   1 -
>>   drivers/infiniband/hw/hns/hns_roce_qp.c     | 150 +++++++++++++-------
>>   7 files changed, 244 insertions(+), 205 deletions(-)
>>
>
> A lot of this doesn't seem appropriate for an -rc does it?
>
It mainly optimizes some code and style. If they were bug fixes, should we consider -rc?
> -Denny
>
> .
>



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
  2019-07-30 13:40   ` Leon Romanovsky
@ 2019-07-31  2:43     ` oulijun
  2019-07-31  7:49       ` Leon Romanovsky
  0 siblings, 1 reply; 26+ messages in thread
From: oulijun @ 2019-07-31  2:43 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: dledford, jgg, linux-rdma, linuxarm

On 2019/7/30 21:40, Leon Romanovsky wrote:
> On Tue, Jul 30, 2019 at 04:56:47PM +0800, Lijun Ou wrote:
>> From: Lang Cheng <chenglang@huawei.com>
>>
>> For hns_roce_v2_query_qp and hns_roce_v2_modify_qp,
>> we can use stack memory to create qp context data.
>> Make the code simpler.
>>
>> Signed-off-by: Lang Cheng <chenglang@huawei.com>
>> ---
>>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
>>  1 file changed, 27 insertions(+), 37 deletions(-)
>>
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
>> index 1186e34..07ddfae 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
>> @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
>>  {
>>  	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
>>  	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
>> -	struct hns_roce_v2_qp_context *context;
>> -	struct hns_roce_v2_qp_context *qpc_mask;
>> +	struct hns_roce_v2_qp_context ctx[2];
>> +	struct hns_roce_v2_qp_context *context = ctx;
>> +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
>>  	struct device *dev = hr_dev->dev;
>>  	int ret;
>>
>> -	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
>> -	if (!context)
>> -		return -ENOMEM;
>> -
>> -	qpc_mask = context + 1;
>>  	/*
>>  	 * In v2 engine, software pass context and context mask to hardware
>>  	 * when modifying qp. If software need modify some fields in context,
>>  	 * we should set all bits of the relevant fields in context mask to
>>  	 * 0 at the same time, else set them to 0x1.
>>  	 */
>> +	memset(context, 0, sizeof(*context));
> "struct hns_roce_v2_qp_context ctx[2] = {};" will do the trick.
>
> Thanks
>
> .
In this case, the mask is actually written twice. If we do this, will it bring extra overhead when modifying a qp?




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
  2019-07-31  2:43     ` oulijun
@ 2019-07-31  7:49       ` Leon Romanovsky
       [not found]         ` <fab1c105-b367-7ca7-fa2f-b46808ae1b24@huawei.com>
  0 siblings, 1 reply; 26+ messages in thread
From: Leon Romanovsky @ 2019-07-31  7:49 UTC (permalink / raw)
  To: oulijun; +Cc: dledford, jgg, linux-rdma, linuxarm

On Wed, Jul 31, 2019 at 10:43:01AM +0800, oulijun wrote:
> On 2019/7/30 21:40, Leon Romanovsky wrote:
> > On Tue, Jul 30, 2019 at 04:56:47PM +0800, Lijun Ou wrote:
> >> From: Lang Cheng <chenglang@huawei.com>
> >>
> >> For hns_roce_v2_query_qp and hns_roce_v2_modify_qp,
> >> we can use stack memory to create qp context data.
> >> Make the code simpler.
> >>
> >> Signed-off-by: Lang Cheng <chenglang@huawei.com>
> >> ---
> >>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
> >>  1 file changed, 27 insertions(+), 37 deletions(-)
> >>
> >> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> >> index 1186e34..07ddfae 100644
> >> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> >> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> >> @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
> >>  {
> >>  	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> >>  	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> >> -	struct hns_roce_v2_qp_context *context;
> >> -	struct hns_roce_v2_qp_context *qpc_mask;
> >> +	struct hns_roce_v2_qp_context ctx[2];
> >> +	struct hns_roce_v2_qp_context *context = ctx;
> >> +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
> >>  	struct device *dev = hr_dev->dev;
> >>  	int ret;
> >>
> >> -	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
> >> -	if (!context)
> >> -		return -ENOMEM;
> >> -
> >> -	qpc_mask = context + 1;
> >>  	/*
> >>  	 * In v2 engine, software pass context and context mask to hardware
> >>  	 * when modifying qp. If software need modify some fields in context,
> >>  	 * we should set all bits of the relevant fields in context mask to
> >>  	 * 0 at the same time, else set them to 0x1.
> >>  	 */
> >> +	memset(context, 0, sizeof(*context));
> > "struct hns_roce_v2_qp_context ctx[2] = {};" will do the trick.
> >
> > Thanks
> >
> > .
> In this case, the mask is actually writen twice. if you do this, will it bring extra overhead when modify qp?

I don't understand the first part of your sentence, and "no" to the second part.

Thanks

>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
       [not found]         ` <fab1c105-b367-7ca7-fa2f-b46808ae1b24@huawei.com>
@ 2019-07-31  9:59           ` Leon Romanovsky
  2019-07-31 13:02             ` Jason Gunthorpe
  0 siblings, 1 reply; 26+ messages in thread
From: Leon Romanovsky @ 2019-07-31  9:59 UTC (permalink / raw)
  To: c00284828; +Cc: oulijun, dledford, jgg, linux-rdma, linuxarm

On Wed, Jul 31, 2019 at 05:16:38PM +0800, c00284828 wrote:
>
> On 2019/7/31 15:49, Leon Romanovsky wrote:
> > On Wed, Jul 31, 2019 at 10:43:01AM +0800, oulijun wrote:
> > > On 2019/7/30 21:40, Leon Romanovsky wrote:
> > > > On Tue, Jul 30, 2019 at 04:56:47PM +0800, Lijun Ou wrote:
> > > > > From: Lang Cheng <chenglang@huawei.com>
> > > > >
> > > > > For hns_roce_v2_query_qp and hns_roce_v2_modify_qp,
> > > > > we can use stack memory to create qp context data.
> > > > > Make the code simpler.
> > > > >
> > > > > Signed-off-by: Lang Cheng <chenglang@huawei.com>
> > > > > ---
> > > > >   drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
> > > > >   1 file changed, 27 insertions(+), 37 deletions(-)
> > > > >
> > > > > diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> > > > > index 1186e34..07ddfae 100644
> > > > > --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> > > > > +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> > > > > @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
> > > > >   {
> > > > >   	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> > > > >   	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> > > > > -	struct hns_roce_v2_qp_context *context;
> > > > > -	struct hns_roce_v2_qp_context *qpc_mask;
> > > > > +	struct hns_roce_v2_qp_context ctx[2];
> > > > > +	struct hns_roce_v2_qp_context *context = ctx;
> > > > > +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
> > > > >   	struct device *dev = hr_dev->dev;
> > > > >   	int ret;
> > > > >
> > > > > -	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
> > > > > -	if (!context)
> > > > > -		return -ENOMEM;
> > > > > -
> > > > > -	qpc_mask = context + 1;
> > > > >   	/*
> > > > >   	 * In v2 engine, software pass context and context mask to hardware
> > > > >   	 * when modifying qp. If software need modify some fields in context,
> > > > >   	 * we should set all bits of the relevant fields in context mask to
> > > > >   	 * 0 at the same time, else set them to 0x1.
> > > > >   	 */
> > > > > +	memset(context, 0, sizeof(*context));
> > > > "struct hns_roce_v2_qp_context ctx[2] = {};" will do the trick.
> > > >
> > > > Thanks
> > > >
> > > > .
> > > In this case, the mask is actually writen twice. if you do this, will it bring extra overhead when modify qp?
> > I don't understand first part of your sentence, and "no" to second part.
> >
> > Thanks
>
> +	struct hns_roce_v2_qp_context ctx[2];
> +	struct hns_roce_v2_qp_context *context = ctx;
> +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
> ...
> +	memset(context, 0, sizeof(*context));
>  	memset(qpc_mask, 0xff, sizeof(*qpc_mask));
>
> ctx[2] = {} does look simpler, just like another function in this patch.
> But ctx[1] will be written to 0 before being written to 0xff; is that faster than two memsets?

Are you seriously talking about this micro optimization while you have mailbox
allocation in your modify_qp function?

Thanks

>
> > >
> > >
> > .
> >

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc
  2019-07-31  9:59           ` Leon Romanovsky
@ 2019-07-31 13:02             ` Jason Gunthorpe
  0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2019-07-31 13:02 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: c00284828, oulijun, dledford, linux-rdma, linuxarm

On Wed, Jul 31, 2019 at 12:59:01PM +0300, Leon Romanovsky wrote:
> On Wed, Jul 31, 2019 at 05:16:38PM +0800, c00284828 wrote:
> >
> > On 2019/7/31 15:49, Leon Romanovsky wrote:
> > > On Wed, Jul 31, 2019 at 10:43:01AM +0800, oulijun wrote:
> > > > On 2019/7/30 21:40, Leon Romanovsky wrote:
> > > > > On Tue, Jul 30, 2019 at 04:56:47PM +0800, Lijun Ou wrote:
> > > > > > From: Lang Cheng <chenglang@huawei.com>
> > > > > >
> > > > > > For hns_roce_v2_query_qp and hns_roce_v2_modify_qp,
> > > > > > we can use stack memory to create qp context data.
> > > > > > Make the code simpler.
> > > > > >
> > > > > > Signed-off-by: Lang Cheng <chenglang@huawei.com>
> > > > > >   drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++-----------------
> > > > > >   1 file changed, 27 insertions(+), 37 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> > > > > > index 1186e34..07ddfae 100644
> > > > > > +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
> > > > > > @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
> > > > > >   {
> > > > > >   	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> > > > > >   	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> > > > > > -	struct hns_roce_v2_qp_context *context;
> > > > > > -	struct hns_roce_v2_qp_context *qpc_mask;
> > > > > > +	struct hns_roce_v2_qp_context ctx[2];
> > > > > > +	struct hns_roce_v2_qp_context *context = ctx;
> > > > > > +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
> > > > > >   	struct device *dev = hr_dev->dev;
> > > > > >   	int ret;
> > > > > >
> > > > > > -	context = kcalloc(2, sizeof(*context), GFP_ATOMIC);
> > > > > > -	if (!context)
> > > > > > -		return -ENOMEM;
> > > > > > -
> > > > > > -	qpc_mask = context + 1;
> > > > > >   	/*
> > > > > >   	 * In v2 engine, software pass context and context mask to hardware
> > > > > >   	 * when modifying qp. If software need modify some fields in context,
> > > > > >   	 * we should set all bits of the relevant fields in context mask to
> > > > > >   	 * 0 at the same time, else set them to 0x1.
> > > > > >   	 */
> > > > > > +	memset(context, 0, sizeof(*context));
> > > > > "struct hns_roce_v2_qp_context ctx[2] = {};" will do the trick.
> > > > >
> > > > > Thanks
> > > > >
> > > > > .
> > > > In this case, the mask is actually writen twice. if you do this, will it bring extra overhead when modify qp?
> > > I don't understand first part of your sentence, and "no" to second part.
> > >
> > > Thanks
> >
> > +	struct hns_roce_v2_qp_context ctx[2];
> > +	struct hns_roce_v2_qp_context *context = ctx;
> > +	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
> > ...
> > +	memset(context, 0, sizeof(*context));
> >  	memset(qpc_mask, 0xff, sizeof(*qpc_mask));
> >
> > ctx[2] ={} does look more simple, just like another function in patch.
> > But ctx[1] will be written 0 before being written to 0xff, is it faster than twice memset ?
> 
> Are you seriously talking about this micro optimization while you have mailbox
> allocation in your modify_qp function?

In any event the compiler eliminates duplicate stores in a lot of cases
like this.

Jason
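
A rough illustration of that point, using the same structures as above (what actually happens depends on the compiler and optimization level):

	/* The zero-initialization of ctx[1] below is a dead store: it is
	 * completely overwritten by the memset() before anything reads it,
	 * so an optimizing compiler is normally free to drop it, leaving
	 * code roughly equivalent to two explicit memset() calls.
	 */
	struct hns_roce_v2_qp_context ctx[2] = {};

	memset(&ctx[1], 0xff, sizeof(ctx[1]));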

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2019-07-31 13:02 UTC | newest]

Thread overview: 26+ messages
2019-07-30  8:56 [PATCH for-next 00/13] Updates for 5.3-rc2 Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 01/13] RDMA/hns: Encapsulate some lines for setting sq size in user mode Lijun Ou
2019-07-30 11:16   ` Gal Pressman
2019-07-30 12:17     ` Jason Gunthorpe
2019-07-30 13:37       ` oulijun
2019-07-30  8:56 ` [PATCH for-next 02/13] RDMA/hns: Optimize hns_roce_modify_qp function Lijun Ou
2019-07-30 11:19   ` Gal Pressman
2019-07-30 13:39     ` oulijun
2019-07-30  8:56 ` [PATCH for-next 03/13] RDMA/hns: Update the prompt message for creating and destroy qp Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 04/13] RDMA/hns: Remove unnessary init for cmq reg Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 05/13] RDMA/hns: Clean up unnecessary initial assignment Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 06/13] RDMA/hns: Update some comments style Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 07/13] RDMA/hns: Handling the error return value of hem function Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 08/13] RDMA/hns: Split bool statement and assign statement Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 09/13] RDMA/hns: Refactor irq request code Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 10/13] RDMA/hns: Remove unnecessary kzalloc Lijun Ou
2019-07-30 13:40   ` Leon Romanovsky
2019-07-31  2:43     ` oulijun
2019-07-31  7:49       ` Leon Romanovsky
     [not found]         ` <fab1c105-b367-7ca7-fa2f-b46808ae1b24@huawei.com>
2019-07-31  9:59           ` Leon Romanovsky
2019-07-31 13:02             ` Jason Gunthorpe
2019-07-30  8:56 ` [PATCH for-next 11/13] RDMA/hns: Refactor hns_roce_v2_set_hem for hip08 Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 12/13] RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int() Lijun Ou
2019-07-30  8:56 ` [PATCH for-next 13/13] RDMA/hns: Disable alw_lcl_lpbk of SSU Lijun Ou
2019-07-30 12:33 ` [PATCH for-next 00/13] Updates for 5.3-rc2 Dennis Dalessandro
2019-07-30 13:41   ` oulijun
