* [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver
@ 2016-11-04 16:36 Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1 Salil Mehta
                   ` (10 more replies)
  0 siblings, 11 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

This patchset introduces code improvements and fixes for problems
identified in the HNS RoCE driver.

Lijun Ou (4):
  IB/hns: Add the interface for querying QP1
  IB/hns: add self loopback for CM
  IB/hns: Modify the condition of notifying hardware loopback
  IB/hns: Fix the bug for qp state in hns_roce_v1_m_qp()

Salil Mehta (1):
  IB/hns: Fix for Checkpatch.pl comment style errors

Shaobo Xu (1):
  IB/hns: Implement the add_gid/del_gid and optimize the GIDs
    management

Wei Hu (Xavier) (5):
  IB/hns: Add code for refreshing CQ CI using TPTR
  IB/hns: Optimize the logic of allocating memory using APIs
  IB/hns: Modify the macro for the timeout when cmd process
  IB/hns: Modify query info named port_num when querying RC QP
  IB/hns: Change qpn allocation to round-robin mode.

 drivers/infiniband/hw/hns/hns_roce_alloc.c  |   11 +-
 drivers/infiniband/hw/hns/hns_roce_cmd.c    |    8 +-
 drivers/infiniband/hw/hns/hns_roce_cmd.h    |    7 +-
 drivers/infiniband/hw/hns/hns_roce_common.h |    2 -
 drivers/infiniband/hw/hns/hns_roce_cq.c     |   17 +-
 drivers/infiniband/hw/hns/hns_roce_device.h |   45 ++--
 drivers/infiniband/hw/hns/hns_roce_eq.c     |    6 +-
 drivers/infiniband/hw/hns/hns_roce_hem.c    |    6 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c  |  271 +++++++++++++++++------
 drivers/infiniband/hw/hns/hns_roce_hw_v1.h  |   17 +-
 drivers/infiniband/hw/hns/hns_roce_main.c   |  311 +++++++--------------------
 drivers/infiniband/hw/hns/hns_roce_mr.c     |   21 +-
 drivers/infiniband/hw/hns/hns_roce_pd.c     |    5 +-
 drivers/infiniband/hw/hns/hns_roce_qp.c     |    2 +-
 14 files changed, 367 insertions(+), 362 deletions(-)

-- 
1.7.9.5


* [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-07  5:45   ` Anurup M
  2016-11-04 16:36 ` [PATCH for-next 02/11] IB/hns: Add code for refreshing CQ CI using TPTR Salil Mehta
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: Lijun Ou <oulijun@huawei.com>

The old code only provided an interface for querying ordinary
(non-special) QPs. This patch adds an interface for querying QP1.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |   87 +++++++++++++++++++++++++++-
 drivers/infiniband/hw/hns/hns_roce_hw_v1.h |    6 +-
 2 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 71232e5..ca8b784 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -2630,8 +2630,82 @@ static int hns_roce_v1_query_qpc(struct hns_roce_dev *hr_dev,
 	return ret;
 }
 
-int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
-			 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+static int hns_roce_v1_q_sqp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+			     int qp_attr_mask,
+			     struct ib_qp_init_attr *qp_init_attr)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+	struct hns_roce_sqp_context *context;
+	u32 addr;
+
+	context = kzalloc(sizeof(*context), GFP_KERNEL);
+	if (!context)
+		return -ENOMEM;
+
+	mutex_lock(&hr_qp->mutex);
+
+	if (hr_qp->state == IB_QPS_RESET) {
+		qp_attr->qp_state = IB_QPS_RESET;
+		goto done;
+	}
+
+	addr = ROCEE_QP1C_CFG0_0_REG + hr_qp->port * sizeof(*context);
+	context->qp1c_bytes_4 = roce_read(hr_dev, addr);
+	context->sq_rq_bt_l = roce_read(hr_dev, addr + 1);
+	context->qp1c_bytes_12 = roce_read(hr_dev, addr + 2);
+	context->qp1c_bytes_16 = roce_read(hr_dev, addr + 3);
+	context->qp1c_bytes_20 = roce_read(hr_dev, addr + 4);
+	context->cur_rq_wqe_ba_l = roce_read(hr_dev, addr + 5);
+	context->qp1c_bytes_28 = roce_read(hr_dev, addr + 6);
+	context->qp1c_bytes_32 = roce_read(hr_dev, addr + 7);
+	context->cur_sq_wqe_ba_l = roce_read(hr_dev, addr + 8);
+	context->qp1c_bytes_40 = roce_read(hr_dev, addr + 9);
+
+	hr_qp->state = roce_get_field(context->qp1c_bytes_4,
+				      QP1C_BYTES_4_QP_STATE_M,
+				      QP1C_BYTES_4_QP_STATE_S);
+	qp_attr->qp_state	= hr_qp->state;
+	qp_attr->path_mtu	= IB_MTU_256;
+	qp_attr->path_mig_state	= IB_MIG_ARMED;
+	qp_attr->qkey		= QKEY_VAL;
+	qp_attr->rq_psn		= 0;
+	qp_attr->sq_psn		= 0;
+	qp_attr->dest_qp_num	= 1;
+	qp_attr->qp_access_flags = 6;
+
+	qp_attr->pkey_index = roce_get_field(context->qp1c_bytes_20,
+					     QP1C_BYTES_20_PKEY_IDX_M,
+					     QP1C_BYTES_20_PKEY_IDX_S);
+	qp_attr->port_num = hr_qp->port + 1;
+	qp_attr->sq_draining = 0;
+	qp_attr->max_rd_atomic = 0;
+	qp_attr->max_dest_rd_atomic = 0;
+	qp_attr->min_rnr_timer = 0;
+	qp_attr->timeout = 0;
+	qp_attr->retry_cnt = 0;
+	qp_attr->rnr_retry = 0;
+	qp_attr->alt_timeout = 0;
+
+done:
+	qp_attr->cur_qp_state = qp_attr->qp_state;
+	qp_attr->cap.max_recv_wr = hr_qp->rq.wqe_cnt;
+	qp_attr->cap.max_recv_sge = hr_qp->rq.max_gs;
+	qp_attr->cap.max_send_wr = hr_qp->sq.wqe_cnt;
+	qp_attr->cap.max_send_sge = hr_qp->sq.max_gs;
+	qp_attr->cap.max_inline_data = 0;
+	qp_init_attr->cap = qp_attr->cap;
+	qp_init_attr->create_flags = 0;
+
+	mutex_unlock(&hr_qp->mutex);
+	kfree(context);
+
+	return 0;
+}
+
+static int hns_roce_v1_q_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+			    int qp_attr_mask,
+			    struct ib_qp_init_attr *qp_init_attr)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
@@ -2767,6 +2841,15 @@ int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	return ret;
 }
 
+int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+			 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+{
+	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+
+	return hr_qp->doorbell_qpn <= 1 ?
+		hns_roce_v1_q_sqp(ibqp, qp_attr, qp_attr_mask, qp_init_attr) :
+		hns_roce_v1_q_qp(ibqp, qp_attr, qp_attr_mask, qp_init_attr);
+}
 static void hns_roce_v1_destroy_qp_common(struct hns_roce_dev *hr_dev,
 					  struct hns_roce_qp *hr_qp,
 					  int is_user)
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
index 539b0a3b..2e1878b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
@@ -480,13 +480,17 @@ struct hns_roce_sqp_context {
 	u32 qp1c_bytes_12;
 	u32 qp1c_bytes_16;
 	u32 qp1c_bytes_20;
-	u32 qp1c_bytes_28;
 	u32 cur_rq_wqe_ba_l;
+	u32 qp1c_bytes_28;
 	u32 qp1c_bytes_32;
 	u32 cur_sq_wqe_ba_l;
 	u32 qp1c_bytes_40;
 };
 
+#define QP1C_BYTES_4_QP_STATE_S 0
+#define QP1C_BYTES_4_QP_STATE_M   \
+	(((1UL << 3) - 1) << QP1C_BYTES_4_QP_STATE_S)
+
 #define QP1C_BYTES_4_SQ_WQE_SHIFT_S 8
 #define QP1C_BYTES_4_SQ_WQE_SHIFT_M   \
 	(((1UL << 4) - 1) << QP1C_BYTES_4_SQ_WQE_SHIFT_S)
-- 
1.7.9.5


* [PATCH for-next 02/11] IB/hns: Add code for refreshing CQ CI using TPTR
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1 Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs Salil Mehta
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm, Dongdong Huang

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch adds the code for refreshing the CQ CI using TPTR on the
hip06 SoC.

A doorbell is sent to hardware to refresh the CQ CI when the user
successfully polls a CQE, but this fails if the doorbell has been
blocked. So hardware reads a special buffer called TPTR to get the
latest CI value when the CQ is almost full.

This patch supports the special CI buffer as follows:
a) Allocate the memory for TPTR in hns_roce_tptr_init and free it in
   hns_roce_tptr_free; these two functions are called from the probe
   and remove functions respectively.
b) Compute the offset (every CQ needs 2 bytes) and write the DMA
   address into every CQ context to notify hardware, in
   hns_roce_v1_write_cqc.
c) Map the TPTR buffer to user space in hns_roce_mmap. The mapping
   distinguishes TPTR from the UAR in user mode by vm_pgoff
   (0: UAR, 1: TPTR, others: invalid) on hip06; a user-space mmap
   sketch follows this list.
d) Refresh the CQ CI using TPTR in hns_roce_v1_poll_cq.
e) Add some variable definitions to the related structures.
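
For illustration only (not part of this patch), a user-space provider
could map the two regions as sketched below; cmd_fd and page_size are
hypothetical placeholders for the uverbs device fd and the system page
size:

 #include <sys/mman.h>

 /* Hypothetical user-space sketch: the mmap offset selects the region. */
 void *uar  = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   cmd_fd, 0);              /* vm_pgoff 0 -> UAR  */
 void *tptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   cmd_fd, 1 * page_size);  /* vm_pgoff 1 -> TPTR */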

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Dongdong Huang(Donald) <hdd.huang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_common.h |    2 -
 drivers/infiniband/hw/hns/hns_roce_cq.c     |    9 +++
 drivers/infiniband/hw/hns/hns_roce_device.h |    6 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c  |   79 ++++++++++++++++++++++++---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.h  |    9 +++
 drivers/infiniband/hw/hns/hns_roce_main.c   |   13 ++++-
 6 files changed, 103 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h
index 2970161..0dcb620 100644
--- a/drivers/infiniband/hw/hns/hns_roce_common.h
+++ b/drivers/infiniband/hw/hns/hns_roce_common.h
@@ -253,8 +253,6 @@
 #define ROCEE_VENDOR_ID_REG			0x0
 #define ROCEE_VENDOR_PART_ID_REG		0x4
 
-#define ROCEE_HW_VERSION_REG			0x8
-
 #define ROCEE_SYS_IMAGE_GUID_L_REG		0xC
 #define ROCEE_SYS_IMAGE_GUID_H_REG		0x10
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 0973659..5dc8d92 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -349,6 +349,15 @@ struct ib_cq *hns_roce_ib_create_cq(struct ib_device *ib_dev,
 		goto err_mtt;
 	}
 
+	/*
+	 * For a CQ created by kernel space, the tptr value should be
+	 * initialized to zero here. For a CQ created by user space,
+	 * setting tptr to zero here would cause synchronization problems,
+	 * so it is initialized in user space instead.
+	 */
+	if (!context)
+		*hr_cq->tptr_addr = 0;
+
 	/* Get created cq handler and carry out event */
 	hr_cq->comp = hns_roce_ib_cq_comp;
 	hr_cq->event = hns_roce_ib_cq_event;
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 3417315..7242b14 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -37,6 +37,8 @@
 
 #define DRV_NAME "hns_roce"
 
+#define HNS_ROCE_HW_VER1	('h' << 24 | 'i' << 16 | '0' << 8 | '6')
+
 #define MAC_ADDR_OCTET_NUM			6
 #define HNS_ROCE_MAX_MSG_LEN			0x80000000
 
@@ -296,7 +298,7 @@ struct hns_roce_cq {
 	u32				cq_depth;
 	u32				cons_index;
 	void __iomem			*cq_db_l;
-	void __iomem			*tptr_addr;
+	u16				*tptr_addr;
 	unsigned long			cqn;
 	u32				vector;
 	atomic_t			refcount;
@@ -553,6 +555,8 @@ struct hns_roce_dev {
 
 	int			cmd_mod;
 	int			loop_idc;
+	dma_addr_t		tptr_dma_addr; /*only for hw v1*/
+	u32			tptr_size; /*only for hw v1*/
 	struct hns_roce_hw	*hw;
 };
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index ca8b784..7750d0d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -849,6 +849,45 @@ static void hns_roce_bt_free(struct hns_roce_dev *hr_dev)
 		priv->bt_table.qpc_buf.buf, priv->bt_table.qpc_buf.map);
 }
 
+static int hns_roce_tptr_init(struct hns_roce_dev *hr_dev)
+{
+	struct device *dev = &hr_dev->pdev->dev;
+	struct hns_roce_buf_list *tptr_buf;
+	struct hns_roce_v1_priv *priv;
+
+	priv = (struct hns_roce_v1_priv *)hr_dev->hw->priv;
+	tptr_buf = &priv->tptr_table.tptr_buf;
+
+	/*
+	 * This buffer will be used for the CQ's tptr (tail pointer), also
+	 * known as the CI (consumer index). Every CQ uses 2 bytes to save
+	 * its CQE CI in hip06. Hardware reads this area to get the new CI
+	 * when the queue is almost full.
+	 */
+	tptr_buf->buf = dma_alloc_coherent(dev, HNS_ROCE_V1_TPTR_BUF_SIZE,
+					   &tptr_buf->map, GFP_KERNEL);
+	if (!tptr_buf->buf)
+		return -ENOMEM;
+
+	hr_dev->tptr_dma_addr = tptr_buf->map;
+	hr_dev->tptr_size = HNS_ROCE_V1_TPTR_BUF_SIZE;
+
+	return 0;
+}
+
+static void hns_roce_tptr_free(struct hns_roce_dev *hr_dev)
+{
+	struct device *dev = &hr_dev->pdev->dev;
+	struct hns_roce_buf_list *tptr_buf;
+	struct hns_roce_v1_priv *priv;
+
+	priv = (struct hns_roce_v1_priv *)hr_dev->hw->priv;
+	tptr_buf = &priv->tptr_table.tptr_buf;
+
+	dma_free_coherent(dev, HNS_ROCE_V1_TPTR_BUF_SIZE,
+			  tptr_buf->buf, tptr_buf->map);
+}
+
 /**
  * hns_roce_v1_reset - reset RoCE
  * @hr_dev: RoCE device struct pointer
@@ -906,12 +945,11 @@ void hns_roce_v1_profile(struct hns_roce_dev *hr_dev)
 	hr_dev->vendor_id = le32_to_cpu(roce_read(hr_dev, ROCEE_VENDOR_ID_REG));
 	hr_dev->vendor_part_id = le32_to_cpu(roce_read(hr_dev,
 					     ROCEE_VENDOR_PART_ID_REG));
-	hr_dev->hw_rev = le32_to_cpu(roce_read(hr_dev, ROCEE_HW_VERSION_REG));
-
 	hr_dev->sys_image_guid = le32_to_cpu(roce_read(hr_dev,
 					     ROCEE_SYS_IMAGE_GUID_L_REG)) |
 				((u64)le32_to_cpu(roce_read(hr_dev,
 					    ROCEE_SYS_IMAGE_GUID_H_REG)) << 32);
+	hr_dev->hw_rev		= HNS_ROCE_HW_VER1;
 
 	caps->num_qps		= HNS_ROCE_V1_MAX_QP_NUM;
 	caps->max_wqes		= HNS_ROCE_V1_MAX_WQE_NUM;
@@ -1009,8 +1047,17 @@ int hns_roce_v1_init(struct hns_roce_dev *hr_dev)
 		goto error_failed_bt_init;
 	}
 
+	ret = hns_roce_tptr_init(hr_dev);
+	if (ret) {
+		dev_err(dev, "tptr init failed!\n");
+		goto error_failed_tptr_init;
+	}
+
 	return 0;
 
+error_failed_tptr_init:
+	hns_roce_bt_free(hr_dev);
+
 error_failed_bt_init:
 	hns_roce_port_enable(hr_dev, HNS_ROCE_PORT_DOWN);
 	hns_roce_raq_free(hr_dev);
@@ -1022,6 +1069,7 @@ int hns_roce_v1_init(struct hns_roce_dev *hr_dev)
 
 void hns_roce_v1_exit(struct hns_roce_dev *hr_dev)
 {
+	hns_roce_tptr_free(hr_dev);
 	hns_roce_bt_free(hr_dev);
 	hns_roce_port_enable(hr_dev, HNS_ROCE_PORT_DOWN);
 	hns_roce_raq_free(hr_dev);
@@ -1339,14 +1387,21 @@ void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
 			   dma_addr_t dma_handle, int nent, u32 vector)
 {
 	struct hns_roce_cq_context *cq_context = NULL;
-	void __iomem *tptr_addr;
+	struct hns_roce_buf_list *tptr_buf;
+	struct hns_roce_v1_priv *priv;
+	dma_addr_t tptr_dma_addr;
+	int offset;
+
+	priv = (struct hns_roce_v1_priv *)hr_dev->hw->priv;
+	tptr_buf = &priv->tptr_table.tptr_buf;
 
 	cq_context = mb_buf;
 	memset(cq_context, 0, sizeof(*cq_context));
 
-	tptr_addr = 0;
-	hr_dev->priv_addr = tptr_addr;
-	hr_cq->tptr_addr = tptr_addr;
+	/* Get the tptr for this CQ. */
+	offset = hr_cq->cqn * HNS_ROCE_V1_TPTR_ENTRY_SIZE;
+	tptr_dma_addr = tptr_buf->map + offset;
+	hr_cq->tptr_addr = (u16 *)(tptr_buf->buf + offset);
 
 	/* Register cq_context members */
 	roce_set_field(cq_context->cqc_byte_4,
@@ -1390,10 +1445,10 @@ void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
 	roce_set_field(cq_context->cqc_byte_20,
 		       CQ_CONTEXT_CQC_BYTE_20_CQE_TPTR_ADDR_H_M,
 		       CQ_CONTEXT_CQC_BYTE_20_CQE_TPTR_ADDR_H_S,
-		       (u64)tptr_addr >> 44);
+		       tptr_dma_addr >> 44);
 	cq_context->cqc_byte_20 = cpu_to_le32(cq_context->cqc_byte_20);
 
-	cq_context->cqe_tptr_addr_l = (u32)((u64)tptr_addr >> 12);
+	cq_context->cqe_tptr_addr_l = (u32)(tptr_dma_addr >> 12);
 
 	roce_set_field(cq_context->cqc_byte_32,
 		       CQ_CONTEXT_CQC_BYTE_32_CUR_CQE_BA1_H_M,
@@ -1659,8 +1714,14 @@ int hns_roce_v1_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 			break;
 	}
 
-	if (npolled)
+	if (npolled) {
+		*hr_cq->tptr_addr = hr_cq->cons_index &
+			((hr_cq->cq_depth << 1) - 1);
+
+		/* Memory barrier */
+		wmb();
 		hns_roce_v1_cq_set_ci(hr_cq, hr_cq->cons_index);
+	}
 
 	spin_unlock_irqrestore(&hr_cq->lock, flags);
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
index 2e1878b..6004c7f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
@@ -104,6 +104,10 @@
 
 #define HNS_ROCE_BT_RSV_BUF_SIZE			(1 << 17)
 
+#define HNS_ROCE_V1_TPTR_ENTRY_SIZE			2
+#define HNS_ROCE_V1_TPTR_BUF_SIZE	\
+	(HNS_ROCE_V1_TPTR_ENTRY_SIZE * HNS_ROCE_V1_MAX_CQ_NUM)
+
 #define HNS_ROCE_ODB_POLL_MODE				0
 
 #define HNS_ROCE_SDB_NORMAL_MODE			0
@@ -983,10 +987,15 @@ struct hns_roce_bt_table {
 	struct hns_roce_buf_list cqc_buf;
 };
 
+struct hns_roce_tptr_table {
+	struct hns_roce_buf_list tptr_buf;
+};
+
 struct hns_roce_v1_priv {
 	struct hns_roce_db_table  db_table;
 	struct hns_roce_raq_table raq_table;
 	struct hns_roce_bt_table  bt_table;
+	struct hns_roce_tptr_table tptr_table;
 };
 
 int hns_dsaf_roce_reset(struct fwnode_handle *dsaf_fwnode, bool dereset);
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 764e35a..6770171 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -549,6 +549,8 @@ static int hns_roce_dealloc_ucontext(struct ib_ucontext *ibcontext)
 static int hns_roce_mmap(struct ib_ucontext *context,
 			 struct vm_area_struct *vma)
 {
+	struct hns_roce_dev *hr_dev = to_hr_dev(context->device);
+
 	if (((vma->vm_end - vma->vm_start) % PAGE_SIZE) != 0)
 		return -EINVAL;
 
@@ -558,10 +560,15 @@ static int hns_roce_mmap(struct ib_ucontext *context,
 				       to_hr_ucontext(context)->uar.pfn,
 				       PAGE_SIZE, vma->vm_page_prot))
 			return -EAGAIN;
-
-	} else {
+	} else if (vma->vm_pgoff == 1 && hr_dev->hw_rev == HNS_ROCE_HW_VER1) {
+		/* vm_pgoff: 1 -- TPTR */
+		if (io_remap_pfn_range(vma, vma->vm_start,
+				       hr_dev->tptr_dma_addr >> PAGE_SHIFT,
+				       hr_dev->tptr_size,
+				       vma->vm_page_prot))
+			return -EAGAIN;
+	} else
 		return -EINVAL;
-	}
 
 	return 0;
 }
-- 
1.7.9.5


* [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1 Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 02/11] IB/hns: Add code for refreshing CQ CI using TPTR Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-09  7:21   ` Leon Romanovsky
  2016-11-04 16:36 ` [PATCH for-next 04/11] IB/hns: add self loopback for CM Salil Mehta
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm, Ping Zhang

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch modifies the memory allocation logic in the hns RoCE
driver: kcalloc is used instead of kmalloc_array plus bitmap_zero,
and when kcalloc fails, vzalloc is used as a fallback.
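
Shown in isolation (identifier names are illustrative, not taken from
the driver), the resulting allocate-with-fallback pattern pairs with
kvfree():

 	unsigned long *bits;

 	bits = kcalloc(n, sizeof(long), GFP_KERNEL);
 	if (!bits) {
 		/* fall back to vmalloc'ed, zeroed memory */
 		bits = vzalloc(n * sizeof(long));
 		if (!bits)
 			return -ENOMEM;
 	}
 	/* ... use bits ... */
 	kvfree(bits);	/* frees kmalloc'ed and vmalloc'ed memory alike */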

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Ping Zhang <zhangping5@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index fb87883..d3dfb5f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct hns_roce_buddy *buddy, int max_order)
 
 	for (i = 0; i <= buddy->max_order; ++i) {
 		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
-		buddy->bits[i] = kmalloc_array(s, sizeof(long), GFP_KERNEL);
-		if (!buddy->bits[i])
-			goto err_out_free;
-
-		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order - i));
+		buddy->bits[i] = kcalloc(s, sizeof(long), GFP_KERNEL);
+		if (!buddy->bits[i]) {
+			buddy->bits[i] = vzalloc(s * sizeof(long));
+			if (!buddy->bits[i])
+				goto err_out_free;
+		}
 	}
 
 	set_bit(0, buddy->bits[buddy->max_order]);
@@ -151,7 +152,7 @@ static int hns_roce_buddy_init(struct hns_roce_buddy *buddy, int max_order)
 
 err_out_free:
 	for (i = 0; i <= buddy->max_order; ++i)
-		kfree(buddy->bits[i]);
+		kvfree(buddy->bits[i]);
 
 err_out:
 	kfree(buddy->bits);
@@ -164,7 +165,7 @@ static void hns_roce_buddy_cleanup(struct hns_roce_buddy *buddy)
 	int i;
 
 	for (i = 0; i <= buddy->max_order; ++i)
-		kfree(buddy->bits[i]);
+		kvfree(buddy->bits[i]);
 
 	kfree(buddy->bits);
 	kfree(buddy->num_free);
-- 
1.7.9.5


* [PATCH for-next 04/11] IB/hns: add self loopback for CM
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (2 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 05/11] IB/hns: Modify the condition of notifying hardware loopback Salil Mehta
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm, Peter Chen

From: Lijun Ou <oulijun@huawei.com>

This patch mainly adds self loopback support for CM.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Peter Chen <luck.chen@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |   11 +++++++++++
 drivers/infiniband/hw/hns/hns_roce_hw_v1.h |    2 ++
 2 files changed, 13 insertions(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 7750d0d..d6df6dd 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -32,6 +32,7 @@
 
 #include <linux/platform_device.h>
 #include <linux/acpi.h>
+#include <linux/etherdevice.h>
 #include <rdma/ib_umem.h>
 #include "hns_roce_common.h"
 #include "hns_roce_device.h"
@@ -72,6 +73,8 @@ int hns_roce_v1_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 	int nreq = 0;
 	u32 ind = 0;
 	int ret = 0;
+	u8 *smac;
+	int loopback;
 
 	if (unlikely(ibqp->qp_type != IB_QPT_GSI &&
 		ibqp->qp_type != IB_QPT_RC)) {
@@ -129,6 +132,14 @@ int hns_roce_v1_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 				       UD_SEND_WQE_U32_8_DMAC_5_M,
 				       UD_SEND_WQE_U32_8_DMAC_5_S,
 				       ah->av.mac[5]);
+
+			smac = (u8 *)hr_dev->dev_addr[qp->port];
+			loopback = ether_addr_equal_unaligned(ah->av.mac,
+							      smac) ? 1 : 0;
+			roce_set_bit(ud_sq_wqe->u32_8,
+				     UD_SEND_WQE_U32_8_LOOPBACK_INDICATOR_S,
+				     loopback);
+
 			roce_set_field(ud_sq_wqe->u32_8,
 				       UD_SEND_WQE_U32_8_OPERATION_TYPE_M,
 				       UD_SEND_WQE_U32_8_OPERATION_TYPE_S,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
index 6004c7f..cf28f1b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
@@ -440,6 +440,8 @@ struct hns_roce_ud_send_wqe {
 #define UD_SEND_WQE_U32_8_DMAC_5_M   \
 	(((1UL << 8) - 1) << UD_SEND_WQE_U32_8_DMAC_5_S)
 
+#define UD_SEND_WQE_U32_8_LOOPBACK_INDICATOR_S 22
+
 #define UD_SEND_WQE_U32_8_OPERATION_TYPE_S 16
 #define UD_SEND_WQE_U32_8_OPERATION_TYPE_M   \
 	(((1UL << 4) - 1) << UD_SEND_WQE_U32_8_OPERATION_TYPE_S)
-- 
1.7.9.5


* [PATCH for-next 05/11] IB/hns: Modify the condition of notifying hardware loopback
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (3 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 04/11] IB/hns: add self loopback for CM Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 06/11] IB/hns: Fix the bug for qp state in hns_roce_v1_m_qp() Salil Mehta
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: Lijun Ou <oulijun@huawei.com>

This patch modifies the condition for notifying hardware of loopback.

In hip06, the RoCE engine has several ports and each QP is associated
with one port. Hardware only supports loopback within the same port,
not between different ports.

So, if a QP is associated with port N and the dmac in the QP context
equals the smac of local port N, or loop_idc is 1, the loopback bit
should be set in the QP context to notify hardware.
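
Expressed in isolation (a sketch mirroring the change in the diff
below):

 	port = (attr_mask & IB_QP_PORT) ? (attr->port_num - 1) : hr_qp->port;
 	smac = (u8 *)hr_dev->dev_addr[port];
 	if (ether_addr_equal_unaligned(dmac, smac) || hr_dev->loop_idc == 0x1)
 		roce_set_bit(context->qpc_bytes_32,
 			     QP_CONTEXT_QPC_BYTE_32_LOOPBACK_INDICATOR_S, 1);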

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |   24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index d6df6dd..8ca36a7 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -2244,24 +2244,14 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 			     QP_CONTEXT_QPC_BYTE_32_SIGNALING_TYPE_S,
 			     hr_qp->sq_signal_bits);
 
-		for (port = 0; port < hr_dev->caps.num_ports; port++) {
-			smac = (u8 *)hr_dev->dev_addr[port];
-			dev_dbg(dev, "smac: %2x: %2x: %2x: %2x: %2x: %2x\n",
-				smac[0], smac[1], smac[2], smac[3], smac[4],
-				smac[5]);
-			if ((dmac[0] == smac[0]) && (dmac[1] == smac[1]) &&
-			    (dmac[2] == smac[2]) && (dmac[3] == smac[3]) &&
-			    (dmac[4] == smac[4]) && (dmac[5] == smac[5])) {
-				roce_set_bit(context->qpc_bytes_32,
-				    QP_CONTEXT_QPC_BYTE_32_LOOPBACK_INDICATOR_S,
-				    1);
-				break;
-			}
-		}
-
-		if (hr_dev->loop_idc == 0x1)
+		port = (attr_mask & IB_QP_PORT) ? (attr->port_num - 1) :
+			hr_qp->port;
+		smac = (u8 *)hr_dev->dev_addr[port];
+		/* when dmac equals smac or loop_idc is 1, it should loopback */
+		if (ether_addr_equal_unaligned(dmac, smac) ||
+		    hr_dev->loop_idc == 0x1)
 			roce_set_bit(context->qpc_bytes_32,
-				QP_CONTEXT_QPC_BYTE_32_LOOPBACK_INDICATOR_S, 1);
+			      QP_CONTEXT_QPC_BYTE_32_LOOPBACK_INDICATOR_S, 1);
 
 		roce_set_bit(context->qpc_bytes_32,
 			     QP_CONTEXT_QPC_BYTE_32_GLOBAL_HEADER_S,
-- 
1.7.9.5


* [PATCH for-next 06/11] IB/hns: Fix the bug for qp state in hns_roce_v1_m_qp()
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (4 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 05/11] IB/hns: Modify the condition of notifying hardware loopback Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 07/11] IB/hns: Modify the macro for the timeout when cmd process Salil Mehta
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: Lijun Ou <oulijun@huawei.com>

In the old code, attr->qp_state was written into the QP state field
of the QPC. That value may be invalid when attr_mask & IB_QP_STATE
is zero, so the already-computed new_state is used instead.
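
For context, the state programmed into the QPC is derived from the
attribute mask rather than taken from attr->qp_state unconditionally;
roughly (a sketch, not the exact driver code):

 	new_state = (attr_mask & IB_QP_STATE) ? attr->qp_state : cur_state;
 	/* ... validate the cur_state -> new_state transition ... */
 	roce_set_field(context->qpc_bytes_144,
 		       QP_CONTEXT_QPC_BYTES_144_QP_STATE_M,
 		       QP_CONTEXT_QPC_BYTES_144_QP_STATE_S, new_state);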

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 8ca36a7..2d48406 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -2571,7 +2571,7 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 	/* Every status migrate must change state */
 	roce_set_field(context->qpc_bytes_144,
 		       QP_CONTEXT_QPC_BYTES_144_QP_STATE_M,
-		       QP_CONTEXT_QPC_BYTES_144_QP_STATE_S, attr->qp_state);
+		       QP_CONTEXT_QPC_BYTES_144_QP_STATE_S, new_state);
 
 	/* SW pass context to HW */
 	ret = hns_roce_v1_qp_modify(hr_dev, &hr_qp->mtt,
-- 
1.7.9.5


* [PATCH for-next 07/11] IB/hns: Modify the macro for the timeout when cmd process
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (5 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 06/11] IB/hns: Fix the bug for qp state in hns_roce_v1_m_qp() Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 08/11] IB/hns: Modify query info named port_num when querying RC QP Salil Mehta
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch modifies the macros used for the timeout while a command
is being processed, as follows:
Before modification:
 enum {
	HNS_ROCE_CMD_TIME_CLASS_A       = 10000,
	HNS_ROCE_CMD_TIME_CLASS_B       = 10000,
	HNS_ROCE_CMD_TIME_CLASS_C       = 10000,
 };
After modification:
 #define HNS_ROCE_CMD_TIMEOUT_MSECS	10000

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_cmd.h   |    7 +------
 drivers/infiniband/hw/hns/hns_roce_cq.c    |    4 ++--
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |    8 ++++----
 drivers/infiniband/hw/hns/hns_roce_mr.c    |    4 ++--
 4 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.h b/drivers/infiniband/hw/hns/hns_roce_cmd.h
index e3997d3..ed14ad3 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.h
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.h
@@ -34,6 +34,7 @@
 #define _HNS_ROCE_CMD_H
 
 #define HNS_ROCE_MAILBOX_SIZE		4096
+#define HNS_ROCE_CMD_TIMEOUT_MSECS	10000
 
 enum {
 	/* TPT commands */
@@ -57,12 +58,6 @@ enum {
 	HNS_ROCE_CMD_QUERY_QP		= 0x22,
 };
 
-enum {
-	HNS_ROCE_CMD_TIME_CLASS_A	= 10000,
-	HNS_ROCE_CMD_TIME_CLASS_B	= 10000,
-	HNS_ROCE_CMD_TIME_CLASS_C	= 10000,
-};
-
 struct hns_roce_cmd_mailbox {
 	void		       *buf;
 	dma_addr_t		dma;
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 5dc8d92..461a273 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -77,7 +77,7 @@ static int hns_roce_sw2hw_cq(struct hns_roce_dev *dev,
 			     unsigned long cq_num)
 {
 	return hns_roce_cmd_mbox(dev, mailbox->dma, 0, cq_num, 0,
-			    HNS_ROCE_CMD_SW2HW_CQ, HNS_ROCE_CMD_TIME_CLASS_A);
+			    HNS_ROCE_CMD_SW2HW_CQ, HNS_ROCE_CMD_TIMEOUT_MSECS);
 }
 
 static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent,
@@ -176,7 +176,7 @@ static int hns_roce_hw2sw_cq(struct hns_roce_dev *dev,
 {
 	return hns_roce_cmd_mbox(dev, 0, mailbox ? mailbox->dma : 0, cq_num,
 				 mailbox ? 0 : 1, HNS_ROCE_CMD_HW2SW_CQ,
-				 HNS_ROCE_CMD_TIME_CLASS_A);
+				 HNS_ROCE_CMD_TIMEOUT_MSECS);
 }
 
 static void hns_roce_free_cq(struct hns_roce_dev *hr_dev,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 2d48406..c39a9b2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -1871,12 +1871,12 @@ static int hns_roce_v1_qp_modify(struct hns_roce_dev *hr_dev,
 	if (op[cur_state][new_state] == HNS_ROCE_CMD_2RST_QP)
 		return hns_roce_cmd_mbox(hr_dev, 0, 0, hr_qp->qpn, 2,
 					 HNS_ROCE_CMD_2RST_QP,
-					 HNS_ROCE_CMD_TIME_CLASS_A);
+					 HNS_ROCE_CMD_TIMEOUT_MSECS);
 
 	if (op[cur_state][new_state] == HNS_ROCE_CMD_2ERR_QP)
 		return hns_roce_cmd_mbox(hr_dev, 0, 0, hr_qp->qpn, 2,
 					 HNS_ROCE_CMD_2ERR_QP,
-					 HNS_ROCE_CMD_TIME_CLASS_A);
+					 HNS_ROCE_CMD_TIMEOUT_MSECS);
 
 	mailbox = hns_roce_alloc_cmd_mailbox(hr_dev);
 	if (IS_ERR(mailbox))
@@ -1886,7 +1886,7 @@ static int hns_roce_v1_qp_modify(struct hns_roce_dev *hr_dev,
 
 	ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, hr_qp->qpn, 0,
 				op[cur_state][new_state],
-				HNS_ROCE_CMD_TIME_CLASS_C);
+				HNS_ROCE_CMD_TIMEOUT_MSECS);
 
 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
 	return ret;
@@ -2681,7 +2681,7 @@ static int hns_roce_v1_query_qpc(struct hns_roce_dev *hr_dev,
 
 	ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, hr_qp->qpn, 0,
 				HNS_ROCE_CMD_QUERY_QP,
-				HNS_ROCE_CMD_TIME_CLASS_A);
+				HNS_ROCE_CMD_TIMEOUT_MSECS);
 	if (!ret)
 		memcpy(hr_context, mailbox->buf, sizeof(*hr_context));
 	else
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index d3dfb5f..2227962 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -53,7 +53,7 @@ static int hns_roce_sw2hw_mpt(struct hns_roce_dev *hr_dev,
 {
 	return hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, mpt_index, 0,
 				 HNS_ROCE_CMD_SW2HW_MPT,
-				 HNS_ROCE_CMD_TIME_CLASS_B);
+				 HNS_ROCE_CMD_TIMEOUT_MSECS);
 }
 
 static int hns_roce_hw2sw_mpt(struct hns_roce_dev *hr_dev,
@@ -62,7 +62,7 @@ static int hns_roce_hw2sw_mpt(struct hns_roce_dev *hr_dev,
 {
 	return hns_roce_cmd_mbox(hr_dev, 0, mailbox ? mailbox->dma : 0,
 				 mpt_index, !mailbox, HNS_ROCE_CMD_HW2SW_MPT,
-				 HNS_ROCE_CMD_TIME_CLASS_B);
+				 HNS_ROCE_CMD_TIMEOUT_MSECS);
 }
 
 static int hns_roce_buddy_alloc(struct hns_roce_buddy *buddy, int order,
-- 
1.7.9.5


* [PATCH for-next 08/11] IB/hns: Modify query info named port_num when querying RC QP
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (6 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 07/11] IB/hns: Modify the macro for the timeout when cmd process Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode Salil Mehta
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

This patch modifies the queried qp_attr->port_num info to fix a bug
on hip06: the port number is now taken from hr_qp->port instead of
being read back from the QPC.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index c39a9b2..76edebe 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -2861,9 +2861,7 @@ static int hns_roce_v1_q_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 	qp_attr->pkey_index = roce_get_field(context->qpc_bytes_12,
 			      QP_CONTEXT_QPC_BYTES_12_P_KEY_INDEX_M,
 			      QP_CONTEXT_QPC_BYTES_12_P_KEY_INDEX_S);
-	qp_attr->port_num = (u8)roce_get_field(context->qpc_bytes_156,
-			     QP_CONTEXT_QPC_BYTES_156_PORT_NUM_M,
-			     QP_CONTEXT_QPC_BYTES_156_PORT_NUM_S) + 1;
+	qp_attr->port_num = hr_qp->port + 1;
 	qp_attr->sq_draining = 0;
 	qp_attr->max_rd_atomic = roce_get_field(context->qpc_bytes_156,
 				 QP_CONTEXT_QPC_BYTES_156_INITIATOR_DEPTH_M,
-- 
1.7.9.5


* [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode.
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (7 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 08/11] IB/hns: Modify query info named port_num when querying RC QP Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-09  7:24   ` Leon Romanovsky
  2016-11-04 16:36 ` [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management Salil Mehta
  2016-11-04 16:36 ` [PATCH for-next 11/11] IB/hns: Fix for Checkpatch.pl comment style errors Salil Mehta
  10 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

When CM is used to establish connections, a QP number that has just
been freed will be rejected by the IB core. To fix this problem, we
change QPN allocation to round-robin mode: a round-robin option is
added to the bitmap-based resource allocator, round-robin mode is
used for QP numbers, and non-round-robin mode is kept for other
resources such as CQ numbers, PD numbers, etc.
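
A simplified sketch of the idea (hypothetical types and names, not the
exact driver code): in round-robin mode the free path no longer rewinds
the allocator's 'last' cursor, so the next allocation keeps scanning
forward instead of immediately reusing the just-freed number.

 struct example_bitmap {		/* hypothetical, mirrors hns_roce_bitmap */
 	spinlock_t	lock;
 	unsigned long	last;		/* scan cursor for the next allocation */
 	unsigned long	*table;
 };

 static void example_bitmap_free(struct example_bitmap *bm,
 				unsigned long obj, int rr)
 {
 	spin_lock(&bm->lock);
 	clear_bit(obj, bm->table);
 	if (!rr)			/* BITMAP_NO_RR: allow immediate reuse */
 		bm->last = min(bm->last, obj);
 	/* BITMAP_RR: leave 'last' alone, delaying reuse of 'obj' */
 	spin_unlock(&bm->lock);
 }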

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_alloc.c  |   11 +++++++----
 drivers/infiniband/hw/hns/hns_roce_cq.c     |    4 ++--
 drivers/infiniband/hw/hns/hns_roce_device.h |    9 +++++++--
 drivers/infiniband/hw/hns/hns_roce_mr.c     |    2 +-
 drivers/infiniband/hw/hns/hns_roce_pd.c     |    5 +++--
 drivers/infiniband/hw/hns/hns_roce_qp.c     |    2 +-
 6 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
index 863a17a..605962f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
+++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
@@ -61,9 +61,10 @@ int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj)
 	return ret;
 }
 
-void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj)
+void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj,
+			  int rr)
 {
-	hns_roce_bitmap_free_range(bitmap, obj, 1);
+	hns_roce_bitmap_free_range(bitmap, obj, 1, rr);
 }
 
 int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
@@ -106,7 +107,8 @@ int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
 }
 
 void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
-				unsigned long obj, int cnt)
+				unsigned long obj, int cnt,
+				int rr)
 {
 	int i;
 
@@ -116,7 +118,8 @@ void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
 	for (i = 0; i < cnt; i++)
 		clear_bit(obj + i, bitmap->table);
 
-	bitmap->last = min(bitmap->last, obj);
+	if (!rr)
+		bitmap->last = min(bitmap->last, obj);
 	bitmap->top = (bitmap->top + bitmap->max + bitmap->reserved_top)
 		       & bitmap->mask;
 	spin_unlock(&bitmap->lock);
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 461a273..c9f6c3d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -166,7 +166,7 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent,
 	hns_roce_table_put(hr_dev, &cq_table->table, hr_cq->cqn);
 
 err_out:
-	hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn);
+	hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn, BITMAP_NO_RR);
 	return ret;
 }
 
@@ -204,7 +204,7 @@ static void hns_roce_free_cq(struct hns_roce_dev *hr_dev,
 	spin_unlock_irq(&cq_table->lock);
 
 	hns_roce_table_put(hr_dev, &cq_table->table, hr_cq->cqn);
-	hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn);
+	hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn, BITMAP_NO_RR);
 }
 
 static int hns_roce_ib_get_cq_umem(struct hns_roce_dev *hr_dev,
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 7242b14..593a42a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -72,6 +72,9 @@
 #define HNS_ROCE_MAX_GID_NUM			16
 #define HNS_ROCE_GID_SIZE			16
 
+#define BITMAP_NO_RR				0
+#define BITMAP_RR				1
+
 #define MR_TYPE_MR				0x00
 #define MR_TYPE_DMA				0x03
 
@@ -661,7 +664,8 @@ int hns_roce_buf_write_mtt(struct hns_roce_dev *hr_dev,
 void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev);
 
 int hns_roce_bitmap_alloc(struct hns_roce_bitmap *bitmap, unsigned long *obj);
-void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj);
+void hns_roce_bitmap_free(struct hns_roce_bitmap *bitmap, unsigned long obj,
+			 int rr);
 int hns_roce_bitmap_init(struct hns_roce_bitmap *bitmap, u32 num, u32 mask,
 			 u32 reserved_bot, u32 resetrved_top);
 void hns_roce_bitmap_cleanup(struct hns_roce_bitmap *bitmap);
@@ -669,7 +673,8 @@ int hns_roce_bitmap_init(struct hns_roce_bitmap *bitmap, u32 num, u32 mask,
 int hns_roce_bitmap_alloc_range(struct hns_roce_bitmap *bitmap, int cnt,
 				int align, unsigned long *obj);
 void hns_roce_bitmap_free_range(struct hns_roce_bitmap *bitmap,
-				unsigned long obj, int cnt);
+				unsigned long obj, int cnt,
+				int rr);
 
 struct ib_ah *hns_roce_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr);
 int hns_roce_query_ah(struct ib_ah *ibah, struct ib_ah_attr *ah_attr);
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 2227962..6396bc5 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -288,7 +288,7 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev,
 	}
 
 	hns_roce_bitmap_free(&hr_dev->mr_table.mtpt_bitmap,
-			     key_to_hw_index(mr->key));
+			     key_to_hw_index(mr->key), BITMAP_NO_RR);
 }
 
 static int hns_roce_mr_enable(struct hns_roce_dev *hr_dev,
diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
index 05db7d5..a64500f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
@@ -40,7 +40,7 @@ static int hns_roce_pd_alloc(struct hns_roce_dev *hr_dev, unsigned long *pdn)
 
 static void hns_roce_pd_free(struct hns_roce_dev *hr_dev, unsigned long pdn)
 {
-	hns_roce_bitmap_free(&hr_dev->pd_bitmap, pdn);
+	hns_roce_bitmap_free(&hr_dev->pd_bitmap, pdn, BITMAP_NO_RR);
 }
 
 int hns_roce_init_pd_table(struct hns_roce_dev *hr_dev)
@@ -121,7 +121,8 @@ int hns_roce_uar_alloc(struct hns_roce_dev *hr_dev, struct hns_roce_uar *uar)
 
 void hns_roce_uar_free(struct hns_roce_dev *hr_dev, struct hns_roce_uar *uar)
 {
-	hns_roce_bitmap_free(&hr_dev->uar_table.bitmap, uar->index);
+	hns_roce_bitmap_free(&hr_dev->uar_table.bitmap, uar->index,
+			     BITMAP_NO_RR);
 }
 
 int hns_roce_init_uar_table(struct hns_roce_dev *hr_dev)
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index e86dd8d..4775b5c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -250,7 +250,7 @@ void hns_roce_release_range_qp(struct hns_roce_dev *hr_dev, int base_qpn,
 	if (base_qpn < SQP_NUM)
 		return;
 
-	hns_roce_bitmap_free_range(&qp_table->bitmap, base_qpn, cnt);
+	hns_roce_bitmap_free_range(&qp_table->bitmap, base_qpn, cnt, BITMAP_RR);
 }
 
 static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev,
-- 
1.7.9.5


* [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (8 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  2016-11-07  8:17   ` Anurup M
  2016-11-04 16:36 ` [PATCH for-next 11/11] IB/hns: Fix for Checkpatch.pl comment style errors Salil Mehta
  10 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm, Shaobo Xu

From: Shaobo Xu <xushaobo2@huawei.com>

The IB core already implements the calculation of GIDs and the
management of GID tables, and it is now responsible for supplying
the GID query function. So the calculation of GIDs and the
management of GID tables in the RoCE driver are redundant.

This patch implements add_gid/del_gid to program the GIDs in the
RoCE driver, removes the redundant GID calculation and management
from the net device and inet notifier callbacks, and updates
query_gid.

Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |    2 -
 drivers/infiniband/hw/hns/hns_roce_main.c   |  270 +++++----------------------
 2 files changed, 48 insertions(+), 224 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 593a42a..9ef1cc3 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -429,8 +429,6 @@ struct hns_roce_ib_iboe {
 	struct net_device      *netdevs[HNS_ROCE_MAX_PORTS];
 	struct notifier_block	nb;
 	struct notifier_block	nb_inet;
-	/* 16 GID is shared by 6 port in v1 engine. */
-	union ib_gid		gid_table[HNS_ROCE_MAX_GID_NUM];
 	u8			phy_port[HNS_ROCE_MAX_PORTS];
 };
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 6770171..795ef97 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -35,52 +35,13 @@
 #include <rdma/ib_addr.h>
 #include <rdma/ib_smi.h>
 #include <rdma/ib_user_verbs.h>
+#include <rdma/ib_cache.h>
 #include "hns_roce_common.h"
 #include "hns_roce_device.h"
 #include "hns_roce_user.h"
 #include "hns_roce_hem.h"
 
 /**
- * hns_roce_addrconf_ifid_eui48 - Get default gid.
- * @eui: eui.
- * @vlan_id:  gid
- * @dev:  net device
- * Description:
- *    MAC convert to GID
- *        gid[0..7] = fe80 0000 0000 0000
- *        gid[8] = mac[0] ^ 2
- *        gid[9] = mac[1]
- *        gid[10] = mac[2]
- *        gid[11] = ff        (VLAN ID high byte (4 MS bits))
- *        gid[12] = fe        (VLAN ID low byte)
- *        gid[13] = mac[3]
- *        gid[14] = mac[4]
- *        gid[15] = mac[5]
- */
-static void hns_roce_addrconf_ifid_eui48(u8 *eui, u16 vlan_id,
-					 struct net_device *dev)
-{
-	memcpy(eui, dev->dev_addr, 3);
-	memcpy(eui + 5, dev->dev_addr + 3, 3);
-	if (vlan_id < 0x1000) {
-		eui[3] = vlan_id >> 8;
-		eui[4] = vlan_id & 0xff;
-	} else {
-		eui[3] = 0xff;
-		eui[4] = 0xfe;
-	}
-	eui[0] ^= 2;
-}
-
-static void hns_roce_make_default_gid(struct net_device *dev, union ib_gid *gid)
-{
-	memset(gid, 0, sizeof(*gid));
-	gid->raw[0] = 0xFE;
-	gid->raw[1] = 0x80;
-	hns_roce_addrconf_ifid_eui48(&gid->raw[8], 0xffff, dev);
-}
-
-/**
  * hns_get_gid_index - Get gid index.
  * @hr_dev: pointer to structure hns_roce_dev.
  * @port:  port, value range: 0 ~ MAX
@@ -96,30 +57,6 @@ int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index)
 	return gid_index * hr_dev->caps.num_ports + port;
 }
 
-static int hns_roce_set_gid(struct hns_roce_dev *hr_dev, u8 port, int gid_index,
-		     union ib_gid *gid)
-{
-	struct device *dev = &hr_dev->pdev->dev;
-	u8 gid_idx = 0;
-
-	if (gid_index >= hr_dev->caps.gid_table_len[port]) {
-		dev_err(dev, "gid_index %d illegal, port %d gid range: 0~%d\n",
-			gid_index, port, hr_dev->caps.gid_table_len[port] - 1);
-		return -EINVAL;
-	}
-
-	gid_idx = hns_get_gid_index(hr_dev, port, gid_index);
-
-	if (!memcmp(gid, &hr_dev->iboe.gid_table[gid_idx], sizeof(*gid)))
-		return -EINVAL;
-
-	memcpy(&hr_dev->iboe.gid_table[gid_idx], gid, sizeof(*gid));
-
-	hr_dev->hw->set_gid(hr_dev, port, gid_index, gid);
-
-	return 0;
-}
-
 static void hns_roce_set_mac(struct hns_roce_dev *hr_dev, u8 port, u8 *addr)
 {
 	u8 phy_port;
@@ -147,15 +84,44 @@ static void hns_roce_set_mtu(struct hns_roce_dev *hr_dev, u8 port, int mtu)
 	hr_dev->hw->set_mtu(hr_dev, phy_port, tmp);
 }
 
-static void hns_roce_update_gids(struct hns_roce_dev *hr_dev, int port)
+static int hns_roce_add_gid(struct ib_device *device, u8 port_num,
+			    unsigned int index, const union ib_gid *gid,
+			    const struct ib_gid_attr *attr, void **context)
+{
+	struct hns_roce_dev *hr_dev = to_hr_dev(device);
+	u8 port = port_num - 1;
+	unsigned long flags;
+
+	if (port >= hr_dev->caps.num_ports)
+		return -EINVAL;
+
+	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
+
+	hr_dev->hw->set_gid(hr_dev, port, index, (union ib_gid *)gid);
+
+	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
+
+	return 0;
+}
+
+static int hns_roce_del_gid(struct ib_device *device, u8 port_num,
+			    unsigned int index, void **context)
 {
-	struct ib_event event;
+	struct hns_roce_dev *hr_dev = to_hr_dev(device);
+	union ib_gid zgid = { {0} };
+	u8 port = port_num - 1;
+	unsigned long flags;
+
+	if (port >= hr_dev->caps.num_ports)
+		return -EINVAL;
 
-	/* Refresh gid in ib_cache */
-	event.device = &hr_dev->ib_dev;
-	event.element.port_num = port + 1;
-	event.event = IB_EVENT_GID_CHANGE;
-	ib_dispatch_event(&event);
+	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
+
+	hr_dev->hw->set_gid(hr_dev, port, index, &zgid);
+
+	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
+
+	return 0;
 }
 
 static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
@@ -164,8 +130,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
 	struct device *dev = &hr_dev->pdev->dev;
 	struct net_device *netdev;
 	unsigned long flags;
-	union ib_gid gid;
-	int ret = 0;
 
 	netdev = hr_dev->iboe.netdevs[port];
 	if (!netdev) {
@@ -181,10 +145,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
 	case NETDEV_REGISTER:
 	case NETDEV_CHANGEADDR:
 		hns_roce_set_mac(hr_dev, port, netdev->dev_addr);
-		hns_roce_make_default_gid(netdev, &gid);
-		ret = hns_roce_set_gid(hr_dev, port, 0, &gid);
-		if (!ret)
-			hns_roce_update_gids(hr_dev, port);
 		break;
 	case NETDEV_DOWN:
 		/*
@@ -197,7 +157,7 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
 	}
 
 	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
-	return ret;
+	return 0;
 }
 
 static int hns_roce_netdev_event(struct notifier_block *self,
@@ -224,118 +184,17 @@ static int hns_roce_netdev_event(struct notifier_block *self,
 	return NOTIFY_DONE;
 }
 
-static void hns_roce_addr_event(int event, struct net_device *event_netdev,
-				struct hns_roce_dev *hr_dev, union ib_gid *gid)
-{
-	struct hns_roce_ib_iboe *iboe = NULL;
-	int gid_table_len = 0;
-	unsigned long flags;
-	union ib_gid zgid;
-	u8 gid_idx = 0;
-	u8 port = 0;
-	int i = 0;
-	int free;
-	struct net_device *real_dev = rdma_vlan_dev_real_dev(event_netdev) ?
-				      rdma_vlan_dev_real_dev(event_netdev) :
-				      event_netdev;
-
-	if (event != NETDEV_UP && event != NETDEV_DOWN)
-		return;
-
-	iboe = &hr_dev->iboe;
-	while (port < hr_dev->caps.num_ports) {
-		if (real_dev == iboe->netdevs[port])
-			break;
-		port++;
-	}
-
-	if (port >= hr_dev->caps.num_ports) {
-		dev_dbg(&hr_dev->pdev->dev, "can't find netdev\n");
-		return;
-	}
-
-	memset(zgid.raw, 0, sizeof(zgid.raw));
-	free = -1;
-	gid_table_len = hr_dev->caps.gid_table_len[port];
-
-	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
-
-	for (i = 0; i < gid_table_len; i++) {
-		gid_idx = hns_get_gid_index(hr_dev, port, i);
-		if (!memcmp(gid->raw, iboe->gid_table[gid_idx].raw,
-			    sizeof(gid->raw)))
-			break;
-		if (free < 0 && !memcmp(zgid.raw,
-			iboe->gid_table[gid_idx].raw, sizeof(zgid.raw)))
-			free = i;
-	}
-
-	if (i >= gid_table_len) {
-		if (free < 0) {
-			spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
-			dev_dbg(&hr_dev->pdev->dev,
-				"gid_index overflow, port(%d)\n", port);
-			return;
-		}
-		if (!hns_roce_set_gid(hr_dev, port, free, gid))
-			hns_roce_update_gids(hr_dev, port);
-	} else if (event == NETDEV_DOWN) {
-		if (!hns_roce_set_gid(hr_dev, port, i, &zgid))
-			hns_roce_update_gids(hr_dev, port);
-	}
-
-	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
-}
-
-static int hns_roce_inet_event(struct notifier_block *self, unsigned long event,
-			       void *ptr)
-{
-	struct in_ifaddr *ifa = ptr;
-	struct hns_roce_dev *hr_dev;
-	struct net_device *dev = ifa->ifa_dev->dev;
-	union ib_gid gid;
-
-	ipv6_addr_set_v4mapped(ifa->ifa_address, (struct in6_addr *)&gid);
-
-	hr_dev = container_of(self, struct hns_roce_dev, iboe.nb_inet);
-
-	hns_roce_addr_event(event, dev, hr_dev, &gid);
-
-	return NOTIFY_DONE;
-}
-
-static int hns_roce_setup_mtu_gids(struct hns_roce_dev *hr_dev)
+static int hns_roce_setup_mtu_mac(struct hns_roce_dev *hr_dev)
 {
-	struct in_ifaddr *ifa_list = NULL;
-	union ib_gid gid = {{0} };
-	u32 ipaddr = 0;
-	int index = 0;
-	int ret = 0;
-	u8 i = 0;
+	u8 i;
 
 	for (i = 0; i < hr_dev->caps.num_ports; i++) {
 		hns_roce_set_mtu(hr_dev, i,
 				 ib_mtu_enum_to_int(hr_dev->caps.max_mtu));
 		hns_roce_set_mac(hr_dev, i, hr_dev->iboe.netdevs[i]->dev_addr);
-
-		if (hr_dev->iboe.netdevs[i]->ip_ptr) {
-			ifa_list = hr_dev->iboe.netdevs[i]->ip_ptr->ifa_list;
-			index = 1;
-			while (ifa_list) {
-				ipaddr = ifa_list->ifa_address;
-				ipv6_addr_set_v4mapped(ipaddr,
-						       (struct in6_addr *)&gid);
-				ret = hns_roce_set_gid(hr_dev, i, index, &gid);
-				if (ret)
-					break;
-				index++;
-				ifa_list = ifa_list->ifa_next;
-			}
-			hns_roce_update_gids(hr_dev, i);
-		}
 	}
 
-	return ret;
+	return 0;
 }
 
 static int hns_roce_query_device(struct ib_device *ib_dev,
@@ -444,31 +303,6 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
 static int hns_roce_query_gid(struct ib_device *ib_dev, u8 port_num, int index,
 			      union ib_gid *gid)
 {
-	struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
-	struct device *dev = &hr_dev->pdev->dev;
-	u8 gid_idx = 0;
-	u8 port;
-
-	if (port_num < 1 || port_num > hr_dev->caps.num_ports ||
-	    index >= hr_dev->caps.gid_table_len[port_num - 1]) {
-		dev_err(dev,
-			"port_num %d index %d illegal! correct range: port_num 1~%d index 0~%d!\n",
-			port_num, index, hr_dev->caps.num_ports,
-			hr_dev->caps.gid_table_len[port_num - 1] - 1);
-		return -EINVAL;
-	}
-
-	port = port_num - 1;
-	gid_idx = hns_get_gid_index(hr_dev, port, index);
-	if (gid_idx >= HNS_ROCE_MAX_GID_NUM) {
-		dev_err(dev, "port_num %d index %d illegal! total gid num %d!\n",
-			port_num, index, HNS_ROCE_MAX_GID_NUM);
-		return -EINVAL;
-	}
-
-	memcpy(gid->raw, hr_dev->iboe.gid_table[gid_idx].raw,
-	       HNS_ROCE_GID_SIZE);
-
 	return 0;
 }
 
@@ -646,6 +480,8 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 	ib_dev->get_link_layer		= hns_roce_get_link_layer;
 	ib_dev->get_netdev		= hns_roce_get_netdev;
 	ib_dev->query_gid		= hns_roce_query_gid;
+	ib_dev->add_gid			= hns_roce_add_gid;
+	ib_dev->del_gid			= hns_roce_del_gid;
 	ib_dev->query_pkey		= hns_roce_query_pkey;
 	ib_dev->alloc_ucontext		= hns_roce_alloc_ucontext;
 	ib_dev->dealloc_ucontext	= hns_roce_dealloc_ucontext;
@@ -688,32 +524,22 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 		return ret;
 	}
 
-	ret = hns_roce_setup_mtu_gids(hr_dev);
+	ret = hns_roce_setup_mtu_mac(hr_dev);
 	if (ret) {
-		dev_err(dev, "roce_setup_mtu_gids failed!\n");
-		goto error_failed_setup_mtu_gids;
+		dev_err(dev, "setup_mtu_mac failed!\n");
+		goto error_failed_setup_mtu_mac;
 	}
 
 	iboe->nb.notifier_call = hns_roce_netdev_event;
 	ret = register_netdevice_notifier(&iboe->nb);
 	if (ret) {
 		dev_err(dev, "register_netdevice_notifier failed!\n");
-		goto error_failed_setup_mtu_gids;
-	}
-
-	iboe->nb_inet.notifier_call = hns_roce_inet_event;
-	ret = register_inetaddr_notifier(&iboe->nb_inet);
-	if (ret) {
-		dev_err(dev, "register inet addr notifier failed!\n");
-		goto error_failed_register_inetaddr_notifier;
+		goto error_failed_setup_mtu_mac;
 	}
 
 	return 0;
 
-error_failed_register_inetaddr_notifier:
-	unregister_netdevice_notifier(&iboe->nb);
-
-error_failed_setup_mtu_gids:
+error_failed_setup_mtu_mac:
 	ib_unregister_device(ib_dev);
 
 	return ret;
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH for-next 11/11] IB/hns: Fix for Checkpatch.pl comment style errors
  2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
                   ` (9 preceding siblings ...)
  2016-11-04 16:36 ` [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management Salil Mehta
@ 2016-11-04 16:36 ` Salil Mehta
  10 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-04 16:36 UTC (permalink / raw)
  To: dledford
  Cc: salil.mehta, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

This patch corrects the comment style errors caught by the
checkpatch.pl script.

Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_cmd.c    |    8 ++--
 drivers/infiniband/hw/hns/hns_roce_device.h |   28 +++++++-------
 drivers/infiniband/hw/hns/hns_roce_eq.c     |    6 +--
 drivers/infiniband/hw/hns/hns_roce_hem.c    |    6 +--
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c  |   56 +++++++++++++--------------
 drivers/infiniband/hw/hns/hns_roce_main.c   |   28 +++++++-------
 6 files changed, 66 insertions(+), 66 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
index 2a0b6c0..8c1f7a6 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
@@ -216,10 +216,10 @@ static int __hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
 		goto out;
 
 	/*
-	* It is timeout when wait_for_completion_timeout return 0
-	* The return value is the time limit set in advance
-	* how many seconds showing
-	*/
+	 * It is timeout when wait_for_completion_timeout return 0
+	 * The return value is the time limit set in advance
+	 * how many seconds showing
+	 */
 	if (!wait_for_completion_timeout(&context->done,
 					 msecs_to_jiffies(timeout))) {
 		dev_err(dev, "[cmd]wait_for_completion_timeout timeout\n");
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 9ef1cc3..e48464d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -201,9 +201,9 @@ struct hns_roce_bitmap {
 /* Order = 0: bitmap is biggest, order = max bitmap is least (only a bit) */
 /* Every bit repesent to a partner free/used status in bitmap */
 /*
-* Initial, bits of other bitmap are all 0 except that a bit of max_order is 1
-* Bit = 1 represent to idle and available; bit = 0: not available
-*/
+ * Initial, bits of other bitmap are all 0 except that a bit of max_order is 1
+ * Bit = 1 represent to idle and available; bit = 0: not available
+ */
 struct hns_roce_buddy {
 	/* Members point to every order level bitmap */
 	unsigned long **bits;
@@ -365,25 +365,25 @@ struct hns_roce_cmdq {
 	struct mutex		hcr_mutex;
 	struct semaphore	poll_sem;
 	/*
-	* Event mode: cmd register mutex protection,
-	* ensure to not exceed max_cmds and user use limit region
-	*/
+	 * Event mode: cmd register mutex protection,
+	 * ensure to not exceed max_cmds and user use limit region
+	 */
 	struct semaphore	event_sem;
 	int			max_cmds;
 	spinlock_t		context_lock;
 	int			free_head;
 	struct hns_roce_cmd_context *context;
 	/*
-	* Result of get integer part
-	* which max_comds compute according a power of 2
-	*/
+	 * Result of get integer part
+	 * which max_comds compute according a power of 2
+	 */
 	u16			token_mask;
 	/*
-	* Process whether use event mode, init default non-zero
-	* After the event queue of cmd event ready,
-	* can switch into event mode
-	* close device, switch into poll mode(non event mode)
-	*/
+	 * Process whether use event mode, init default non-zero
+	 * After the event queue of cmd event ready,
+	 * can switch into event mode
+	 * close device, switch into poll mode(non event mode)
+	 */
 	u8			use_events;
 	u8			toggle;
 };
diff --git a/drivers/infiniband/hw/hns/hns_roce_eq.c b/drivers/infiniband/hw/hns/hns_roce_eq.c
index 21e21b0..50f8649 100644
--- a/drivers/infiniband/hw/hns/hns_roce_eq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_eq.c
@@ -371,9 +371,9 @@ static int hns_roce_aeq_ovf_int(struct hns_roce_dev *hr_dev,
 	int i = 0;
 
 	/**
-	* AEQ overflow ECC mult bit err CEQ overflow alarm
-	* must clear interrupt, mask irq, clear irq, cancel mask operation
-	*/
+	 * AEQ overflow ECC mult bit err CEQ overflow alarm
+	 * must clear interrupt, mask irq, clear irq, cancel mask operation
+	 */
 	aeshift_val = roce_read(hr_dev, ROCEE_CAEP_AEQC_AEQE_SHIFT_REG);
 
 	if (roce_get_bit(aeshift_val,
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 250d8f2..c5104e0 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -80,9 +80,9 @@ struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev, int npages,
 			--order;
 
 		/*
-		* Alloc memory one time. If failed, don't alloc small block
-		* memory, directly return fail.
-		*/
+		 * Alloc memory one time. If failed, don't alloc small block
+		 * memory, directly return fail.
+		 */
 		mem = &chunk->mem[chunk->npages];
 		buf = dma_alloc_coherent(&hr_dev->pdev->dev, PAGE_SIZE << order,
 				&sg_dma_address(mem), gfp_mask);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
index 76edebe..8107f8c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
@@ -1352,9 +1352,9 @@ static void __hns_roce_v1_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn,
 	}
 
 	/*
-	* Now backwards through the CQ, removing CQ entries
-	* that match our QP by overwriting them with next entries.
-	*/
+	 * Now backwards through the CQ, removing CQ entries
+	 * that match our QP by overwriting them with next entries.
+	 */
 	while ((int) --prod_index - (int) hr_cq->cons_index >= 0) {
 		cqe = get_cqe(hr_cq, prod_index & hr_cq->ib_cq.cqe);
 		if ((roce_get_field(cqe->cqe_byte_16, CQE_BYTE_16_LOCAL_QPN_M,
@@ -1376,9 +1376,9 @@ static void __hns_roce_v1_cq_clean(struct hns_roce_cq *hr_cq, u32 qpn,
 	if (nfreed) {
 		hr_cq->cons_index += nfreed;
 		/*
-		* Make sure update of buffer contents is done before
-		* updating consumer index.
-		*/
+		 * Make sure update of buffer contents is done before
+		 * updating consumer index.
+		 */
 		wmb();
 
 		hns_roce_v1_cq_set_ci(hr_cq, hr_cq->cons_index);
@@ -1473,7 +1473,7 @@ void hns_roce_v1_write_cqc(struct hns_roce_dev *hr_dev,
 	roce_set_bit(cq_context->cqc_byte_32,
 		     CQ_CQNTEXT_CQC_BYTE_32_TYPE_OF_COMPLETION_NOTIFICATION_S,
 		     0);
-	/*The initial value of cq's ci is 0 */
+	/* The initial value of cq's ci is 0 */
 	roce_set_field(cq_context->cqc_byte_32,
 		       CQ_CONTEXT_CQC_BYTE_32_CQ_CONS_IDX_M,
 		       CQ_CONTEXT_CQC_BYTE_32_CQ_CONS_IDX_S, 0);
@@ -1490,9 +1490,9 @@ int hns_roce_v1_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
 	notification_flag = (flags & IB_CQ_SOLICITED_MASK) ==
 			    IB_CQ_SOLICITED ? CQ_DB_REQ_NOT : CQ_DB_REQ_NOT_SOL;
 	/*
-	* flags = 0; Notification Flag = 1, next
-	* flags = 1; Notification Flag = 0, solocited
-	*/
+	 * flags = 0; Notification Flag = 1, next
+	 * flags = 1; Notification Flag = 0, solocited
+	 */
 	doorbell[0] = hr_cq->cons_index & ((hr_cq->cq_depth << 1) - 1);
 	roce_set_bit(doorbell[1], ROCEE_DB_OTHERS_H_ROCEE_DB_OTH_HW_SYNS_S, 1);
 	roce_set_field(doorbell[1], ROCEE_DB_OTHERS_H_ROCEE_DB_OTH_CMD_M,
@@ -1647,10 +1647,10 @@ static int hns_roce_v1_poll_one(struct hns_roce_cq *hr_cq,
 		wq = &(*cur_qp)->sq;
 		if ((*cur_qp)->sq_signal_bits) {
 			/*
-			* If sg_signal_bit is 1,
-			* firstly tail pointer updated to wqe
-			* which current cqe correspond to
-			*/
+			 * If sg_signal_bit is 1,
+			 * firstly tail pointer updated to wqe
+			 * which current cqe correspond to
+			 */
 			wqe_ctr = (u16)roce_get_field(cqe->cqe_byte_4,
 						      CQE_BYTE_4_WQE_INDEX_M,
 						      CQE_BYTE_4_WQE_INDEX_S);
@@ -2072,11 +2072,11 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 	}
 
 	/*
-	*Reset to init
-	*	Mandatory param:
-	*	IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT | IB_QP_ACCESS_FLAGS
-	*	Optional param: NA
-	*/
+	 * Reset to init
+	 *	Mandatory param:
+	 *	IB_QP_STATE | IB_QP_PKEY_INDEX | IB_QP_PORT | IB_QP_ACCESS_FLAGS
+	 *	Optional param: NA
+	 */
 	if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
 		roce_set_field(context->qpc_bytes_4,
 			       QP_CONTEXT_QPC_BYTES_4_TRANSPORT_SERVICE_TYPE_M,
@@ -2584,9 +2584,9 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 	}
 
 	/*
-	* Use rst2init to instead of init2init with drv,
-	* need to hw to flash RQ HEAD by DB again
-	*/
+	 * Use rst2init to instead of init2init with drv,
+	 * need to hw to flash RQ HEAD by DB again
+	 */
 	if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
 		/* Memory barrier */
 		wmb();
@@ -2925,9 +2925,9 @@ static void hns_roce_v1_destroy_qp_common(struct hns_roce_dev *hr_dev,
 	if (hr_qp->ibqp.qp_type == IB_QPT_RC) {
 		if (hr_qp->state != IB_QPS_RESET) {
 			/*
-			* Set qp to ERR,
-			* waiting for hw complete processing all dbs
-			*/
+			 * Set qp to ERR,
+			 * waiting for hw complete processing all dbs
+			 */
 			if (hns_roce_v1_qp_modify(hr_dev, NULL,
 					to_hns_roce_state(
 						(enum ib_qp_state)hr_qp->state),
@@ -2940,9 +2940,9 @@ static void hns_roce_v1_destroy_qp_common(struct hns_roce_dev *hr_dev,
 			sdbisusepr_val = roce_read(hr_dev,
 					 ROCEE_SDB_ISSUE_PTR_REG);
 			/*
-			* Query db process status,
-			* until hw process completely
-			*/
+			 * Query db process status,
+			 * until hw process completely
+			 */
 			end = msecs_to_jiffies(
 			      HNS_ROCE_QP_DESTROY_TIMEOUT_MSECS) + jiffies;
 			do {
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 795ef97..914d0ac 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -148,8 +148,8 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
 		break;
 	case NETDEV_DOWN:
 		/*
-		* In v1 engine, only support all ports closed together.
-		*/
+		 * In v1 engine, only support all ports closed together.
+		 */
 		break;
 	default:
 		dev_dbg(dev, "NETDEV event = 0x%x!\n", (u32)(event));
@@ -773,10 +773,10 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
 }
 
 /**
-* hns_roce_setup_hca - setup host channel adapter
-* @hr_dev: pointer to hns roce device
-* Return : int
-*/
+ * hns_roce_setup_hca - setup host channel adapter
+ * @hr_dev: pointer to hns roce device
+ * Return : int
+ */
 static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
 {
 	int ret;
@@ -841,11 +841,11 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
 }
 
 /**
-* hns_roce_probe - RoCE driver entrance
-* @pdev: pointer to platform device
-* Return : int
-*
-*/
+ * hns_roce_probe - RoCE driver entrance
+ * @pdev: pointer to platform device
+ * Return : int
+ *
+ */
 static int hns_roce_probe(struct platform_device *pdev)
 {
 	int ret;
@@ -958,9 +958,9 @@ static int hns_roce_probe(struct platform_device *pdev)
 }
 
 /**
-* hns_roce_remove - remove RoCE device
-* @pdev: pointer to platform device
-*/
+ * hns_roce_remove - remove RoCE device
+ * @pdev: pointer to platform device
+ */
 static int hns_roce_remove(struct platform_device *pdev)
 {
 	struct hns_roce_dev *hr_dev = platform_get_drvdata(pdev);
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1
  2016-11-04 16:36 ` [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1 Salil Mehta
@ 2016-11-07  5:45   ` Anurup M
  2016-11-07 11:45     ` Salil Mehta
  0 siblings, 1 reply; 23+ messages in thread
From: Anurup M @ 2016-11-07  5:45 UTC (permalink / raw)
  To: Salil Mehta, dledford
  Cc: linux-rdma, netdev, mehta.salil.lnk, linux-kernel, linuxarm



On 11/4/2016 10:06 PM, Salil Mehta wrote:
> From: Lijun Ou <oulijun@huawei.com>
> 
> In old code, It only added the interface for querying non-specific
> QP. This patch mainly adds an interface for querying QP1.
> 
> Signed-off-by: Lijun Ou <oulijun@huawei.com>
> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_hw_v1.c |   87 +++++++++++++++++++++++++++-
>  drivers/infiniband/hw/hns/hns_roce_hw_v1.h |    6 +-
>  2 files changed, 90 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> index 71232e5..ca8b784 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> @@ -2630,8 +2630,82 @@ static int hns_roce_v1_query_qpc(struct hns_roce_dev *hr_dev,
>  	return ret;
>  }
>  
> -int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
> -			 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
> +static int hns_roce_v1_q_sqp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
> +			     int qp_attr_mask,
> +			     struct ib_qp_init_attr *qp_init_attr)
> +{
> +	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> +	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> +	struct hns_roce_sqp_context *context;
> +	u32 addr;
> +
> +	context = kzalloc(sizeof(*context), GFP_KERNEL);
> +	if (!context)
> +		return -ENOMEM;
> +
Do we really need a dynamic allocation here? The size is fixed and the memory is
only used inside this function. I think it is better to use a static allocation.

> +	mutex_lock(&hr_qp->mutex);
> +
> +	if (hr_qp->state == IB_QPS_RESET) {
I think alloc can be moved after this check (if dynamic alloc is really needed).
> +		qp_attr->qp_state = IB_QPS_RESET;
> +		goto done;
> +	}
> +
> +	addr = ROCEE_QP1C_CFG0_0_REG + hr_qp->port * sizeof(*context);
> +	context->qp1c_bytes_4 = roce_read(hr_dev, addr);
> +	context->sq_rq_bt_l = roce_read(hr_dev, addr + 1);
> +	context->qp1c_bytes_12 = roce_read(hr_dev, addr + 2);
> +	context->qp1c_bytes_16 = roce_read(hr_dev, addr + 3);
> +	context->qp1c_bytes_20 = roce_read(hr_dev, addr + 4);
> +	context->cur_rq_wqe_ba_l = roce_read(hr_dev, addr + 5);
> +	context->qp1c_bytes_28 = roce_read(hr_dev, addr + 6);
> +	context->qp1c_bytes_32 = roce_read(hr_dev, addr + 7);
> +	context->cur_sq_wqe_ba_l = roce_read(hr_dev, addr + 8);
> +	context->qp1c_bytes_40 = roce_read(hr_dev, addr + 9);
> +
> +	hr_qp->state = roce_get_field(context->qp1c_bytes_4,
> +				      QP1C_BYTES_4_QP_STATE_M,
> +				      QP1C_BYTES_4_QP_STATE_S);
> +	qp_attr->qp_state	= hr_qp->state;
> +	qp_attr->path_mtu	= IB_MTU_256;
> +	qp_attr->path_mig_state	= IB_MIG_ARMED;
> +	qp_attr->qkey		= QKEY_VAL;
> +	qp_attr->rq_psn		= 0;
> +	qp_attr->sq_psn		= 0;
> +	qp_attr->dest_qp_num	= 1;
> +	qp_attr->qp_access_flags = 6;
> +
> +	qp_attr->pkey_index = roce_get_field(context->qp1c_bytes_20,
> +					     QP1C_BYTES_20_PKEY_IDX_M,
> +					     QP1C_BYTES_20_PKEY_IDX_S);
> +	qp_attr->port_num = hr_qp->port + 1;
> +	qp_attr->sq_draining = 0;
> +	qp_attr->max_rd_atomic = 0;
> +	qp_attr->max_dest_rd_atomic = 0;
> +	qp_attr->min_rnr_timer = 0;
> +	qp_attr->timeout = 0;
> +	qp_attr->retry_cnt = 0;
> +	qp_attr->rnr_retry = 0;
> +	qp_attr->alt_timeout = 0;
> +
> +done:
> +	qp_attr->cur_qp_state = qp_attr->qp_state;
> +	qp_attr->cap.max_recv_wr = hr_qp->rq.wqe_cnt;
> +	qp_attr->cap.max_recv_sge = hr_qp->rq.max_gs;
> +	qp_attr->cap.max_send_wr = hr_qp->sq.wqe_cnt;
> +	qp_attr->cap.max_send_sge = hr_qp->sq.max_gs;
> +	qp_attr->cap.max_inline_data = 0;
> +	qp_init_attr->cap = qp_attr->cap;
> +	qp_init_attr->create_flags = 0;
> +
> +	mutex_unlock(&hr_qp->mutex);
> +	kfree(context);
> +
> +	return 0;
> +}
> +
> +static int hns_roce_v1_q_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
> +			    int qp_attr_mask,
> +			    struct ib_qp_init_attr *qp_init_attr)
>  {
>  	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
>  	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> @@ -2767,6 +2841,15 @@ int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
>  	return ret;
>  }
>  
> +int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
> +			 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
> +{
> +	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> +
> +	return hr_qp->doorbell_qpn <= 1 ?
> +		hns_roce_v1_q_sqp(ibqp, qp_attr, qp_attr_mask, qp_init_attr) :
> +		hns_roce_v1_q_qp(ibqp, qp_attr, qp_attr_mask, qp_init_attr);
> +}
>  static void hns_roce_v1_destroy_qp_common(struct hns_roce_dev *hr_dev,
>  					  struct hns_roce_qp *hr_qp,
>  					  int is_user)
> diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> index 539b0a3b..2e1878b 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> @@ -480,13 +480,17 @@ struct hns_roce_sqp_context {
>  	u32 qp1c_bytes_12;
>  	u32 qp1c_bytes_16;
>  	u32 qp1c_bytes_20;
> -	u32 qp1c_bytes_28;
>  	u32 cur_rq_wqe_ba_l;
> +	u32 qp1c_bytes_28;
>  	u32 qp1c_bytes_32;
>  	u32 cur_sq_wqe_ba_l;
>  	u32 qp1c_bytes_40;
>  };
>  
> +#define QP1C_BYTES_4_QP_STATE_S 0
> +#define QP1C_BYTES_4_QP_STATE_M   \
> +	(((1UL << 3) - 1) << QP1C_BYTES_4_QP_STATE_S)
> +
>  #define QP1C_BYTES_4_SQ_WQE_SHIFT_S 8
>  #define QP1C_BYTES_4_SQ_WQE_SHIFT_M   \
>  	(((1UL << 4) - 1) << QP1C_BYTES_4_SQ_WQE_SHIFT_S)
> 
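A rough sketch of the on-stack alternative, reusing the names from the
quoted patch (hypothetical illustration only, not a tested replacement):

	static int hns_roce_v1_q_sqp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
				     int qp_attr_mask,
				     struct ib_qp_init_attr *qp_init_attr)
	{
		struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
		struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
		/* fixed size and function-local, so no kzalloc()/kfree() pair */
		struct hns_roce_sqp_context context = { 0 };
		u32 addr;

		mutex_lock(&hr_qp->mutex);

		if (hr_qp->state == IB_QPS_RESET) {
			qp_attr->qp_state = IB_QPS_RESET;
			goto done;
		}

		addr = ROCEE_QP1C_CFG0_0_REG + hr_qp->port * sizeof(context);
		context.qp1c_bytes_4 = roce_read(hr_dev, addr);
		/* ... read the remaining registers and fill qp_attr as in the patch ... */

	done:
		/* ... fill the common qp_attr/qp_init_attr fields as in the patch ... */
		mutex_unlock(&hr_qp->mutex);

		return 0;	/* the -ENOMEM error path disappears */
	}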

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management
  2016-11-04 16:36 ` [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management Salil Mehta
@ 2016-11-07  8:17   ` Anurup M
  2016-11-07 10:04     ` Shaobo Xu
  0 siblings, 1 reply; 23+ messages in thread
From: Anurup M @ 2016-11-07  8:17 UTC (permalink / raw)
  To: Salil Mehta, dledford
  Cc: linux-rdma, netdev, mehta.salil.lnk, linux-kernel, linuxarm



On 11/4/2016 10:06 PM, Salil Mehta wrote:
> From: Shaobo Xu <xushaobo2@huawei.com>
> 
> IB core has implemented the calculation of GIDs and the management
> of GID tables, and it is now responsible to supply query function
> for GIDs. So the calculation of GIDs and the management of GID
> tables in the RoCE driver is redundant.
> 
> The patch is to implement the add_gid/del_gid to set the GIDs in
> the RoCE driver, remove the redundant calculation and management of
> GIDs in the notifier call of the net device and the inet, and
> update the query_gid.
> 
> Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_device.h |    2 -
>  drivers/infiniband/hw/hns/hns_roce_main.c   |  270 +++++----------------------
>  2 files changed, 48 insertions(+), 224 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
> index 593a42a..9ef1cc3 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_device.h
> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
> @@ -429,8 +429,6 @@ struct hns_roce_ib_iboe {
>  	struct net_device      *netdevs[HNS_ROCE_MAX_PORTS];
>  	struct notifier_block	nb;
>  	struct notifier_block	nb_inet;
> -	/* 16 GID is shared by 6 port in v1 engine. */
> -	union ib_gid		gid_table[HNS_ROCE_MAX_GID_NUM];
>  	u8			phy_port[HNS_ROCE_MAX_PORTS];
>  };
>  
> diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
> index 6770171..795ef97 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_main.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_main.c
> @@ -35,52 +35,13 @@
>  #include <rdma/ib_addr.h>
>  #include <rdma/ib_smi.h>
>  #include <rdma/ib_user_verbs.h>
> +#include <rdma/ib_cache.h>
>  #include "hns_roce_common.h"
>  #include "hns_roce_device.h"
>  #include "hns_roce_user.h"
>  #include "hns_roce_hem.h"
>  
>  /**
> - * hns_roce_addrconf_ifid_eui48 - Get default gid.
> - * @eui: eui.
> - * @vlan_id:  gid
> - * @dev:  net device
> - * Description:
> - *    MAC convert to GID
> - *        gid[0..7] = fe80 0000 0000 0000
> - *        gid[8] = mac[0] ^ 2
> - *        gid[9] = mac[1]
> - *        gid[10] = mac[2]
> - *        gid[11] = ff        (VLAN ID high byte (4 MS bits))
> - *        gid[12] = fe        (VLAN ID low byte)
> - *        gid[13] = mac[3]
> - *        gid[14] = mac[4]
> - *        gid[15] = mac[5]
> - */
> -static void hns_roce_addrconf_ifid_eui48(u8 *eui, u16 vlan_id,
> -					 struct net_device *dev)
> -{
> -	memcpy(eui, dev->dev_addr, 3);
> -	memcpy(eui + 5, dev->dev_addr + 3, 3);
> -	if (vlan_id < 0x1000) {
> -		eui[3] = vlan_id >> 8;
> -		eui[4] = vlan_id & 0xff;
> -	} else {
> -		eui[3] = 0xff;
> -		eui[4] = 0xfe;
> -	}
> -	eui[0] ^= 2;
> -}
> -
> -static void hns_roce_make_default_gid(struct net_device *dev, union ib_gid *gid)
> -{
> -	memset(gid, 0, sizeof(*gid));
> -	gid->raw[0] = 0xFE;
> -	gid->raw[1] = 0x80;
> -	hns_roce_addrconf_ifid_eui48(&gid->raw[8], 0xffff, dev);
> -}
> -
> -/**
>   * hns_get_gid_index - Get gid index.
>   * @hr_dev: pointer to structure hns_roce_dev.
>   * @port:  port, value range: 0 ~ MAX
> @@ -96,30 +57,6 @@ int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index)
>  	return gid_index * hr_dev->caps.num_ports + port;
>  }
>  
> -static int hns_roce_set_gid(struct hns_roce_dev *hr_dev, u8 port, int gid_index,
> -		     union ib_gid *gid)
> -{
> -	struct device *dev = &hr_dev->pdev->dev;
> -	u8 gid_idx = 0;
> -
> -	if (gid_index >= hr_dev->caps.gid_table_len[port]) {
> -		dev_err(dev, "gid_index %d illegal, port %d gid range: 0~%d\n",
> -			gid_index, port, hr_dev->caps.gid_table_len[port] - 1);
> -		return -EINVAL;
> -	}
> -
> -	gid_idx = hns_get_gid_index(hr_dev, port, gid_index);
> -
> -	if (!memcmp(gid, &hr_dev->iboe.gid_table[gid_idx], sizeof(*gid)))
> -		return -EINVAL;
> -
> -	memcpy(&hr_dev->iboe.gid_table[gid_idx], gid, sizeof(*gid));
> -
> -	hr_dev->hw->set_gid(hr_dev, port, gid_index, gid);
> -
> -	return 0;
> -}
> -
>  static void hns_roce_set_mac(struct hns_roce_dev *hr_dev, u8 port, u8 *addr)
>  {
>  	u8 phy_port;
> @@ -147,15 +84,44 @@ static void hns_roce_set_mtu(struct hns_roce_dev *hr_dev, u8 port, int mtu)
>  	hr_dev->hw->set_mtu(hr_dev, phy_port, tmp);
>  }
>  
> -static void hns_roce_update_gids(struct hns_roce_dev *hr_dev, int port)
> +static int hns_roce_add_gid(struct ib_device *device, u8 port_num,
> +			    unsigned int index, const union ib_gid *gid,
> +			    const struct ib_gid_attr *attr, void **context)
> +{
> +	struct hns_roce_dev *hr_dev = to_hr_dev(device);
> +	u8 port = port_num - 1;
> +	unsigned long flags;
> +
> +	if (port >= hr_dev->caps.num_ports)
> +		return -EINVAL;
> +
> +	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
> +
> +	hr_dev->hw->set_gid(hr_dev, port, index, (union ib_gid *)gid);
> +
> +	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
> +
> +	return 0;
> +}
> +
> +static int hns_roce_del_gid(struct ib_device *device, u8 port_num,
> +			    unsigned int index, void **context)
>  {
> -	struct ib_event event;
> +	struct hns_roce_dev *hr_dev = to_hr_dev(device);
> +	union ib_gid zgid = { {0} };
> +	u8 port = port_num - 1;
> +	unsigned long flags;
> +
> +	if (port >= hr_dev->caps.num_ports)
> +		return -EINVAL;
>  
> -	/* Refresh gid in ib_cache */
> -	event.device = &hr_dev->ib_dev;
> -	event.element.port_num = port + 1;
> -	event.event = IB_EVENT_GID_CHANGE;
> -	ib_dispatch_event(&event);
> +	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
> +
> +	hr_dev->hw->set_gid(hr_dev, port, index, &zgid);
zgid has the value zero. And after this call, where is zgid used?
> +
> +	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
> +
> +	return 0;
>  }
>  
>  static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
> @@ -164,8 +130,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>  	struct device *dev = &hr_dev->pdev->dev;
>  	struct net_device *netdev;
>  	unsigned long flags;
> -	union ib_gid gid;
> -	int ret = 0;
>  
>  	netdev = hr_dev->iboe.netdevs[port];
>  	if (!netdev) {
> @@ -181,10 +145,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>  	case NETDEV_REGISTER:
>  	case NETDEV_CHANGEADDR:
>  		hns_roce_set_mac(hr_dev, port, netdev->dev_addr);
> -		hns_roce_make_default_gid(netdev, &gid);
> -		ret = hns_roce_set_gid(hr_dev, port, 0, &gid);
> -		if (!ret)
> -			hns_roce_update_gids(hr_dev, port);
>  		break;
>  	case NETDEV_DOWN:
>  		/*
> @@ -197,7 +157,7 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>  	}
>  
>  	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
> -	return ret;
> +	return 0;
>  }
>  
>  static int hns_roce_netdev_event(struct notifier_block *self,
> @@ -224,118 +184,17 @@ static int hns_roce_netdev_event(struct notifier_block *self,
>  	return NOTIFY_DONE;
>  }
>  
> -static void hns_roce_addr_event(int event, struct net_device *event_netdev,
> -				struct hns_roce_dev *hr_dev, union ib_gid *gid)
> -{
> -	struct hns_roce_ib_iboe *iboe = NULL;
> -	int gid_table_len = 0;
> -	unsigned long flags;
> -	union ib_gid zgid;
> -	u8 gid_idx = 0;
> -	u8 port = 0;
> -	int i = 0;
> -	int free;
> -	struct net_device *real_dev = rdma_vlan_dev_real_dev(event_netdev) ?
> -				      rdma_vlan_dev_real_dev(event_netdev) :
> -				      event_netdev;
> -
> -	if (event != NETDEV_UP && event != NETDEV_DOWN)
> -		return;
> -
> -	iboe = &hr_dev->iboe;
> -	while (port < hr_dev->caps.num_ports) {
> -		if (real_dev == iboe->netdevs[port])
> -			break;
> -		port++;
> -	}
> -
> -	if (port >= hr_dev->caps.num_ports) {
> -		dev_dbg(&hr_dev->pdev->dev, "can't find netdev\n");
> -		return;
> -	}
> -
> -	memset(zgid.raw, 0, sizeof(zgid.raw));
> -	free = -1;
> -	gid_table_len = hr_dev->caps.gid_table_len[port];
> -
> -	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
> -
> -	for (i = 0; i < gid_table_len; i++) {
> -		gid_idx = hns_get_gid_index(hr_dev, port, i);
> -		if (!memcmp(gid->raw, iboe->gid_table[gid_idx].raw,
> -			    sizeof(gid->raw)))
> -			break;
> -		if (free < 0 && !memcmp(zgid.raw,
> -			iboe->gid_table[gid_idx].raw, sizeof(zgid.raw)))
> -			free = i;
> -	}
> -
> -	if (i >= gid_table_len) {
> -		if (free < 0) {
> -			spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
> -			dev_dbg(&hr_dev->pdev->dev,
> -				"gid_index overflow, port(%d)\n", port);
> -			return;
> -		}
> -		if (!hns_roce_set_gid(hr_dev, port, free, gid))
> -			hns_roce_update_gids(hr_dev, port);
> -	} else if (event == NETDEV_DOWN) {
> -		if (!hns_roce_set_gid(hr_dev, port, i, &zgid))
> -			hns_roce_update_gids(hr_dev, port);
> -	}
> -
> -	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
> -}
> -
> -static int hns_roce_inet_event(struct notifier_block *self, unsigned long event,
> -			       void *ptr)
> -{
> -	struct in_ifaddr *ifa = ptr;
> -	struct hns_roce_dev *hr_dev;
> -	struct net_device *dev = ifa->ifa_dev->dev;
> -	union ib_gid gid;
> -
> -	ipv6_addr_set_v4mapped(ifa->ifa_address, (struct in6_addr *)&gid);
> -
> -	hr_dev = container_of(self, struct hns_roce_dev, iboe.nb_inet);
> -
> -	hns_roce_addr_event(event, dev, hr_dev, &gid);
> -
> -	return NOTIFY_DONE;
> -}
> -
> -static int hns_roce_setup_mtu_gids(struct hns_roce_dev *hr_dev)
> +static int hns_roce_setup_mtu_mac(struct hns_roce_dev *hr_dev)
>  {
> -	struct in_ifaddr *ifa_list = NULL;
> -	union ib_gid gid = {{0} };
> -	u32 ipaddr = 0;
> -	int index = 0;
> -	int ret = 0;
> -	u8 i = 0;
> +	u8 i;
>  
>  	for (i = 0; i < hr_dev->caps.num_ports; i++) {
>  		hns_roce_set_mtu(hr_dev, i,
>  				 ib_mtu_enum_to_int(hr_dev->caps.max_mtu));
>  		hns_roce_set_mac(hr_dev, i, hr_dev->iboe.netdevs[i]->dev_addr);
> -
> -		if (hr_dev->iboe.netdevs[i]->ip_ptr) {
> -			ifa_list = hr_dev->iboe.netdevs[i]->ip_ptr->ifa_list;
> -			index = 1;
> -			while (ifa_list) {
> -				ipaddr = ifa_list->ifa_address;
> -				ipv6_addr_set_v4mapped(ipaddr,
> -						       (struct in6_addr *)&gid);
> -				ret = hns_roce_set_gid(hr_dev, i, index, &gid);
> -				if (ret)
> -					break;
> -				index++;
> -				ifa_list = ifa_list->ifa_next;
> -			}
> -			hns_roce_update_gids(hr_dev, i);
> -		}
>  	}
>  
> -	return ret;
> +	return 0;
>  }
>  
>  static int hns_roce_query_device(struct ib_device *ib_dev,
> @@ -444,31 +303,6 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
>  static int hns_roce_query_gid(struct ib_device *ib_dev, u8 port_num, int index,
>  			      union ib_gid *gid)
>  {
> -	struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
> -	struct device *dev = &hr_dev->pdev->dev;
> -	u8 gid_idx = 0;
> -	u8 port;
> -
> -	if (port_num < 1 || port_num > hr_dev->caps.num_ports ||
> -	    index >= hr_dev->caps.gid_table_len[port_num - 1]) {
> -		dev_err(dev,
> -			"port_num %d index %d illegal! correct range: port_num 1~%d index 0~%d!\n",
> -			port_num, index, hr_dev->caps.num_ports,
> -			hr_dev->caps.gid_table_len[port_num - 1] - 1);
> -		return -EINVAL;
> -	}
> -
> -	port = port_num - 1;
> -	gid_idx = hns_get_gid_index(hr_dev, port, index);
> -	if (gid_idx >= HNS_ROCE_MAX_GID_NUM) {
> -		dev_err(dev, "port_num %d index %d illegal! total gid num %d!\n",
> -			port_num, index, HNS_ROCE_MAX_GID_NUM);
> -		return -EINVAL;
> -	}
> -
> -	memcpy(gid->raw, hr_dev->iboe.gid_table[gid_idx].raw,
> -	       HNS_ROCE_GID_SIZE);
> -
>  	return 0;
>  }
>  
> @@ -646,6 +480,8 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
>  	ib_dev->get_link_layer		= hns_roce_get_link_layer;
>  	ib_dev->get_netdev		= hns_roce_get_netdev;
>  	ib_dev->query_gid		= hns_roce_query_gid;
> +	ib_dev->add_gid			= hns_roce_add_gid;
> +	ib_dev->del_gid			= hns_roce_del_gid;
>  	ib_dev->query_pkey		= hns_roce_query_pkey;
>  	ib_dev->alloc_ucontext		= hns_roce_alloc_ucontext;
>  	ib_dev->dealloc_ucontext	= hns_roce_dealloc_ucontext;
> @@ -688,32 +524,22 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
>  		return ret;
>  	}
>  
> -	ret = hns_roce_setup_mtu_gids(hr_dev);
> +	ret = hns_roce_setup_mtu_mac(hr_dev);
>  	if (ret) {
> -		dev_err(dev, "roce_setup_mtu_gids failed!\n");
> -		goto error_failed_setup_mtu_gids;
> +		dev_err(dev, "setup_mtu_mac failed!\n");
> +		goto error_failed_setup_mtu_mac;
>  	}
>  
>  	iboe->nb.notifier_call = hns_roce_netdev_event;
>  	ret = register_netdevice_notifier(&iboe->nb);
>  	if (ret) {
>  		dev_err(dev, "register_netdevice_notifier failed!\n");
> -		goto error_failed_setup_mtu_gids;
> -	}
> -
> -	iboe->nb_inet.notifier_call = hns_roce_inet_event;
> -	ret = register_inetaddr_notifier(&iboe->nb_inet);
> -	if (ret) {
> -		dev_err(dev, "register inet addr notifier failed!\n");
> -		goto error_failed_register_inetaddr_notifier;
> +		goto error_failed_setup_mtu_mac;
>  	}
>  
>  	return 0;
>  
> -error_failed_register_inetaddr_notifier:
> -	unregister_netdevice_notifier(&iboe->nb);
> -
> -error_failed_setup_mtu_gids:
> +error_failed_setup_mtu_mac:
>  	ib_unregister_device(ib_dev);
>  
>  	return ret;
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management
  2016-11-07  8:17   ` Anurup M
@ 2016-11-07 10:04     ` Shaobo Xu
  0 siblings, 0 replies; 23+ messages in thread
From: Shaobo Xu @ 2016-11-07 10:04 UTC (permalink / raw)
  To: Anurup M, Salil Mehta, dledford
  Cc: linux-rdma, netdev, mehta.salil.lnk, linux-kernel, linuxarm



On 2016/11/7 16:17, Anurup M wrote:
> 
> 
> On 11/4/2016 10:06 PM, Salil Mehta wrote:
>> From: Shaobo Xu <xushaobo2@huawei.com>
>>
>> IB core has implemented the calculation of GIDs and the management
>> of GID tables, and it is now responsible to supply query function
>> for GIDs. So the calculation of GIDs and the management of GID
>> tables in the RoCE driver is redundant.
>>
>> The patch is to implement the add_gid/del_gid to set the GIDs in
>> the RoCE driver, remove the redundant calculation and management of
>> GIDs in the notifier call of the net device and the inet, and
>> update the query_gid.
>>
>> Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
>> Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
>> ---
>>  drivers/infiniband/hw/hns/hns_roce_device.h |    2 -
>>  drivers/infiniband/hw/hns/hns_roce_main.c   |  270 +++++----------------------
>>  2 files changed, 48 insertions(+), 224 deletions(-)
>>
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
>> index 593a42a..9ef1cc3 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_device.h
>> +++ b/drivers/infiniband/hw/hns/hns_roce_device.h
>> @@ -429,8 +429,6 @@ struct hns_roce_ib_iboe {
>>  	struct net_device      *netdevs[HNS_ROCE_MAX_PORTS];
>>  	struct notifier_block	nb;
>>  	struct notifier_block	nb_inet;
>> -	/* 16 GID is shared by 6 port in v1 engine. */
>> -	union ib_gid		gid_table[HNS_ROCE_MAX_GID_NUM];
>>  	u8			phy_port[HNS_ROCE_MAX_PORTS];
>>  };
>>  
>> diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
>> index 6770171..795ef97 100644
>> --- a/drivers/infiniband/hw/hns/hns_roce_main.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_main.c
>> @@ -35,52 +35,13 @@
>>  #include <rdma/ib_addr.h>
>>  #include <rdma/ib_smi.h>
>>  #include <rdma/ib_user_verbs.h>
>> +#include <rdma/ib_cache.h>
>>  #include "hns_roce_common.h"
>>  #include "hns_roce_device.h"
>>  #include "hns_roce_user.h"
>>  #include "hns_roce_hem.h"
>>  
>>  /**
>> - * hns_roce_addrconf_ifid_eui48 - Get default gid.
>> - * @eui: eui.
>> - * @vlan_id:  gid
>> - * @dev:  net device
>> - * Description:
>> - *    MAC convert to GID
>> - *        gid[0..7] = fe80 0000 0000 0000
>> - *        gid[8] = mac[0] ^ 2
>> - *        gid[9] = mac[1]
>> - *        gid[10] = mac[2]
>> - *        gid[11] = ff        (VLAN ID high byte (4 MS bits))
>> - *        gid[12] = fe        (VLAN ID low byte)
>> - *        gid[13] = mac[3]
>> - *        gid[14] = mac[4]
>> - *        gid[15] = mac[5]
>> - */
>> -static void hns_roce_addrconf_ifid_eui48(u8 *eui, u16 vlan_id,
>> -					 struct net_device *dev)
>> -{
>> -	memcpy(eui, dev->dev_addr, 3);
>> -	memcpy(eui + 5, dev->dev_addr + 3, 3);
>> -	if (vlan_id < 0x1000) {
>> -		eui[3] = vlan_id >> 8;
>> -		eui[4] = vlan_id & 0xff;
>> -	} else {
>> -		eui[3] = 0xff;
>> -		eui[4] = 0xfe;
>> -	}
>> -	eui[0] ^= 2;
>> -}
>> -
>> -static void hns_roce_make_default_gid(struct net_device *dev, union ib_gid *gid)
>> -{
>> -	memset(gid, 0, sizeof(*gid));
>> -	gid->raw[0] = 0xFE;
>> -	gid->raw[1] = 0x80;
>> -	hns_roce_addrconf_ifid_eui48(&gid->raw[8], 0xffff, dev);
>> -}
>> -
>> -/**
>>   * hns_get_gid_index - Get gid index.
>>   * @hr_dev: pointer to structure hns_roce_dev.
>>   * @port:  port, value range: 0 ~ MAX
>> @@ -96,30 +57,6 @@ int hns_get_gid_index(struct hns_roce_dev *hr_dev, u8 port, int gid_index)
>>  	return gid_index * hr_dev->caps.num_ports + port;
>>  }
>>  
>> -static int hns_roce_set_gid(struct hns_roce_dev *hr_dev, u8 port, int gid_index,
>> -		     union ib_gid *gid)
>> -{
>> -	struct device *dev = &hr_dev->pdev->dev;
>> -	u8 gid_idx = 0;
>> -
>> -	if (gid_index >= hr_dev->caps.gid_table_len[port]) {
>> -		dev_err(dev, "gid_index %d illegal, port %d gid range: 0~%d\n",
>> -			gid_index, port, hr_dev->caps.gid_table_len[port] - 1);
>> -		return -EINVAL;
>> -	}
>> -
>> -	gid_idx = hns_get_gid_index(hr_dev, port, gid_index);
>> -
>> -	if (!memcmp(gid, &hr_dev->iboe.gid_table[gid_idx], sizeof(*gid)))
>> -		return -EINVAL;
>> -
>> -	memcpy(&hr_dev->iboe.gid_table[gid_idx], gid, sizeof(*gid));
>> -
>> -	hr_dev->hw->set_gid(hr_dev, port, gid_index, gid);
>> -
>> -	return 0;
>> -}
>> -
>>  static void hns_roce_set_mac(struct hns_roce_dev *hr_dev, u8 port, u8 *addr)
>>  {
>>  	u8 phy_port;
>> @@ -147,15 +84,44 @@ static void hns_roce_set_mtu(struct hns_roce_dev *hr_dev, u8 port, int mtu)
>>  	hr_dev->hw->set_mtu(hr_dev, phy_port, tmp);
>>  }
>>  
>> -static void hns_roce_update_gids(struct hns_roce_dev *hr_dev, int port)
>> +static int hns_roce_add_gid(struct ib_device *device, u8 port_num,
>> +			    unsigned int index, const union ib_gid *gid,
>> +			    const struct ib_gid_attr *attr, void **context)
>> +{
>> +	struct hns_roce_dev *hr_dev = to_hr_dev(device);
>> +	u8 port = port_num - 1;
>> +	unsigned long flags;
>> +
>> +	if (port >= hr_dev->caps.num_ports)
>> +		return -EINVAL;
>> +
>> +	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
>> +
>> +	hr_dev->hw->set_gid(hr_dev, port, index, (union ib_gid *)gid);
>> +
>> +	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
>> +
>> +	return 0;
>> +}
>> +
>> +static int hns_roce_del_gid(struct ib_device *device, u8 port_num,
>> +			    unsigned int index, void **context)
>>  {
>> -	struct ib_event event;
>> +	struct hns_roce_dev *hr_dev = to_hr_dev(device);
>> +	union ib_gid zgid = { {0} };
>> +	u8 port = port_num - 1;
>> +	unsigned long flags;
>> +
>> +	if (port >= hr_dev->caps.num_ports)
>> +		return -EINVAL;
>>  
>> -	/* Refresh gid in ib_cache */
>> -	event.device = &hr_dev->ib_dev;
>> -	event.element.port_num = port + 1;
>> -	event.event = IB_EVENT_GID_CHANGE;
>> -	ib_dispatch_event(&event);
>> +	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
>> +
>> +	hr_dev->hw->set_gid(hr_dev, port, index, &zgid);
> zgid has the value zero. And after this call, where is zgid used?
When deleting a GID of the specified port, the driver needs to write zgid into the port's GID configuration
register, which means there is no GID at that index for this port.

Best Regards,
>> +
>> +	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
>> +
>> +	return 0;
>>  }
>>  
>>  static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>> @@ -164,8 +130,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>>  	struct device *dev = &hr_dev->pdev->dev;
>>  	struct net_device *netdev;
>>  	unsigned long flags;
>> -	union ib_gid gid;
>> -	int ret = 0;
>>  
>>  	netdev = hr_dev->iboe.netdevs[port];
>>  	if (!netdev) {
>> @@ -181,10 +145,6 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>>  	case NETDEV_REGISTER:
>>  	case NETDEV_CHANGEADDR:
>>  		hns_roce_set_mac(hr_dev, port, netdev->dev_addr);
>> -		hns_roce_make_default_gid(netdev, &gid);
>> -		ret = hns_roce_set_gid(hr_dev, port, 0, &gid);
>> -		if (!ret)
>> -			hns_roce_update_gids(hr_dev, port);
>>  		break;
>>  	case NETDEV_DOWN:
>>  		/*
>> @@ -197,7 +157,7 @@ static int handle_en_event(struct hns_roce_dev *hr_dev, u8 port,
>>  	}
>>  
>>  	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
>> -	return ret;
>> +	return 0;
>>  }
>>  
>>  static int hns_roce_netdev_event(struct notifier_block *self,
>> @@ -224,118 +184,17 @@ static int hns_roce_netdev_event(struct notifier_block *self,
>>  	return NOTIFY_DONE;
>>  }
>>  
>> -static void hns_roce_addr_event(int event, struct net_device *event_netdev,
>> -				struct hns_roce_dev *hr_dev, union ib_gid *gid)
>> -{
>> -	struct hns_roce_ib_iboe *iboe = NULL;
>> -	int gid_table_len = 0;
>> -	unsigned long flags;
>> -	union ib_gid zgid;
>> -	u8 gid_idx = 0;
>> -	u8 port = 0;
>> -	int i = 0;
>> -	int free;
>> -	struct net_device *real_dev = rdma_vlan_dev_real_dev(event_netdev) ?
>> -				      rdma_vlan_dev_real_dev(event_netdev) :
>> -				      event_netdev;
>> -
>> -	if (event != NETDEV_UP && event != NETDEV_DOWN)
>> -		return;
>> -
>> -	iboe = &hr_dev->iboe;
>> -	while (port < hr_dev->caps.num_ports) {
>> -		if (real_dev == iboe->netdevs[port])
>> -			break;
>> -		port++;
>> -	}
>> -
>> -	if (port >= hr_dev->caps.num_ports) {
>> -		dev_dbg(&hr_dev->pdev->dev, "can't find netdev\n");
>> -		return;
>> -	}
>> -
>> -	memset(zgid.raw, 0, sizeof(zgid.raw));
>> -	free = -1;
>> -	gid_table_len = hr_dev->caps.gid_table_len[port];
>> -
>> -	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
>> -
>> -	for (i = 0; i < gid_table_len; i++) {
>> -		gid_idx = hns_get_gid_index(hr_dev, port, i);
>> -		if (!memcmp(gid->raw, iboe->gid_table[gid_idx].raw,
>> -			    sizeof(gid->raw)))
>> -			break;
>> -		if (free < 0 && !memcmp(zgid.raw,
>> -			iboe->gid_table[gid_idx].raw, sizeof(zgid.raw)))
>> -			free = i;
>> -	}
>> -
>> -	if (i >= gid_table_len) {
>> -		if (free < 0) {
>> -			spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
>> -			dev_dbg(&hr_dev->pdev->dev,
>> -				"gid_index overflow, port(%d)\n", port);
>> -			return;
>> -		}
>> -		if (!hns_roce_set_gid(hr_dev, port, free, gid))
>> -			hns_roce_update_gids(hr_dev, port);
>> -	} else if (event == NETDEV_DOWN) {
>> -		if (!hns_roce_set_gid(hr_dev, port, i, &zgid))
>> -			hns_roce_update_gids(hr_dev, port);
>> -	}
>> -
>> -	spin_unlock_irqrestore(&hr_dev->iboe.lock, flags);
>> -}
>> -
>> -static int hns_roce_inet_event(struct notifier_block *self, unsigned long event,
>> -			       void *ptr)
>> -{
>> -	struct in_ifaddr *ifa = ptr;
>> -	struct hns_roce_dev *hr_dev;
>> -	struct net_device *dev = ifa->ifa_dev->dev;
>> -	union ib_gid gid;
>> -
>> -	ipv6_addr_set_v4mapped(ifa->ifa_address, (struct in6_addr *)&gid);
>> -
>> -	hr_dev = container_of(self, struct hns_roce_dev, iboe.nb_inet);
>> -
>> -	hns_roce_addr_event(event, dev, hr_dev, &gid);
>> -
>> -	return NOTIFY_DONE;
>> -}
>> -
>> -static int hns_roce_setup_mtu_gids(struct hns_roce_dev *hr_dev)
>> +static int hns_roce_setup_mtu_mac(struct hns_roce_dev *hr_dev)
>>  {
>> -	struct in_ifaddr *ifa_list = NULL;
>> -	union ib_gid gid = {{0} };
>> -	u32 ipaddr = 0;
>> -	int index = 0;
>> -	int ret = 0;
>> -	u8 i = 0;
>> +	u8 i;
>>  
>>  	for (i = 0; i < hr_dev->caps.num_ports; i++) {
>>  		hns_roce_set_mtu(hr_dev, i,
>>  				 ib_mtu_enum_to_int(hr_dev->caps.max_mtu));
>>  		hns_roce_set_mac(hr_dev, i, hr_dev->iboe.netdevs[i]->dev_addr);
>> -
>> -		if (hr_dev->iboe.netdevs[i]->ip_ptr) {
>> -			ifa_list = hr_dev->iboe.netdevs[i]->ip_ptr->ifa_list;
>> -			index = 1;
>> -			while (ifa_list) {
>> -				ipaddr = ifa_list->ifa_address;
>> -				ipv6_addr_set_v4mapped(ipaddr,
>> -						       (struct in6_addr *)&gid);
>> -				ret = hns_roce_set_gid(hr_dev, i, index, &gid);
>> -				if (ret)
>> -					break;
>> -				index++;
>> -				ifa_list = ifa_list->ifa_next;
>> -			}
>> -			hns_roce_update_gids(hr_dev, i);
>> -		}
>>  	}
>>  
>> -	return ret;
>> +	return 0;
>>  }
>>  
>>  static int hns_roce_query_device(struct ib_device *ib_dev,
>> @@ -444,31 +303,6 @@ static enum rdma_link_layer hns_roce_get_link_layer(struct ib_device *device,
>>  static int hns_roce_query_gid(struct ib_device *ib_dev, u8 port_num, int index,
>>  			      union ib_gid *gid)
>>  {
>> -	struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
>> -	struct device *dev = &hr_dev->pdev->dev;
>> -	u8 gid_idx = 0;
>> -	u8 port;
>> -
>> -	if (port_num < 1 || port_num > hr_dev->caps.num_ports ||
>> -	    index >= hr_dev->caps.gid_table_len[port_num - 1]) {
>> -		dev_err(dev,
>> -			"port_num %d index %d illegal! correct range: port_num 1~%d index 0~%d!\n",
>> -			port_num, index, hr_dev->caps.num_ports,
>> -			hr_dev->caps.gid_table_len[port_num - 1] - 1);
>> -		return -EINVAL;
>> -	}
>> -
>> -	port = port_num - 1;
>> -	gid_idx = hns_get_gid_index(hr_dev, port, index);
>> -	if (gid_idx >= HNS_ROCE_MAX_GID_NUM) {
>> -		dev_err(dev, "port_num %d index %d illegal! total gid num %d!\n",
>> -			port_num, index, HNS_ROCE_MAX_GID_NUM);
>> -		return -EINVAL;
>> -	}
>> -
>> -	memcpy(gid->raw, hr_dev->iboe.gid_table[gid_idx].raw,
>> -	       HNS_ROCE_GID_SIZE);
>> -
>>  	return 0;
>>  }
>>  
>> @@ -646,6 +480,8 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
>>  	ib_dev->get_link_layer		= hns_roce_get_link_layer;
>>  	ib_dev->get_netdev		= hns_roce_get_netdev;
>>  	ib_dev->query_gid		= hns_roce_query_gid;
>> +	ib_dev->add_gid			= hns_roce_add_gid;
>> +	ib_dev->del_gid			= hns_roce_del_gid;
>>  	ib_dev->query_pkey		= hns_roce_query_pkey;
>>  	ib_dev->alloc_ucontext		= hns_roce_alloc_ucontext;
>>  	ib_dev->dealloc_ucontext	= hns_roce_dealloc_ucontext;
>> @@ -688,32 +524,22 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
>>  		return ret;
>>  	}
>>  
>> -	ret = hns_roce_setup_mtu_gids(hr_dev);
>> +	ret = hns_roce_setup_mtu_mac(hr_dev);
>>  	if (ret) {
>> -		dev_err(dev, "roce_setup_mtu_gids failed!\n");
>> -		goto error_failed_setup_mtu_gids;
>> +		dev_err(dev, "setup_mtu_mac failed!\n");
>> +		goto error_failed_setup_mtu_mac;
>>  	}
>>  
>>  	iboe->nb.notifier_call = hns_roce_netdev_event;
>>  	ret = register_netdevice_notifier(&iboe->nb);
>>  	if (ret) {
>>  		dev_err(dev, "register_netdevice_notifier failed!\n");
>> -		goto error_failed_setup_mtu_gids;
>> -	}
>> -
>> -	iboe->nb_inet.notifier_call = hns_roce_inet_event;
>> -	ret = register_inetaddr_notifier(&iboe->nb_inet);
>> -	if (ret) {
>> -		dev_err(dev, "register inet addr notifier failed!\n");
>> -		goto error_failed_register_inetaddr_notifier;
>> +		goto error_failed_setup_mtu_mac;
>>  	}
>>  
>>  	return 0;
>>  
>> -error_failed_register_inetaddr_notifier:
>> -	unregister_netdevice_notifier(&iboe->nb);
>> -
>> -error_failed_setup_mtu_gids:
>> +error_failed_setup_mtu_mac:
>>  	ib_unregister_device(ib_dev);
>>  
>>  	return ret;
>>
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1
  2016-11-07  5:45   ` Anurup M
@ 2016-11-07 11:45     ` Salil Mehta
  0 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-07 11:45 UTC (permalink / raw)
  To: Anurup m, dledford
  Cc: linux-rdma, netdev, mehta.salil.lnk, linux-kernel, Linuxarm



> -----Original Message-----
> From: Anurup m
> Sent: Monday, November 07, 2016 5:46 AM
> To: Salil Mehta; dledford@redhat.com
> Cc: linux-rdma@vger.kernel.org; netdev@vger.kernel.org;
> mehta.salil.lnk@gmail.com; linux-kernel@vger.kernel.org; Linuxarm
> Subject: Re: [PATCH for-next 01/11] IB/hns: Add the interface for
> querying QP1
> 
> 
> 
> On 11/4/2016 10:06 PM, Salil Mehta wrote:
> > From: Lijun Ou <oulijun@huawei.com>
> >
> > In old code, It only added the interface for querying non-specific
> > QP. This patch mainly adds an interface for querying QP1.
> >
> > Signed-off-by: Lijun Ou <oulijun@huawei.com>
> > Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > ---
> >  drivers/infiniband/hw/hns/hns_roce_hw_v1.c |   87
> +++++++++++++++++++++++++++-
> >  drivers/infiniband/hw/hns/hns_roce_hw_v1.h |    6 +-
> >  2 files changed, 90 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> > index 71232e5..ca8b784 100644
> > --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> > +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.c
> > @@ -2630,8 +2630,82 @@ static int hns_roce_v1_query_qpc(struct
> hns_roce_dev *hr_dev,
> >  	return ret;
> >  }
> >
> > -int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr
> *qp_attr,
> > -			 int qp_attr_mask, struct ib_qp_init_attr
> *qp_init_attr)
> > +static int hns_roce_v1_q_sqp(struct ib_qp *ibqp, struct ib_qp_attr
> *qp_attr,
> > +			     int qp_attr_mask,
> > +			     struct ib_qp_init_attr *qp_init_attr)
> > +{
> > +	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> > +	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> > +	struct hns_roce_sqp_context *context;
> > +	u32 addr;
> > +
> > +	context = kzalloc(sizeof(*context), GFP_KERNEL);
> > +	if (!context)
> > +		return -ENOMEM;
> > +
> Do we really need a dynamic allocation here? The size is fixed and the memory is
> only used inside this function. I think it is better to use a static allocation.
Agreed. Somehow missed this in the internal review. Will change!

Thanks
Salil
> 
> > +	mutex_lock(&hr_qp->mutex);
> > +
> > +	if (hr_qp->state == IB_QPS_RESET) {
> I think alloc can be moved after this check (if dynamic alloc is really
> needed).
> > +		qp_attr->qp_state = IB_QPS_RESET;
> > +		goto done;
> > +	}
> > +
> > +	addr = ROCEE_QP1C_CFG0_0_REG + hr_qp->port * sizeof(*context);
> > +	context->qp1c_bytes_4 = roce_read(hr_dev, addr);
> > +	context->sq_rq_bt_l = roce_read(hr_dev, addr + 1);
> > +	context->qp1c_bytes_12 = roce_read(hr_dev, addr + 2);
> > +	context->qp1c_bytes_16 = roce_read(hr_dev, addr + 3);
> > +	context->qp1c_bytes_20 = roce_read(hr_dev, addr + 4);
> > +	context->cur_rq_wqe_ba_l = roce_read(hr_dev, addr + 5);
> > +	context->qp1c_bytes_28 = roce_read(hr_dev, addr + 6);
> > +	context->qp1c_bytes_32 = roce_read(hr_dev, addr + 7);
> > +	context->cur_sq_wqe_ba_l = roce_read(hr_dev, addr + 8);
> > +	context->qp1c_bytes_40 = roce_read(hr_dev, addr + 9);
> > +
> > +	hr_qp->state = roce_get_field(context->qp1c_bytes_4,
> > +				      QP1C_BYTES_4_QP_STATE_M,
> > +				      QP1C_BYTES_4_QP_STATE_S);
> > +	qp_attr->qp_state	= hr_qp->state;
> > +	qp_attr->path_mtu	= IB_MTU_256;
> > +	qp_attr->path_mig_state	= IB_MIG_ARMED;
> > +	qp_attr->qkey		= QKEY_VAL;
> > +	qp_attr->rq_psn		= 0;
> > +	qp_attr->sq_psn		= 0;
> > +	qp_attr->dest_qp_num	= 1;
> > +	qp_attr->qp_access_flags = 6;
> > +
> > +	qp_attr->pkey_index = roce_get_field(context->qp1c_bytes_20,
> > +					     QP1C_BYTES_20_PKEY_IDX_M,
> > +					     QP1C_BYTES_20_PKEY_IDX_S);
> > +	qp_attr->port_num = hr_qp->port + 1;
> > +	qp_attr->sq_draining = 0;
> > +	qp_attr->max_rd_atomic = 0;
> > +	qp_attr->max_dest_rd_atomic = 0;
> > +	qp_attr->min_rnr_timer = 0;
> > +	qp_attr->timeout = 0;
> > +	qp_attr->retry_cnt = 0;
> > +	qp_attr->rnr_retry = 0;
> > +	qp_attr->alt_timeout = 0;
> > +
> > +done:
> > +	qp_attr->cur_qp_state = qp_attr->qp_state;
> > +	qp_attr->cap.max_recv_wr = hr_qp->rq.wqe_cnt;
> > +	qp_attr->cap.max_recv_sge = hr_qp->rq.max_gs;
> > +	qp_attr->cap.max_send_wr = hr_qp->sq.wqe_cnt;
> > +	qp_attr->cap.max_send_sge = hr_qp->sq.max_gs;
> > +	qp_attr->cap.max_inline_data = 0;
> > +	qp_init_attr->cap = qp_attr->cap;
> > +	qp_init_attr->create_flags = 0;
> > +
> > +	mutex_unlock(&hr_qp->mutex);
> > +	kfree(context);
> > +
> > +	return 0;
> > +}
> > +
> > +static int hns_roce_v1_q_qp(struct ib_qp *ibqp, struct ib_qp_attr
> *qp_attr,
> > +			    int qp_attr_mask,
> > +			    struct ib_qp_init_attr *qp_init_attr)
> >  {
> >  	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
> >  	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> > @@ -2767,6 +2841,15 @@ int hns_roce_v1_query_qp(struct ib_qp *ibqp,
> struct ib_qp_attr *qp_attr,
> >  	return ret;
> >  }
> >
> > +int hns_roce_v1_query_qp(struct ib_qp *ibqp, struct ib_qp_attr
> *qp_attr,
> > +			 int qp_attr_mask, struct ib_qp_init_attr
> *qp_init_attr)
> > +{
> > +	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
> > +
> > +	return hr_qp->doorbell_qpn <= 1 ?
> > +		hns_roce_v1_q_sqp(ibqp, qp_attr, qp_attr_mask,
> qp_init_attr) :
> > +		hns_roce_v1_q_qp(ibqp, qp_attr, qp_attr_mask,
> qp_init_attr);
> > +}
> >  static void hns_roce_v1_destroy_qp_common(struct hns_roce_dev
> *hr_dev,
> >  					  struct hns_roce_qp *hr_qp,
> >  					  int is_user)
> > diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> > index 539b0a3b..2e1878b 100644
> > --- a/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> > +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v1.h
> > @@ -480,13 +480,17 @@ struct hns_roce_sqp_context {
> >  	u32 qp1c_bytes_12;
> >  	u32 qp1c_bytes_16;
> >  	u32 qp1c_bytes_20;
> > -	u32 qp1c_bytes_28;
> >  	u32 cur_rq_wqe_ba_l;
> > +	u32 qp1c_bytes_28;
> >  	u32 qp1c_bytes_32;
> >  	u32 cur_sq_wqe_ba_l;
> >  	u32 qp1c_bytes_40;
> >  };
> >
> > +#define QP1C_BYTES_4_QP_STATE_S 0
> > +#define QP1C_BYTES_4_QP_STATE_M   \
> > +	(((1UL << 3) - 1) << QP1C_BYTES_4_QP_STATE_S)
> > +
> >  #define QP1C_BYTES_4_SQ_WQE_SHIFT_S 8
> >  #define QP1C_BYTES_4_SQ_WQE_SHIFT_M   \
> >  	(((1UL << 4) - 1) << QP1C_BYTES_4_SQ_WQE_SHIFT_S)
> >

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-04 16:36 ` [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs Salil Mehta
@ 2016-11-09  7:21   ` Leon Romanovsky
  2016-11-15 15:52     ` Salil Mehta
  0 siblings, 1 reply; 23+ messages in thread
From: Leon Romanovsky @ 2016-11-09  7:21 UTC (permalink / raw)
  To: Salil Mehta
  Cc: dledford, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm, Ping Zhang

[-- Attachment #1: Type: text/plain, Size: 1501 bytes --]

On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
>
> This patch modified the logic of allocating memory using APIs in
> hns RoCE driver. We used kcalloc instead of kmalloc_array and
> bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> memory.
>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> ---
>  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
> index fb87883..d3dfb5f 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct hns_roce_buddy *buddy, int max_order)
>
>  	for (i = 0; i <= buddy->max_order; ++i) {
>  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> -		buddy->bits[i] = kmalloc_array(s, sizeof(long), GFP_KERNEL);
> -		if (!buddy->bits[i])
> -			goto err_out_free;
> -
> -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order - i));
> +		buddy->bits[i] = kcalloc(s, sizeof(long), GFP_KERNEL);
> +		if (!buddy->bits[i]) {
> +			buddy->bits[i] = vzalloc(s * sizeof(long));

I wonder, why don't you use vzalloc directly instead of falling back to it from kcalloc?

> +			if (!buddy->bits[i])
> +				goto err_out_free;
> +		}
>  	}

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode.
  2016-11-04 16:36 ` [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode Salil Mehta
@ 2016-11-09  7:24   ` Leon Romanovsky
  0 siblings, 0 replies; 23+ messages in thread
From: Leon Romanovsky @ 2016-11-09  7:24 UTC (permalink / raw)
  To: Salil Mehta
  Cc: dledford, xavier.huwei, oulijun, mehta.salil.lnk, linux-rdma,
	netdev, linux-kernel, linuxarm

[-- Attachment #1: Type: text/plain, Size: 650 bytes --]

On Fri, Nov 04, 2016 at 04:36:31PM +0000, Salil Mehta wrote:
> From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
>
> When using CM to establish connections, qp number that was freed
> just now will be rejected by ib core. To fix these problem, We
> change qpn allocation to round-robin mode. We added the round-robin
> mode for allocating resources using bitmap. We use round-robin mode
> for qp number and non round-robing mode for other resources like
> cq number, pd number etc.
>
> Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>

Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
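
The round-robin allocation described in the commit message amounts to
remembering where the previous search stopped and starting the next one
from there, for example (a minimal sketch with a hypothetical helper,
not the patch itself):

	#include <linux/bitmap.h>
	#include <linux/bitops.h>

	/* hypothetical helper: allocate one bit from 'bitmap' in round-robin order */
	static unsigned long rr_bitmap_alloc(unsigned long *bitmap, unsigned long max,
					     unsigned long *last)
	{
		unsigned long obj;

		/* start searching just after the previously allocated entry ... */
		obj = find_next_zero_bit(bitmap, max, *last);
		if (obj >= max)
			/* ... and wrap around once the end is reached */
			obj = find_first_zero_bit(bitmap, max);
		if (obj >= max)
			return max;	/* bitmap is full */

		set_bit(obj, bitmap);
		*last = (obj + 1 == max) ? 0 : obj + 1;

		return obj;
	}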

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-09  7:21   ` Leon Romanovsky
@ 2016-11-15 15:52     ` Salil Mehta
  2016-11-16  8:36       ` Leon Romanovsky
  0 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-15 15:52 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: dledford, Huwei (Xavier),
	oulijun, mehta.salil.lnk, linux-rdma, netdev, linux-kernel,
	Linuxarm, Zhangping (ZP)

> -----Original Message-----
> From: Leon Romanovsky [mailto:leon@kernel.org]
> Sent: Wednesday, November 09, 2016 7:22 AM
> To: Salil Mehta
> Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> Zhangping (ZP)
> Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> allocating memory using APIs
> 
> On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> > From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
> >
> > This patch modified the logic of allocating memory using APIs in
> > hns RoCE driver. We used kcalloc instead of kmalloc_array and
> > bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> > memory.
> >
> > Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > ---
> >  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
> >  1 file changed, 8 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c
> b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > index fb87883..d3dfb5f 100644
> > --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct
> hns_roce_buddy *buddy, int max_order)
> >
> >  	for (i = 0; i <= buddy->max_order; ++i) {
> >  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> > -		buddy->bits[i] = kmalloc_array(s, sizeof(long),
> GFP_KERNEL);
> > -		if (!buddy->bits[i])
> > -			goto err_out_free;
> > -
> > -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order - i));
> > +		buddy->bits[i] = kcalloc(s, sizeof(long), GFP_KERNEL);
> > +		if (!buddy->bits[i]) {
> > +			buddy->bits[i] = vzalloc(s * sizeof(long));
> 
> I wonder, why don't you use directly vzalloc instead of kcalloc
> fallback?
As we know, we will have physically contiguous pages if the kcalloc
call succeeds. This gives us a chance at better performance than
allocations which are only virtually contiguous through vzalloc().
Therefore, the latter has only been used as a fallback when our memory
request cannot be satisfied by kcalloc.

Are you suggesting that there will not be much performance penalty
if we just use vzalloc?

> 
> > +			if (!buddy->bits[i])
> > +				goto err_out_free;
> > +		}
> >  	}
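
For reference, a minimal sketch of the pattern under discussion, including
the matching free path that the hunk above does not show. The helper names
are illustrative, not the driver's actual functions: memory that may have
come from either allocator has to be released by the one that satisfied it
(the kernel's kvfree() performs this same dispatch).

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/slab.h>		/* kcalloc(), kfree() */
#include <linux/vmalloc.h>	/* vzalloc(), vfree() */

/* Illustrative helpers: zeroed array of 's' longs, physically
 * contiguous when possible, virtually contiguous otherwise. */
static unsigned long *bits_alloc(size_t s)
{
	unsigned long *bits;

	bits = kcalloc(s, sizeof(long), GFP_KERNEL);
	if (!bits)
		bits = vzalloc(s * sizeof(long));
	return bits;
}

static void bits_free(unsigned long *bits)
{
	if (is_vmalloc_addr(bits))
		vfree(bits);
	else
		kfree(bits);
}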

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-15 15:52     ` Salil Mehta
@ 2016-11-16  8:36       ` Leon Romanovsky
  2016-11-21 16:12         ` Salil Mehta
  0 siblings, 1 reply; 23+ messages in thread
From: Leon Romanovsky @ 2016-11-16  8:36 UTC (permalink / raw)
  To: Salil Mehta
  Cc: dledford, Huwei (Xavier),
	oulijun, mehta.salil.lnk, linux-rdma, netdev, linux-kernel,
	Linuxarm, Zhangping (ZP)

[-- Attachment #1: Type: text/plain, Size: 3062 bytes --]

On Tue, Nov 15, 2016 at 03:52:46PM +0000, Salil Mehta wrote:
> > -----Original Message-----
> > From: Leon Romanovsky [mailto:leon@kernel.org]
> > Sent: Wednesday, November 09, 2016 7:22 AM
> > To: Salil Mehta
> > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > Zhangping (ZP)
> > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> > allocating memory using APIs
> >
> > On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> > > From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
> > >
> > > This patch modified the logic of allocating memory using APIs in
> > > hns RoCE driver. We used kcalloc instead of kmalloc_array and
> > > bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> > > memory.
> > >
> > > Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > > Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> > > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > > ---
> > >  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
> > >  1 file changed, 8 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > index fb87883..d3dfb5f 100644
> > > --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct
> > hns_roce_buddy *buddy, int max_order)
> > >
> > >  	for (i = 0; i <= buddy->max_order; ++i) {
> > >  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> > > -		buddy->bits[i] = kmalloc_array(s, sizeof(long),
> > GFP_KERNEL);
> > > -		if (!buddy->bits[i])
> > > -			goto err_out_free;
> > > -
> > > -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order - i));
> > > +		buddy->bits[i] = kcalloc(s, sizeof(long), GFP_KERNEL);
> > > +		if (!buddy->bits[i]) {
> > > +			buddy->bits[i] = vzalloc(s * sizeof(long));
> >
> > I wonder, why don't you use directly vzalloc instead of kcalloc
> > fallback?
> As we know we will have physical contiguous pages if the kcalloc
> call succeeds. This will give us a chance to have better performance
> over the allocations which are just virtually contiguous through the
> function vzalloc(). Therefore, later has only been used as a fallback
> when our memory request cannot be entertained through kcalloc.
>
> Are you suggesting that there will not be much performance penalty
> if we use just vzalloc ?

Not exactly.
I asked because we have similar code in our drivers and this
construction looks strange to me.

1. If performance is critical, we will use kmalloc.
2. If performance is not critical, we will use vmalloc.

But in this case, such a construction suggests that we can live with
vmalloc performance and that the kmalloc allocation is not really needed.

In your specific case, I'm not sure that kcalloc will ever fail.
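
For a sense of the sizes involved (max_order = 20 below is an assumption
for illustration, not a value taken from the driver): the level-0 bitmap
is the largest and shrinks by half per level, so only the first few levels
are large enough for kcalloc to be at any real risk of failing.

#include <stdio.h>

/* Worked example of the per-level bitmap sizes in hns_roce_buddy_init(),
 * with a hypothetical max_order of 20. */
#define BITS_PER_LONG	 (8 * sizeof(long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
	unsigned long max_order = 20;	/* assumption for illustration */

	for (unsigned long i = 0; i <= max_order; ++i) {
		unsigned long s = BITS_TO_LONGS(1UL << (max_order - i));
		printf("i=%2lu: %7lu bytes\n", i, s * sizeof(long));
	}
	/* i=0 prints 131072 bytes (128 KiB): well under kmalloc's size
	 * limit, but it needs 32 physically contiguous 4 KiB pages
	 * (an order-5 chunk), which can fail on a fragmented system. */
	return 0;
}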

Thanks


>
> >
> > > +			if (!buddy->bits[i])
> > > +				goto err_out_free;
> > > +		}
> > >  	}

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-16  8:36       ` Leon Romanovsky
@ 2016-11-21 16:12         ` Salil Mehta
  2016-11-21 17:14           ` Leon Romanovsky
  0 siblings, 1 reply; 23+ messages in thread
From: Salil Mehta @ 2016-11-21 16:12 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: dledford, Huwei (Xavier),
	oulijun, mehta.salil.lnk, linux-rdma, netdev, linux-kernel,
	Linuxarm, Zhangping (ZP)

> -----Original Message-----
> From: Leon Romanovsky [mailto:leon@kernel.org]
> Sent: Wednesday, November 16, 2016 8:36 AM
> To: Salil Mehta
> Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> Zhangping (ZP)
> Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> allocating memory using APIs
> 
> On Tue, Nov 15, 2016 at 03:52:46PM +0000, Salil Mehta wrote:
> > > -----Original Message-----
> > > From: Leon Romanovsky [mailto:leon@kernel.org]
> > > Sent: Wednesday, November 09, 2016 7:22 AM
> > > To: Salil Mehta
> > > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > > Zhangping (ZP)
> > > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> > > allocating memory using APIs
> > >
> > > On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> > > > From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
> > > >
> > > > This patch modified the logic of allocating memory using APIs in
> > > > hns RoCE driver. We used kcalloc instead of kmalloc_array and
> > > > bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> > > > memory.
> > > >
> > > > Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > > > Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> > > > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > > > ---
> > > >  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
> > > >  1 file changed, 8 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > index fb87883..d3dfb5f 100644
> > > > --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct
> > > hns_roce_buddy *buddy, int max_order)
> > > >
> > > >  	for (i = 0; i <= buddy->max_order; ++i) {
> > > >  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> > > > -		buddy->bits[i] = kmalloc_array(s, sizeof(long),
> > > GFP_KERNEL);
> > > > -		if (!buddy->bits[i])
> > > > -			goto err_out_free;
> > > > -
> > > > -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order -
> i));
> > > > +		buddy->bits[i] = kcalloc(s, sizeof(long),
> GFP_KERNEL);
> > > > +		if (!buddy->bits[i]) {
> > > > +			buddy->bits[i] = vzalloc(s * sizeof(long));
> > >
> > > I wonder, why don't you use directly vzalloc instead of kcalloc
> > > fallback?
> > As we know we will have physical contiguous pages if the kcalloc
> > call succeeds. This will give us a chance to have better performance
> > over the allocations which are just virtually contiguous through the
> > function vzalloc(). Therefore, later has only been used as a fallback
> > when our memory request cannot be entertained through kcalloc.
> >
> > Are you suggesting that there will not be much performance penalty
> > if we use just vzalloc ?
> 
> Not exactly,
> I asked it, because we have similar code in our drivers and this
> construction looks strange to me.
> 
> 1. If performance is critical, we will use kmalloc.
> 2. If performance is not critical, we will use vmalloc.
> 
> But in this case, such construction shows me that we can live with
> vmalloc performance and kmalloc allocation are not really needed.
> 
> In your specific case, I'm not sure that kcalloc will ever fail.
Performance is definitely critical here, though I agree this is a bit
of an unusual way of allocating memory. In fact, we were encountering
memory allocation failures using kmalloc (as you can see, the amount
allocated is on the higher side and grows exponentially), so we ended
up using vmalloc as a fallback - it is a very naïve allocation scheme.

Maybe we need to rethink this allocation scheme? Also, I can pull
back this particular patch for now, or just live with vzalloc() until
we figure out a proper solution to this?

> 
> Thanks
> 
> 
> >
> > >
> > > > +			if (!buddy->bits[i])
> > > > +				goto err_out_free;
> > > > +		}
> > > >  	}

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-21 16:12         ` Salil Mehta
@ 2016-11-21 17:14           ` Leon Romanovsky
  2016-11-21 20:20             ` Salil Mehta
  0 siblings, 1 reply; 23+ messages in thread
From: Leon Romanovsky @ 2016-11-21 17:14 UTC (permalink / raw)
  To: Salil Mehta
  Cc: dledford, Huwei (Xavier),
	oulijun, mehta.salil.lnk, linux-rdma, netdev, linux-kernel,
	Linuxarm, Zhangping (ZP)

[-- Attachment #1: Type: text/plain, Size: 4775 bytes --]

On Mon, Nov 21, 2016 at 04:12:38PM +0000, Salil Mehta wrote:
> > -----Original Message-----
> > From: Leon Romanovsky [mailto:leon@kernel.org]
> > Sent: Wednesday, November 16, 2016 8:36 AM
> > To: Salil Mehta
> > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > Zhangping (ZP)
> > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> > allocating memory using APIs
> >
> > On Tue, Nov 15, 2016 at 03:52:46PM +0000, Salil Mehta wrote:
> > > > -----Original Message-----
> > > > From: Leon Romanovsky [mailto:leon@kernel.org]
> > > > Sent: Wednesday, November 09, 2016 7:22 AM
> > > > To: Salil Mehta
> > > > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > > > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > > > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > > > Zhangping (ZP)
> > > > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> > > > allocating memory using APIs
> > > >
> > > > On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> > > > > From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
> > > > >
> > > > > This patch modified the logic of allocating memory using APIs in
> > > > > hns RoCE driver. We used kcalloc instead of kmalloc_array and
> > > > > bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> > > > > memory.
> > > > >
> > > > > Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > > > > Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> > > > > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > > > > ---
> > > > >  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-------
> > > > >  1 file changed, 8 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > index fb87883..d3dfb5f 100644
> > > > > --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct
> > > > hns_roce_buddy *buddy, int max_order)
> > > > >
> > > > >  	for (i = 0; i <= buddy->max_order; ++i) {
> > > > >  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> > > > > -		buddy->bits[i] = kmalloc_array(s, sizeof(long),
> > > > GFP_KERNEL);
> > > > > -		if (!buddy->bits[i])
> > > > > -			goto err_out_free;
> > > > > -
> > > > > -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order -
> > i));
> > > > > +		buddy->bits[i] = kcalloc(s, sizeof(long),
> > GFP_KERNEL);
> > > > > +		if (!buddy->bits[i]) {
> > > > > +			buddy->bits[i] = vzalloc(s * sizeof(long));
> > > >
> > > > I wonder, why don't you use directly vzalloc instead of kcalloc
> > > > fallback?
> > > As we know we will have physical contiguous pages if the kcalloc
> > > call succeeds. This will give us a chance to have better performance
> > > over the allocations which are just virtually contiguous through the
> > > function vzalloc(). Therefore, later has only been used as a fallback
> > > when our memory request cannot be entertained through kcalloc.
> > >
> > > Are you suggesting that there will not be much performance penalty
> > > if we use just vzalloc ?
> >
> > Not exactly,
> > I asked it, because we have similar code in our drivers and this
> > construction looks strange to me.
> >
> > 1. If performance is critical, we will use kmalloc.
> > 2. If performance is not critical, we will use vmalloc.
> >
> > But in this case, such construction shows me that we can live with
> > vmalloc performance and kmalloc allocation are not really needed.
> >
> > In your specific case, I'm not sure that kcalloc will ever fail.
> Performance is definitely critical here. Though, I agree this is bit
> unusual way of memory allocation. In actual, we were encountering
> memory alloc failures using kmalloc (if you see allocation amount
> is on the higher side and is exponential) so we ended up using
> vmalloc as fall back - It is very naïve allocation scheme.

I understand it; we did the same, see our mlx5_vzalloc call.
BTW, we used the __GFP_NOWARN flag, which you should consider using
in your case too.
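
Concretely, the suggestion would slot into the hunk quoted above along
these lines (a sketch only, not the actual v3 patch):

	buddy->bits[i] = kcalloc(s, sizeof(long),
				 GFP_KERNEL | __GFP_NOWARN);
	if (!buddy->bits[i]) {
		/* The kcalloc is expected to fail occasionally for the
		 * large levels; __GFP_NOWARN suppresses the allocation
		 * failure splat before the vzalloc() fallback is tried. */
		buddy->bits[i] = vzalloc(s * sizeof(long));
		if (!buddy->bits[i])
			goto err_out_free;
	}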

>
> Maybe we need to rethink this allocation scheme part? Also, I can pull
> back this particular patch for now or just live with vzalloc() till
> we figure out proper solution to this?

It is up to you; I don't think that you should drop it. AFAIK, there is
no other proper solution.

>
> >
> > Thanks
> >
> >
> > >
> > > >
> > > > > +			if (!buddy->bits[i])
> > > > > +				goto err_out_free;
> > > > > +		}
> > > > >  	}

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs
  2016-11-21 17:14           ` Leon Romanovsky
@ 2016-11-21 20:20             ` Salil Mehta
  0 siblings, 0 replies; 23+ messages in thread
From: Salil Mehta @ 2016-11-21 20:20 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: dledford, Huwei (Xavier),
	oulijun, mehta.salil.lnk, linux-rdma, netdev, linux-kernel,
	Linuxarm, Zhangping (ZP)

> -----Original Message-----
> From: netdev-owner@vger.kernel.org [mailto:netdev-
> owner@vger.kernel.org] On Behalf Of Leon Romanovsky
> Sent: Monday, November 21, 2016 5:14 PM
> To: Salil Mehta
> Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> Zhangping (ZP)
> Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> allocating memory using APIs
> 
> On Mon, Nov 21, 2016 at 04:12:38PM +0000, Salil Mehta wrote:
> > > -----Original Message-----
> > > From: Leon Romanovsky [mailto:leon@kernel.org]
> > > Sent: Wednesday, November 16, 2016 8:36 AM
> > > To: Salil Mehta
> > > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > > Zhangping (ZP)
> > > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic of
> > > allocating memory using APIs
> > >
> > > On Tue, Nov 15, 2016 at 03:52:46PM +0000, Salil Mehta wrote:
> > > > > -----Original Message-----
> > > > > From: Leon Romanovsky [mailto:leon@kernel.org]
> > > > > Sent: Wednesday, November 09, 2016 7:22 AM
> > > > > To: Salil Mehta
> > > > > Cc: dledford@redhat.com; Huwei (Xavier); oulijun;
> > > > > mehta.salil.lnk@gmail.com; linux-rdma@vger.kernel.org;
> > > > > netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Linuxarm;
> > > > > Zhangping (ZP)
> > > > > Subject: Re: [PATCH for-next 03/11] IB/hns: Optimize the logic
> of
> > > > > allocating memory using APIs
> > > > >
> > > > > On Fri, Nov 04, 2016 at 04:36:25PM +0000, Salil Mehta wrote:
> > > > > > From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>
> > > > > >
> > > > > > This patch modified the logic of allocating memory using APIs
> in
> > > > > > hns RoCE driver. We used kcalloc instead of kmalloc_array and
> > > > > > bitmap_zero. And When kcalloc failed, call vzalloc to alloc
> > > > > > memory.
> > > > > >
> > > > > > Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
> > > > > > Signed-off-by: Ping Zhang <zhangping5@huawei.com>
> > > > > > Signed-off-by: Salil Mehta  <salil.mehta@huawei.com>
> > > > > > ---
> > > > > >  drivers/infiniband/hw/hns/hns_roce_mr.c |   15 ++++++++-----
> --
> > > > > >  1 file changed, 8 insertions(+), 7 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > > index fb87883..d3dfb5f 100644
> > > > > > --- a/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > > +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
> > > > > > @@ -137,11 +137,12 @@ static int hns_roce_buddy_init(struct
> > > > > hns_roce_buddy *buddy, int max_order)
> > > > > >
> > > > > >  	for (i = 0; i <= buddy->max_order; ++i) {
> > > > > >  		s = BITS_TO_LONGS(1 << (buddy->max_order - i));
> > > > > > -		buddy->bits[i] = kmalloc_array(s, sizeof(long),
> > > > > GFP_KERNEL);
> > > > > > -		if (!buddy->bits[i])
> > > > > > -			goto err_out_free;
> > > > > > -
> > > > > > -		bitmap_zero(buddy->bits[i], 1 << (buddy->max_order -
> > > i));
> > > > > > +		buddy->bits[i] = kcalloc(s, sizeof(long),
> > > GFP_KERNEL);
> > > > > > +		if (!buddy->bits[i]) {
> > > > > > +			buddy->bits[i] = vzalloc(s * sizeof(long));
> > > > >
> > > > > I wonder, why don't you use directly vzalloc instead of kcalloc
> > > > > fallback?
> > > > As we know we will have physical contiguous pages if the kcalloc
> > > > call succeeds. This will give us a chance to have better
> performance
> > > > over the allocations which are just virtually contiguous through
> the
> > > > function vzalloc(). Therefore, later has only been used as a
> fallback
> > > > when our memory request cannot be entertained through kcalloc.
> > > >
> > > > Are you suggesting that there will not be much performance
> penalty
> > > > if we use just vzalloc ?
> > >
> > > Not exactly,
> > > I asked it, because we have similar code in our drivers and this
> > > construction looks strange to me.
> > >
> > > 1. If performance is critical, we will use kmalloc.
> > > 2. If performance is not critical, we will use vmalloc.
> > >
> > > But in this case, such construction shows me that we can live with
> > > vmalloc performance and kmalloc allocation are not really needed.
> > >
> > > In your specific case, I'm not sure that kcalloc will ever fail.
> > Performance is definitely critical here. Though, I agree this is bit
> > unusual way of memory allocation. In actual, we were encountering
> > memory alloc failures using kmalloc (if you see allocation amount
> > is on the higher side and is exponential) so we ended up using
> > vmalloc as fall back - It is very naïve allocation scheme.
> 
> I understand it, we did the same, see our mlx5_vzalloc call.
> BTW, we used __GFP_NOWARN flag, which you should consider to use
> in your case too.
Ok. Will add this flag and refloat patch V3. 

Thanks
> 
> >
> > Maybe we need to rethink this allocation scheme part? Also, I can
> pull
> > back this particular patch for now or just live with vzalloc() till
> > we figure out proper solution to this?
> 
> It is up to you, I don't think that you should drop it, AFAIK, there is
> no other proper solution.
Ok, we will live with it for now, and later maybe we can see how we can
optimize pre-allocation of physically contiguous memory.

Thanks for your suggestions!
Salil
> 
> >
> > >
> > > Thanks
> > >
> > >
> > > >
> > > > >
> > > > > > +			if (!buddy->bits[i])
> > > > > > +				goto err_out_free;
> > > > > > +		}
> > > > > >  	}

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2016-11-21 20:20 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-04 16:36 [PATCH for-next 00/11] Code improvements & fixes for HNS RoCE driver Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 01/11] IB/hns: Add the interface for querying QP1 Salil Mehta
2016-11-07  5:45   ` Anurup M
2016-11-07 11:45     ` Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 02/11] IB/hns: Add code for refreshing CQ CI using TPTR Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 03/11] IB/hns: Optimize the logic of allocating memory using APIs Salil Mehta
2016-11-09  7:21   ` Leon Romanovsky
2016-11-15 15:52     ` Salil Mehta
2016-11-16  8:36       ` Leon Romanovsky
2016-11-21 16:12         ` Salil Mehta
2016-11-21 17:14           ` Leon Romanovsky
2016-11-21 20:20             ` Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 04/11] IB/hns: add self loopback for CM Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 05/11] IB/hns: Modify the condition of notifying hardware loopback Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 06/11] IB/hns: Fix the bug for qp state in hns_roce_v1_m_qp() Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 07/11] IB/hns: Modify the macro for the timeout when cmd process Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 08/11] IB/hns: Modify query info named port_num when querying RC QP Salil Mehta
2016-11-04 16:36 ` [PATCH for-next 09/11] IB/hns: Change qpn allocation to round-robin mode Salil Mehta
2016-11-09  7:24   ` Leon Romanovsky
2016-11-04 16:36 ` [PATCH for-next 10/11] IB/hns: Implement the add_gid/del_gid and optimize the GIDs management Salil Mehta
2016-11-07  8:17   ` Anurup M
2016-11-07 10:04     ` Shaobo Xu
2016-11-04 16:36 ` [PATCH for-next 11/11] IB/hns: Fix for Checkpatch.pl comment style errors Salil Mehta

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).