* [PATCH v2 for-next 0/2] Fixes for hip08 driver
@ 2019-08-29  8:41 Weihang Li
  2019-08-29  8:41 ` [PATCH v2 for-next 1/2] RDMA/hns: Optimize cmd init and mode selection for hip08 Weihang Li
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Weihang Li @ 2019-08-29  8:41 UTC (permalink / raw)
  To: dledford, jgg; +Cc: linux-rdma, linuxarm

This series optimizes some code, removes some warnings reported by the
sparse tool, and fixes some defects.

Changelog:
v1->v2: Removed a redundant judgment in patch 1/2 as Leon Romanovsky
	suggested. Reworked patch 2/2 according to comments from Doug
	Ledford. Also made some modifications to the titles and commit
	messages.

Lijun Ou (1):
  RDMA/hns: Package operations of rq inline buffer into separate
    functions

Yixian Liu (1):
  RDMA/hns: Optimize cmd init and mode selection for hip08

 drivers/infiniband/hw/hns/hns_roce_cmd.c  | 10 +---
 drivers/infiniband/hw/hns/hns_roce_main.c |  8 +--
 drivers/infiniband/hw/hns/hns_roce_qp.c   | 95 ++++++++++++++++++-------------
 3 files changed, 61 insertions(+), 52 deletions(-)

-- 
2.8.1



* [PATCH v2 for-next 1/2] RDMA/hns: Optimize cmd init and mode selection for hip08
  2019-08-29  8:41 [PATCH v2 for-next 0/2] Fixes for hip08 driver Weihang Li
@ 2019-08-29  8:41 ` Weihang Li
  2019-08-29  8:41 ` [PATCH v2 for-next 2/2] RDMA/hns: Package operations of rq inline buffer into separate functions Weihang Li
  2019-08-30 15:26 ` [PATCH v2 for-next 0/2] Fixes for hip08 driver Doug Ledford
  2 siblings, 0 replies; 4+ messages in thread
From: Weihang Li @ 2019-08-29  8:41 UTC (permalink / raw)
  To: dledford, jgg; +Cc: linux-rdma, linuxarm

From: Yixian Liu <liuyixian@huawei.com>

There are two modes for the mailbox command (cmd) queue, event mode and
poll mode. Each mode uses its own semaphore, event_sem and poll_sem
respectively, to protect the cmd queue from concurrent access. During
cmd init, both semaphores are initialized and poll mode is selected, so
there is no need to up poll_sem again in cmd_use_polling.

Furthermore, there is no need to down the semaphore of the other mode
while switching modes. This patch decouples the switch between event
mode and poll mode of the cmd queue.
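
For context, here is a minimal sketch of how the two semaphores
serialize mailbox commands in each mode. This is illustrative only, not
the actual driver code; the helper names mbox_poll() and mbox_event()
are hypothetical, while the struct and field names (hns_roce_cmdq,
poll_sem, event_sem, max_cmds, use_events) come from the driver:

static int mbox_poll(struct hns_roce_cmdq *cmd)
{
	/* poll mode: poll_sem allows only one command at a time */
	down(&cmd->poll_sem);
	/* ... write the mailbox and busy-wait for completion ... */
	up(&cmd->poll_sem);
	return 0;
}

static int mbox_event(struct hns_roce_cmdq *cmd)
{
	/* event mode: event_sem allows up to max_cmds commands in flight */
	down(&cmd->event_sem);
	/* ... write the mailbox and sleep until the completion event ... */
	up(&cmd->event_sem);
	return 0;
}

Since both semaphores are already initialized at cmd init time,
switching modes only needs to update use_events (and allocate or free
the event context); the extra down()/up() calls removed below are not
needed.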

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_cmd.c  | 10 +---------
 drivers/infiniband/hw/hns/hns_roce_main.c |  8 ++++----
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
index ade26fa..455d533 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
@@ -251,23 +251,15 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
 	hr_cmd->token_mask = CMD_TOKEN_MASK;
 	hr_cmd->use_events = 1;
 
-	down(&hr_cmd->poll_sem);
-
 	return 0;
 }
 
 void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev)
 {
 	struct hns_roce_cmdq *hr_cmd = &hr_dev->cmd;
-	int i;
-
-	hr_cmd->use_events = 0;
-
-	for (i = 0; i < hr_cmd->max_cmds; ++i)
-		down(&hr_cmd->event_sem);
 
 	kfree(hr_cmd->context);
-	up(&hr_cmd->poll_sem);
+	hr_cmd->use_events = 0;
 }
 
 struct hns_roce_cmd_mailbox
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 1b757cc..b5d196c 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -902,6 +902,7 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
 		goto error_failed_cmd_init;
 	}
 
+	/* EQ depends on poll mode, event mode depends on EQ */
 	ret = hr_dev->hw->init_eq(hr_dev);
 	if (ret) {
 		dev_err(dev, "eq init failed!\n");
@@ -911,8 +912,9 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
 	if (hr_dev->cmd_mod) {
 		ret = hns_roce_cmd_use_events(hr_dev);
 		if (ret) {
-			dev_err(dev, "Switch to event-driven cmd failed!\n");
-			goto error_failed_use_event;
+			dev_warn(dev,
+				 "Cmd event  mode failed, set back to poll!\n");
+			hns_roce_cmd_use_polling(hr_dev);
 		}
 	}
 
@@ -955,8 +957,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
 error_failed_init_hem:
 	if (hr_dev->cmd_mod)
 		hns_roce_cmd_use_polling(hr_dev);
-
-error_failed_use_event:
 	hr_dev->hw->cleanup_eq(hr_dev);
 
 error_failed_eq_table:
-- 
2.8.1



* [PATCH v2 for-next 2/2] RDMA/hns: Package operations of rq inline buffer into separate functions
  2019-08-29  8:41 [PATCH v2 for-next 0/2] Fixes for hip08 driver Weihang Li
  2019-08-29  8:41 ` [PATCH v2 for-next 1/2] RDMA/hns: Optimize cmd init and mode selection for hip08 Weihang Li
@ 2019-08-29  8:41 ` Weihang Li
  2019-08-30 15:26 ` [PATCH v2 for-next 0/2] Fixes for hip08 driver Doug Ledford
  2 siblings, 0 replies; 4+ messages in thread
From: Weihang Li @ 2019-08-29  8:41 UTC (permalink / raw)
  To: dledford, jgg; +Cc: linux-rdma, linuxarm

From: Lijun Ou <oulijun@huawei.com>

This patch packages the code for allocating and freeing the rq inline
buffer in hns_roce_create_qp_common() into separate functions in order
to reduce complexity.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
---
 drivers/infiniband/hw/hns/hns_roce_qp.c | 95 +++++++++++++++++++--------------
 1 file changed, 56 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 8868172..bd78ff9 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -635,6 +635,50 @@ static int hns_roce_qp_has_rq(struct ib_qp_init_attr *attr)
 	return 1;
 }
 
+static int alloc_rq_inline_buf(struct hns_roce_qp *hr_qp,
+			       struct ib_qp_init_attr *init_attr)
+{
+	u32 max_recv_sge = init_attr->cap.max_recv_sge;
+	struct hns_roce_rinl_wqe *wqe_list;
+	u32 wqe_cnt = hr_qp->rq.wqe_cnt;
+	int i;
+
+	/* allocate recv inline buf */
+	wqe_list = kcalloc(wqe_cnt, sizeof(struct hns_roce_rinl_wqe),
+			   GFP_KERNEL);
+
+	if (!wqe_list)
+		goto err;
+
+	/* Allocate a continuous buffer for all inline sge we need */
+	wqe_list[0].sg_list = kcalloc(wqe_cnt, (max_recv_sge *
+				      sizeof(struct hns_roce_rinl_sge)),
+				      GFP_KERNEL);
+	if (!wqe_list[0].sg_list)
+		goto err_wqe_list;
+
+	/* Assign buffers of sg_list to each inline wqe */
+	for (i = 1; i < wqe_cnt; i++)
+		wqe_list[i].sg_list = &wqe_list[0].sg_list[i * max_recv_sge];
+
+	hr_qp->rq_inl_buf.wqe_list = wqe_list;
+	hr_qp->rq_inl_buf.wqe_cnt = wqe_cnt;
+
+	return 0;
+
+err_wqe_list:
+	kfree(wqe_list);
+
+err:
+	return -ENOMEM;
+}
+
+static void free_rq_inline_buf(struct hns_roce_qp *hr_qp)
+{
+	kfree(hr_qp->rq_inl_buf.wqe_list[0].sg_list);
+	kfree(hr_qp->rq_inl_buf.wqe_list);
+}
+
 static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 				     struct ib_pd *ib_pd,
 				     struct ib_qp_init_attr *init_attr,
@@ -676,33 +720,11 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 
 	if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) &&
 	    hns_roce_qp_has_rq(init_attr)) {
-		/* allocate recv inline buf */
-		hr_qp->rq_inl_buf.wqe_list = kcalloc(hr_qp->rq.wqe_cnt,
-					       sizeof(struct hns_roce_rinl_wqe),
-					       GFP_KERNEL);
-		if (!hr_qp->rq_inl_buf.wqe_list) {
-			ret = -ENOMEM;
+		ret = alloc_rq_inline_buf(hr_qp, init_attr);
+		if (ret) {
+			dev_err(dev, "allocate receive inline buffer failed\n");
 			goto err_out;
 		}
-
-		hr_qp->rq_inl_buf.wqe_cnt = hr_qp->rq.wqe_cnt;
-
-		/* Firstly, allocate a list of sge space buffer */
-		hr_qp->rq_inl_buf.wqe_list[0].sg_list =
-					kcalloc(hr_qp->rq_inl_buf.wqe_cnt,
-					       init_attr->cap.max_recv_sge *
-					       sizeof(struct hns_roce_rinl_sge),
-					       GFP_KERNEL);
-		if (!hr_qp->rq_inl_buf.wqe_list[0].sg_list) {
-			ret = -ENOMEM;
-			goto err_wqe_list;
-		}
-
-		for (i = 1; i < hr_qp->rq_inl_buf.wqe_cnt; i++)
-			/* Secondly, reallocate the buffer */
-			hr_qp->rq_inl_buf.wqe_list[i].sg_list =
-				&hr_qp->rq_inl_buf.wqe_list[0].sg_list[i *
-				init_attr->cap.max_recv_sge];
 	}
 
 	page_shift = PAGE_SHIFT + hr_dev->caps.mtt_buf_pg_sz;
@@ -710,14 +732,14 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 		if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
 			dev_err(dev, "ib_copy_from_udata error for create qp\n");
 			ret = -EFAULT;
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 
 		ret = hns_roce_set_user_sq_size(hr_dev, &init_attr->cap, hr_qp,
 						&ucmd);
 		if (ret) {
 			dev_err(dev, "hns_roce_set_user_sq_size error for create qp\n");
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 
 		hr_qp->umem = ib_umem_get(udata, ucmd.buf_addr,
@@ -725,7 +747,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 		if (IS_ERR(hr_qp->umem)) {
 			dev_err(dev, "ib_umem_get error for create qp\n");
 			ret = PTR_ERR(hr_qp->umem);
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 		hr_qp->region_cnt = split_wqe_buf_region(hr_dev, hr_qp,
 				hr_qp->regions, ARRAY_SIZE(hr_qp->regions),
@@ -786,13 +808,13 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 		    IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
 			dev_err(dev, "init_attr->create_flags error!\n");
 			ret = -EINVAL;
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 
 		if (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO) {
 			dev_err(dev, "init_attr->create_flags error!\n");
 			ret = -EINVAL;
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 
 		/* Set SQ size */
@@ -800,7 +822,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 						  hr_qp);
 		if (ret) {
 			dev_err(dev, "hns_roce_set_kernel_sq_size error!\n");
-			goto err_rq_sge_list;
+			goto err_alloc_rq_inline_buf;
 		}
 
 		/* QP doorbell register address */
@@ -814,7 +836,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 			ret = hns_roce_alloc_db(hr_dev, &hr_qp->rdb, 0);
 			if (ret) {
 				dev_err(dev, "rq record doorbell alloc failed!\n");
-				goto err_rq_sge_list;
+				goto err_alloc_rq_inline_buf;
 			}
 			*hr_qp->rdb.db_record = 0;
 			hr_qp->rdb_en = 1;
@@ -980,15 +1002,10 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 	    (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB))
 		hns_roce_free_db(hr_dev, &hr_qp->rdb);
 
-err_rq_sge_list:
-	if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) &&
-	     hns_roce_qp_has_rq(init_attr))
-		kfree(hr_qp->rq_inl_buf.wqe_list[0].sg_list);
-
-err_wqe_list:
+err_alloc_rq_inline_buf:
 	if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) &&
 	     hns_roce_qp_has_rq(init_attr))
-		kfree(hr_qp->rq_inl_buf.wqe_list);
+		free_rq_inline_buf(hr_qp);
 
 err_out:
 	return ret;
-- 
2.8.1



* Re: [PATCH v2 for-next 0/2] Fixes for hip08 driver
  2019-08-29  8:41 [PATCH v2 for-next 0/2] Fixes for hip08 driver Weihang Li
  2019-08-29  8:41 ` [PATCH v2 for-next 1/2] RDMA/hns: Optimize cmd init and mode selection for hip08 Weihang Li
  2019-08-29  8:41 ` [PATCH v2 for-next 2/2] RDMA/hns: Package operations of rq inline buffer into separate functions Weihang Li
@ 2019-08-30 15:26 ` Doug Ledford
  2 siblings, 0 replies; 4+ messages in thread
From: Doug Ledford @ 2019-08-30 15:26 UTC (permalink / raw)
  To: Weihang Li, jgg; +Cc: linux-rdma, linuxarm


On Thu, 2019-08-29 at 16:41 +0800, Weihang Li wrote:
> This series optimizes some code, removes some warnings reported by the
> sparse tool, and fixes some defects.
> 
> Changelog:
> v1->v2: Removed a redundant judgment in patch 1/2 as Leon Romanovsky
> 	suggested. Reworked patch 2/2 according to comments from Doug
> 	Ledford. Also made some modifications to the titles and commit
> 	messages.
> 
> Lijun Ou (1):
>   RDMA/hns: Package operations of rq inline buffer into separate
>     functions
> 
> Yixian Liu (1):
>   RDMA/hns: Optimize cmd init and mode selection for hip08
> 
>  drivers/infiniband/hw/hns/hns_roce_cmd.c  | 10 +---
>  drivers/infiniband/hw/hns/hns_roce_main.c |  8 +--
>  drivers/infiniband/hw/hns/hns_roce_qp.c   | 95 ++++++++++++++++++-------------
>  3 files changed, 61 insertions(+), 52 deletions(-)
> 

Thanks, applied to for-next.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD

