* [PATCHv2 for-next 0/5] misc update for RTRS
@ 2021-06-11 12:10 Jack Wang
  2021-06-11 12:10 ` [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr Jack Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang

Hi Jason, hi Doug,

Please consider including the following changes for the next merge window.
The series contains:
- the first two patches reduce memory usage when creating a QP (see the
  sketch after this list);
- the third one makes testing on RXE work;
- the fourth one is just a cleanup of variable names;
- the last one is new and checks the device max_qp_wr limit, as suggested by Leon.
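
For reference, the numbers this series tunes are the ones ultimately handed to
rdma_create_qp() through struct ib_qp_init_attr. Below is a minimal sketch of
how rtrs' create_qp() wires them up (simplified; error handling, the event
handler and the remaining attributes are omitted):

  static int create_qp(struct rtrs_con *con, struct ib_pd *pd,
                       u32 max_send_wr, u32 max_recv_wr, u32 max_sge)
  {
          struct ib_qp_init_attr init_attr = {};

          /*
           * Every WR/SGE slot requested here is backed by queue memory
           * allocated by the HCA driver, so oversized caps waste memory
           * on every QP of every connection.
           */
          init_attr.cap.max_send_wr  = max_send_wr;
          init_attr.cap.max_recv_wr  = max_recv_wr;
          init_attr.cap.max_send_sge = max_sge;
          init_attr.cap.max_recv_sge = 1;
          init_attr.qp_type = IB_QPT_RC;
          init_attr.send_cq = con->cq;
          init_attr.recv_cq = con->cq;

          return rdma_create_qp(con->cm_id, pd, &init_attr);
  }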

v1->v2:
- Added a new patch to check the device max_qp_wr limit when creating a QP.

v1: https://lore.kernel.org/linux-rdma/20210608103039.39080-1-jinpu.wang@ionos.com/T/#t

This patchset is based on the rdma/for-next branch.

Note: I verified that the patchset for fast memory registration on the write path still applies on top of this series:
https://lore.kernel.org/linux-rdma/20210608113536.42965-1-jinpu.wang@ionos.com/T/#t

Guoqing Jiang (1):
  RDMA/rtrs: Rename some variables to avoid confusion

Jack Wang (3):
  RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr
  RDMA/rtrs-clt: Use minimal max_send_sge when create qp
  RDMA/rtrs: Check device max_qp_wr limit when create QP

Md Haris Iqbal (1):
  RDMA/rtrs: RDMA_RXE requires more number of WR

 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 50 ++++++++++++++------------
 drivers/infiniband/ulp/rtrs/rtrs-clt.h |  3 +-
 drivers/infiniband/ulp/rtrs/rtrs-pri.h | 10 +++---
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 34 ++++++++++--------
 drivers/infiniband/ulp/rtrs/rtrs.c     | 24 ++++++-------
 5 files changed, 65 insertions(+), 56 deletions(-)

-- 
2.25.1



* [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr
  2021-06-11 12:10 [PATCHv2 for-next 0/5] misc update for RTRS Jack Wang
@ 2021-06-11 12:10 ` Jack Wang
  2021-06-13  6:21   ` Leon Romanovsky
  2021-06-11 12:10 ` [PATCHv2 for-next 2/5] RDMA/rtrs-clt: Use minimal max_send_sge when create qp Jack Wang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma
  Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang,
	Jack Wang, Md Haris Iqbal, Gioh Kim

From: Jack Wang <jinpu.wang@cloud.ionos.com>

Currently, rtrs uses coarse (generally too large) numbers when creating
a QP, which leads the hardware to allocate more resources than needed
and only wastes memory with no benefit.

For max_send_wr, we don't really need the max_qp_wr size when creating
the QP; reduce it to cq_size.

For max_recv_wr, cq_size is enough.

With this patch, when sess_queue_depth=128, per-session (2 paths)
memory consumption is reduced from 188 MB to 65 MB.

When always_invalidate is enabled, we need to send more WRs,
so treat that case specially.
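
For illustration (a back-of-the-envelope sketch, not part of the patch): on an
IO connection with sess_queue_depth=128 and a device reporting, say,
max_qp_wr = 32k (made-up figure, the real value is device specific), the
per-connection budget changes roughly as follows (formulas as in the diff below):

  /* before: both queues sized purely from the device limit */
  wr_queue_size = max_qp_wr / 3;                  /* ~10922 WRs, for send and recv each */

  /* after, with always_invalidate off: sized from the session queue depth */
  max_send_wr = min(wr_limit, 128 * (1 + 2) + 1); /* 385 */
  max_recv_wr = 128 + 1;                          /* 129 */
  cq_size     = max_send_wr + max_recv_wr;        /* 514 CQEs */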

Fixes: 9cb837480424e ("RDMA/rtrs: server: main functionality")
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 38 +++++++++++++++++---------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index 5639b29b8b02..04ec3080e9b5 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1634,7 +1634,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 	struct rtrs_sess *s = &sess->s;
 	struct rtrs_srv_con *con;
 
-	u32 cq_size, wr_queue_size;
+	u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
 	int err, cq_vector;
 
 	con = kzalloc(sizeof(*con), GFP_KERNEL);
@@ -1655,30 +1655,42 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * All receive and all send (each requiring invalidate)
 		 * + 2 for drain and heartbeat
 		 */
-		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
-		cq_size = wr_queue_size;
+		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
+		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
+		cq_size = max_send_wr + max_recv_wr;
 	} else {
-		/*
-		 * If we have all receive requests posted and
-		 * all write requests posted and each read request
-		 * requires an invalidate request + drain
-		 * and qp gets into error state.
-		 */
-		cq_size = srv->queue_depth * 3 + 1;
 		/*
 		 * In theory we might have queue_depth * 32
 		 * outstanding requests if an unsafe global key is used
 		 * and we have queue_depth read requests each consisting
 		 * of 32 different addresses. div 3 for mlx5.
 		 */
-		wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
+		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
+		/* when always_invalidate is enabled, we need linv+rinv+mr+imm */
+		if (always_invalidate)
+			max_send_wr =
+				min_t(int, wr_limit,
+				      srv->queue_depth * (1 + 4) + 1);
+		else
+			max_send_wr =
+				min_t(int, wr_limit,
+				      srv->queue_depth * (1 + 2) + 1);
+
+		max_recv_wr = srv->queue_depth + 1;
+		/*
+		 * If we have all receive requests posted and
+		 * all write requests posted and each read request
+		 * requires an invalidate request + drain
+		 * and qp gets into error state.
+		 */
+		cq_size = max_send_wr + max_recv_wr;
 	}
-	atomic_set(&con->sq_wr_avail, wr_queue_size);
+	atomic_set(&con->sq_wr_avail, max_send_wr);
 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
 
 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
 	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
-				 wr_queue_size, wr_queue_size,
+				 max_send_wr, max_recv_wr,
 				 IB_POLL_WORKQUEUE);
 	if (err) {
 		rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);
-- 
2.25.1



* [PATCHv2 for-next 2/5] RDMA/rtrs-clt: Use minimal max_send_sge when create qp
  2021-06-11 12:10 [PATCHv2 for-next 0/5] misc update for RTRS Jack Wang
  2021-06-11 12:10 ` [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr Jack Wang
@ 2021-06-11 12:10 ` Jack Wang
  2021-06-13  6:23   ` Leon Romanovsky
  2021-06-11 12:10 ` [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR Jack Wang
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma
  Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang,
	Jack Wang, Guoqing Jiang

From: Jack Wang <jinpu.wang@cloud.ionos.com>

We currently use the device limit max_send_sge, which is suboptimal for
memory usage. We don't need that much for the user connection; 1 is
enough. And for IO connections, sess->max_segments + 1 is enough.
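
A condensed view of the change (shorthand only; see the diff below for the
exact context in create_con_cq_qp()):

  /* before: whatever the device supports, e.g. dozens of SGEs */
  max_send_sge = ib_dev->attrs.max_send_sge;

  /* after: only what each connection type can actually use */
  max_send_sge = (con->c.cid == 0) ? 1 /* user con: small control messages */
                                   : sess->clt->max_segments + 1;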

Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 14 ++++++++------
 drivers/infiniband/ulp/rtrs/rtrs-clt.h |  1 -
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index f1fd7ae9ac53..cd53edddfe1f 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1417,7 +1417,6 @@ static void query_fast_reg_mode(struct rtrs_clt_sess *sess)
 	sess->max_pages_per_mr =
 		min3(sess->max_pages_per_mr, (u32)max_pages_per_mr,
 		     ib_dev->attrs.max_fast_reg_page_list_len);
-	sess->max_send_sge = ib_dev->attrs.max_send_sge;
 }
 
 static bool rtrs_clt_change_state_get_old(struct rtrs_clt_sess *sess,
@@ -1573,7 +1572,7 @@ static void destroy_con(struct rtrs_clt_con *con)
 static int create_con_cq_qp(struct rtrs_clt_con *con)
 {
 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
-	u32 max_send_wr, max_recv_wr, cq_size;
+	u32 max_send_wr, max_recv_wr, cq_size, max_send_sge;
 	int err, cq_vector;
 	struct rtrs_msg_rkey_rsp *rsp;
 
@@ -1587,6 +1586,7 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
+		max_send_sge = 1;
 		/* We must be the first here */
 		if (WARN_ON(sess->s.dev))
 			return -EINVAL;
@@ -1625,25 +1625,27 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 		max_recv_wr =
 			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
 			      sess->queue_depth * 3 + 1);
+		max_send_sge = sess->clt->max_segments + 1;
 	}
+	cq_size = max_send_wr + max_recv_wr;
 	/* alloc iu to recv new rkey reply when server reports flags set */
 	if (sess->flags & RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
-		con->rsp_ius = rtrs_iu_alloc(max_recv_wr, sizeof(*rsp),
+		con->rsp_ius = rtrs_iu_alloc(cq_size, sizeof(*rsp),
 					      GFP_KERNEL, sess->s.dev->ib_dev,
 					      DMA_FROM_DEVICE,
 					      rtrs_clt_rdma_done);
 		if (!con->rsp_ius)
 			return -ENOMEM;
-		con->queue_size = max_recv_wr;
+		con->queue_size = cq_size;
 	}
 	cq_size = max_send_wr + max_recv_wr;
 	cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors;
 	if (con->c.cid >= sess->s.irq_con_num)
-		err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+		err = rtrs_cq_qp_create(&sess->s, &con->c, max_send_sge,
 					cq_vector, cq_size, max_send_wr,
 					max_recv_wr, IB_POLL_DIRECT);
 	else
-		err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+		err = rtrs_cq_qp_create(&sess->s, &con->c, max_send_sge,
 					cq_vector, cq_size, max_send_wr,
 					max_recv_wr, IB_POLL_SOFTIRQ);
 	/*
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
index 919c9f96f25b..822a820540d4 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
@@ -141,7 +141,6 @@ struct rtrs_clt_sess {
 	u32			chunk_size;
 	size_t			queue_depth;
 	u32			max_pages_per_mr;
-	int			max_send_sge;
 	u32			flags;
 	struct kobject		kobj;
 	u8			for_new_clt;
-- 
2.25.1



* [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR
  2021-06-11 12:10 [PATCHv2 for-next 0/5] misc update for RTRS Jack Wang
  2021-06-11 12:10 ` [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr Jack Wang
  2021-06-11 12:10 ` [PATCHv2 for-next 2/5] RDMA/rtrs-clt: Use minimal max_send_sge when create qp Jack Wang
@ 2021-06-11 12:10 ` Jack Wang
  2021-06-13  6:25   ` Leon Romanovsky
  2021-06-11 12:10 ` [PATCHv2 for-next 4/5] RDMA/rtrs: Rename some variables to avoid confusion Jack Wang
  2021-06-11 12:10 ` [PATCHv2 for-next 5/5] RDMA/rtrs: Check device max_qp_wr limit when create QP Jack Wang
  4 siblings, 1 reply; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma
  Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang,
	Md Haris Iqbal, Jack Wang, Gioh Kim

From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>

When using rdma_rxe, post_one_recv() returns a NOMEM error due to a
full receive queue. This patch increases the number of WRs for the
receive queue to support all devices.
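
In terms of the sizing formulas on the server-side service connection, the
change is (sketch; the client-side hunk below only updates a comment, the
values there were already doubled):

  max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
  max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;  /* was: SERVICE_CON_QUEUE_DEPTH + 2 */
  cq_size     = max_send_wr + max_recv_wr;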

Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 7 ++++---
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index cd53edddfe1f..acf0fde410c3 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1579,10 +1579,11 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 	lockdep_assert_held(&con->con_mutex);
 	if (con->c.cid == 0) {
 		/*
-		 * One completion for each receive and two for each send
-		 * (send request + registration)
+		 * Two (request + registration) completion for send
+		 * Two for recv if always_invalidate is set on server
+		 * or one for recv.
 		 * + 2 for drain and heartbeat
-		 * in case qp gets into error state
+		 * in case qp gets into error state.
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index 04ec3080e9b5..bb73f7762a87 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1656,7 +1656,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * + 2 for drain and heartbeat
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
+		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		cq_size = max_send_wr + max_recv_wr;
 	} else {
 		/*
-- 
2.25.1



* [PATCHv2 for-next 4/5] RDMA/rtrs: Rename some variables to avoid confusion
  2021-06-11 12:10 [PATCHv2 for-next 0/5] misc update for RTRS Jack Wang
                   ` (2 preceding siblings ...)
  2021-06-11 12:10 ` [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR Jack Wang
@ 2021-06-11 12:10 ` Jack Wang
  2021-06-13  6:32   ` Leon Romanovsky
  2021-06-11 12:10 ` [PATCHv2 for-next 5/5] RDMA/rtrs: Check device max_qp_wr limit when create QP Jack Wang
  4 siblings, 1 reply; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma
  Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang,
	Guoqing Jiang, Md Haris Iqbal, Jack Wang

From: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>

These variables are passed to create_cq, create_qp, rtrs_iu_alloc and
rtrs_iu_free, where the *_size suffix actually denotes a number of
units, and cq_size likewise denotes the number of CQ elements; rename
them to *_num to make that explicit.

Also move the setting of cq_num to the common path.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 18 +++++++++---------
 drivers/infiniband/ulp/rtrs/rtrs-clt.h |  2 +-
 drivers/infiniband/ulp/rtrs/rtrs-pri.h | 10 +++++-----
 drivers/infiniband/ulp/rtrs/rtrs-srv.c |  7 +++----
 drivers/infiniband/ulp/rtrs/rtrs.c     | 24 ++++++++++++------------
 5 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index acf0fde410c3..67ff5bf9bfa8 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1572,7 +1572,7 @@ static void destroy_con(struct rtrs_clt_con *con)
 static int create_con_cq_qp(struct rtrs_clt_con *con)
 {
 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
-	u32 max_send_wr, max_recv_wr, cq_size, max_send_sge;
+	u32 max_send_wr, max_recv_wr, cq_num, max_send_sge;
 	int err, cq_vector;
 	struct rtrs_msg_rkey_rsp *rsp;
 
@@ -1628,26 +1628,26 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 			      sess->queue_depth * 3 + 1);
 		max_send_sge = sess->clt->max_segments + 1;
 	}
-	cq_size = max_send_wr + max_recv_wr;
+	cq_num = max_send_wr + max_recv_wr;
 	/* alloc iu to recv new rkey reply when server reports flags set */
 	if (sess->flags & RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) {
-		con->rsp_ius = rtrs_iu_alloc(cq_size, sizeof(*rsp),
+		con->rsp_ius = rtrs_iu_alloc(cq_num, sizeof(*rsp),
 					      GFP_KERNEL, sess->s.dev->ib_dev,
 					      DMA_FROM_DEVICE,
 					      rtrs_clt_rdma_done);
 		if (!con->rsp_ius)
 			return -ENOMEM;
-		con->queue_size = cq_size;
+		con->queue_num = cq_num;
 	}
-	cq_size = max_send_wr + max_recv_wr;
+	cq_num = max_send_wr + max_recv_wr;
 	cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors;
 	if (con->c.cid >= sess->s.irq_con_num)
 		err = rtrs_cq_qp_create(&sess->s, &con->c, max_send_sge,
-					cq_vector, cq_size, max_send_wr,
+					cq_vector, cq_num, max_send_wr,
 					max_recv_wr, IB_POLL_DIRECT);
 	else
 		err = rtrs_cq_qp_create(&sess->s, &con->c, max_send_sge,
-					cq_vector, cq_size, max_send_wr,
+					cq_vector, cq_num, max_send_wr,
 					max_recv_wr, IB_POLL_SOFTIRQ);
 	/*
 	 * In case of error we do not bother to clean previous allocations,
@@ -1667,9 +1667,9 @@ static void destroy_con_cq_qp(struct rtrs_clt_con *con)
 	lockdep_assert_held(&con->con_mutex);
 	rtrs_cq_qp_destroy(&con->c);
 	if (con->rsp_ius) {
-		rtrs_iu_free(con->rsp_ius, sess->s.dev->ib_dev, con->queue_size);
+		rtrs_iu_free(con->rsp_ius, sess->s.dev->ib_dev, con->queue_num);
 		con->rsp_ius = NULL;
-		con->queue_size = 0;
+		con->queue_num = 0;
 	}
 	if (sess->s.dev_ref && !--sess->s.dev_ref) {
 		rtrs_ib_dev_put(sess->s.dev);
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
index 822a820540d4..eed2a20ee9be 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
@@ -71,7 +71,7 @@ struct rtrs_clt_stats {
 struct rtrs_clt_con {
 	struct rtrs_con	c;
 	struct rtrs_iu		*rsp_ius;
-	u32			queue_size;
+	u32			queue_num;
 	unsigned int		cpu;
 	struct mutex		con_mutex;
 	atomic_t		io_cnt;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
index bd06a79fd516..76cca2058f6f 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
@@ -95,7 +95,7 @@ struct rtrs_con {
 	struct ib_cq		*cq;
 	struct rdma_cm_id	*cm_id;
 	unsigned int		cid;
-	u16                     cq_size;
+	int                     nr_cqe;
 };
 
 struct rtrs_sess {
@@ -294,10 +294,10 @@ struct rtrs_msg_rdma_hdr {
 
 /* rtrs.c */
 
-struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t t,
+struct rtrs_iu *rtrs_iu_alloc(u32 queue_num, size_t size, gfp_t t,
 			      struct ib_device *dev, enum dma_data_direction,
 			      void (*done)(struct ib_cq *cq, struct ib_wc *wc));
-void rtrs_iu_free(struct rtrs_iu *iu, struct ib_device *dev, u32 queue_size);
+void rtrs_iu_free(struct rtrs_iu *iu, struct ib_device *dev, u32 queue_num);
 int rtrs_iu_post_recv(struct rtrs_con *con, struct rtrs_iu *iu);
 int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
 		      struct ib_send_wr *head);
@@ -312,8 +312,8 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
 				   u32 imm_data, enum ib_send_flags flags,
 				   struct ib_send_wr *head);
 
-int rtrs_cq_qp_create(struct rtrs_sess *rtrs_sess, struct rtrs_con *con,
-		      u32 max_send_sge, int cq_vector, int cq_size,
+int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con,
+		      u32 max_send_sge, int cq_vector, int nr_cqe,
 		      u32 max_send_wr, u32 max_recv_wr,
 		      enum ib_poll_context poll_ctx);
 void rtrs_cq_qp_destroy(struct rtrs_con *con);
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index bb73f7762a87..c10dfc296259 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1634,7 +1634,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 	struct rtrs_sess *s = &sess->s;
 	struct rtrs_srv_con *con;
 
-	u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
+	u32 cq_num, max_send_wr, max_recv_wr, wr_limit;
 	int err, cq_vector;
 
 	con = kzalloc(sizeof(*con), GFP_KERNEL);
@@ -1657,7 +1657,6 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		cq_size = max_send_wr + max_recv_wr;
 	} else {
 		/*
 		 * In theory we might have queue_depth * 32
@@ -1683,13 +1682,13 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * requires an invalidate request + drain
 		 * and qp gets into error state.
 		 */
-		cq_size = max_send_wr + max_recv_wr;
 	}
+	cq_num = max_send_wr + max_recv_wr;
 	atomic_set(&con->sq_wr_avail, max_send_wr);
 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
 
 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
-	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
+	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_num,
 				 max_send_wr, max_recv_wr,
 				 IB_POLL_WORKQUEUE);
 	if (err) {
diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
index 4e602e40f623..08e1f7d82c95 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs.c
@@ -18,7 +18,7 @@
 MODULE_DESCRIPTION("RDMA Transport Core");
 MODULE_LICENSE("GPL");
 
-struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
+struct rtrs_iu *rtrs_iu_alloc(u32 iu_num, size_t size, gfp_t gfp_mask,
 			      struct ib_device *dma_dev,
 			      enum dma_data_direction dir,
 			      void (*done)(struct ib_cq *cq, struct ib_wc *wc))
@@ -26,10 +26,10 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
 	struct rtrs_iu *ius, *iu;
 	int i;
 
-	ius = kcalloc(queue_size, sizeof(*ius), gfp_mask);
+	ius = kcalloc(iu_num, sizeof(*ius), gfp_mask);
 	if (!ius)
 		return NULL;
-	for (i = 0; i < queue_size; i++) {
+	for (i = 0; i < iu_num; i++) {
 		iu = &ius[i];
 		iu->direction = dir;
 		iu->buf = kzalloc(size, gfp_mask);
@@ -50,7 +50,7 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_alloc);
 
-void rtrs_iu_free(struct rtrs_iu *ius, struct ib_device *ibdev, u32 queue_size)
+void rtrs_iu_free(struct rtrs_iu *ius, struct ib_device *ibdev, u32 queue_num)
 {
 	struct rtrs_iu *iu;
 	int i;
@@ -58,7 +58,7 @@ void rtrs_iu_free(struct rtrs_iu *ius, struct ib_device *ibdev, u32 queue_size)
 	if (!ius)
 		return;
 
-	for (i = 0; i < queue_size; i++) {
+	for (i = 0; i < queue_num; i++) {
 		iu = &ius[i];
 		ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, iu->direction);
 		kfree(iu->buf);
@@ -212,20 +212,20 @@ static void qp_event_handler(struct ib_event *ev, void *ctx)
 	}
 }
 
-static int create_cq(struct rtrs_con *con, int cq_vector, u16 cq_size,
+static int create_cq(struct rtrs_con *con, int cq_vector, int nr_cqe,
 		     enum ib_poll_context poll_ctx)
 {
 	struct rdma_cm_id *cm_id = con->cm_id;
 	struct ib_cq *cq;
 
-	cq = ib_cq_pool_get(cm_id->device, cq_size, cq_vector, poll_ctx);
+	cq = ib_cq_pool_get(cm_id->device, nr_cqe, cq_vector, poll_ctx);
 	if (IS_ERR(cq)) {
 		rtrs_err(con->sess, "Creating completion queue failed, errno: %ld\n",
 			  PTR_ERR(cq));
 		return PTR_ERR(cq);
 	}
 	con->cq = cq;
-	con->cq_size = cq_size;
+	con->nr_cqe = nr_cqe;
 
 	return 0;
 }
@@ -260,20 +260,20 @@ static int create_qp(struct rtrs_con *con, struct ib_pd *pd,
 }
 
 int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con,
-		       u32 max_send_sge, int cq_vector, int cq_size,
+		       u32 max_send_sge, int cq_vector, int nr_cqe,
 		       u32 max_send_wr, u32 max_recv_wr,
 		       enum ib_poll_context poll_ctx)
 {
 	int err;
 
-	err = create_cq(con, cq_vector, cq_size, poll_ctx);
+	err = create_cq(con, cq_vector, nr_cqe, poll_ctx);
 	if (err)
 		return err;
 
 	err = create_qp(con, sess->dev->ib_pd, max_send_wr, max_recv_wr,
 			max_send_sge);
 	if (err) {
-		ib_cq_pool_put(con->cq, con->cq_size);
+		ib_cq_pool_put(con->cq, con->nr_cqe);
 		con->cq = NULL;
 		return err;
 	}
@@ -290,7 +290,7 @@ void rtrs_cq_qp_destroy(struct rtrs_con *con)
 		con->qp = NULL;
 	}
 	if (con->cq) {
-		ib_cq_pool_put(con->cq, con->cq_size);
+		ib_cq_pool_put(con->cq, con->nr_cqe);
 		con->cq = NULL;
 	}
 }
-- 
2.25.1



* [PATCHv2 for-next 5/5] RDMA/rtrs: Check device max_qp_wr limit when create QP
  2021-06-11 12:10 [PATCHv2 for-next 0/5] misc update for RTRS Jack Wang
                   ` (3 preceding siblings ...)
  2021-06-11 12:10 ` [PATCHv2 for-next 4/5] RDMA/rtrs: Rename some variables to avoid confusion Jack Wang
@ 2021-06-11 12:10 ` Jack Wang
  2021-06-13  6:36   ` Leon Romanovsky
  4 siblings, 1 reply; 12+ messages in thread
From: Jack Wang @ 2021-06-11 12:10 UTC (permalink / raw)
  To: linux-rdma
  Cc: bvanassche, leon, dledford, jgg, haris.iqbal, jinpu.wang,
	Leon Romanovsky, Gioh Kim

Currently we only check the device max_qp_wr limit for IO connections,
but not for service connections. We should check it for both.

So save the device max_qp_wr limit in wr_limit, and use it for both
IO connections and service connections.

While at it, also remove an outdated comment.
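
The resulting pattern is roughly the following (sketch; see the diffs below
for the exact hunks). Requesting more than max_qp_wr from rdma_create_qp()
would typically make QP creation fail on devices with small limits, so every
budget is clamped:

  wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr;

  /* service connection: fixed budget, now clamped to the device limit */
  max_send_wr = min_t(int, wr_limit, SERVICE_CON_QUEUE_DEPTH * 2 + 2);
  max_recv_wr = max_send_wr;

  /* IO connection: queue-depth based budget, clamped the same way */
  max_send_wr = min_t(int, wr_limit, sess->queue_depth * 3 + 1);
  max_recv_wr = min_t(int, wr_limit, sess->queue_depth * 3 + 1);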

Suggested-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 29 +++++++++++++-------------
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 13 ++++--------
 2 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 67ff5bf9bfa8..125e0bead262 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1572,21 +1572,12 @@ static void destroy_con(struct rtrs_clt_con *con)
 static int create_con_cq_qp(struct rtrs_clt_con *con)
 {
 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
-	u32 max_send_wr, max_recv_wr, cq_num, max_send_sge;
+	u32 max_send_wr, max_recv_wr, cq_num, max_send_sge, wr_limit;
 	int err, cq_vector;
 	struct rtrs_msg_rkey_rsp *rsp;
 
 	lockdep_assert_held(&con->con_mutex);
 	if (con->c.cid == 0) {
-		/*
-		 * Two (request + registration) completion for send
-		 * Two for recv if always_invalidate is set on server
-		 * or one for recv.
-		 * + 2 for drain and heartbeat
-		 * in case qp gets into error state.
-		 */
-		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_send_sge = 1;
 		/* We must be the first here */
 		if (WARN_ON(sess->s.dev))
@@ -1606,6 +1597,17 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 		}
 		sess->s.dev_ref = 1;
 		query_fast_reg_mode(sess);
+		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr;
+		/*
+		 * Two (request + registration) completion for send
+		 * Two for recv if always_invalidate is set on server
+		 * or one for recv.
+		 * + 2 for drain and heartbeat
+		 * in case qp gets into error state.
+		 */
+		max_send_wr =
+			min_t(int, wr_limit, SERVICE_CON_QUEUE_DEPTH * 2 + 2);
+		max_recv_wr = max_send_wr;
 	} else {
 		/*
 		 * Here we assume that session members are correctly set.
@@ -1617,14 +1619,13 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 		if (WARN_ON(!sess->queue_depth))
 			return -EINVAL;
 
+		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr;
 		/* Shared between connections */
 		sess->s.dev_ref++;
-		max_send_wr =
-			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
+		max_send_wr = min_t(int, wr_limit,
 			      /* QD * (REQ + RSP + FR REGS or INVS) + drain */
 			      sess->queue_depth * 3 + 1);
-		max_recv_wr =
-			min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
+		max_recv_wr = min_t(int, wr_limit,
 			      sess->queue_depth * 3 + 1);
 		max_send_sge = sess->clt->max_segments + 1;
 	}
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index c10dfc296259..1a30fd833792 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1649,22 +1649,17 @@ static int create_con(struct rtrs_srv_sess *sess,
 	con->c.sess = &sess->s;
 	con->c.cid = cid;
 	atomic_set(&con->wr_cnt, 1);
+	wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr;
 
 	if (con->c.cid == 0) {
 		/*
 		 * All receive and all send (each requiring invalidate)
 		 * + 2 for drain and heartbeat
 		 */
-		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
+		max_send_wr = min_t(int, wr_limit,
+				    SERVICE_CON_QUEUE_DEPTH * 2 + 2);
+		max_recv_wr = max_send_wr;
 	} else {
-		/*
-		 * In theory we might have queue_depth * 32
-		 * outstanding requests if an unsafe global key is used
-		 * and we have queue_depth read requests each consisting
-		 * of 32 different addresses. div 3 for mlx5.
-		 */
-		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
 		/* when always_invalidate is enabled, we need linv+rinv+mr+imm */
 		if (always_invalidate)
 			max_send_wr =
-- 
2.25.1



* Re: [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr
  2021-06-11 12:10 ` [PATCHv2 for-next 1/5] RDMA/rtrs-srv: Set minimal max_send_wr and max_recv_wr Jack Wang
@ 2021-06-13  6:21   ` Leon Romanovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2021-06-13  6:21 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-rdma, bvanassche, dledford, jgg, haris.iqbal, Jack Wang,
	Md Haris Iqbal, Gioh Kim

On Fri, Jun 11, 2021 at 02:10:30PM +0200, Jack Wang wrote:
> From: Jack Wang <jinpu.wang@cloud.ionos.com>
> 
> Currently, rtrs uses coarse (generally too large) numbers when creating
> a QP, which leads the hardware to allocate more resources than needed
> and only wastes memory with no benefit.
> 
> For max_send_wr, we don't really need the max_qp_wr size when creating
> the QP; reduce it to cq_size.
> 
> For max_recv_wr, cq_size is enough.
> 
> With this patch, when sess_queue_depth=128, per-session (2 paths)
> memory consumption is reduced from 188 MB to 65 MB.
> 
> When always_invalidate is enabled, we need to send more WRs,
> so treat that case specially.
> 
> Fixes: 9cb837480424e ("RDMA/rtrs: server: main functionality")
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
> ---
>  drivers/infiniband/ulp/rtrs/rtrs-srv.c | 38 +++++++++++++++++---------
>  1 file changed, 25 insertions(+), 13 deletions(-)
> 

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCHv2 for-next 2/5] RDMA/rtrs-clt: Use minimal max_send_sge when create qp
  2021-06-11 12:10 ` [PATCHv2 for-next 2/5] RDMA/rtrs-clt: Use minimal max_send_sge when create qp Jack Wang
@ 2021-06-13  6:23   ` Leon Romanovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2021-06-13  6:23 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-rdma, bvanassche, dledford, jgg, haris.iqbal, Jack Wang,
	Guoqing Jiang

On Fri, Jun 11, 2021 at 02:10:31PM +0200, Jack Wang wrote:
> From: Jack Wang <jinpu.wang@cloud.ionos.com>
> 
> We currently use the device limit max_send_sge, which is suboptimal for
> memory usage. We don't need that much for the user connection; 1 is
> enough. And for IO connections, sess->max_segments + 1 is enough.
> 
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> ---
>  drivers/infiniband/ulp/rtrs/rtrs-clt.c | 14 ++++++++------
>  drivers/infiniband/ulp/rtrs/rtrs-clt.h |  1 -
>  2 files changed, 8 insertions(+), 7 deletions(-)
> 

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR
  2021-06-11 12:10 ` [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR Jack Wang
@ 2021-06-13  6:25   ` Leon Romanovsky
  2021-06-14  6:06     ` Jinpu Wang
  0 siblings, 1 reply; 12+ messages in thread
From: Leon Romanovsky @ 2021-06-13  6:25 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-rdma, bvanassche, dledford, jgg, haris.iqbal,
	Md Haris Iqbal, Jack Wang, Gioh Kim

On Fri, Jun 11, 2021 at 02:10:32PM +0200, Jack Wang wrote:
> From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> 
> When using rdma_rxe, post_one_recv() returns a NOMEM error due to a
> full receive queue. This patch increases the number of WRs for the
> receive queue to support all devices.
> 
> Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
> ---
>  drivers/infiniband/ulp/rtrs/rtrs-clt.c | 7 ++++---
>  drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 

NOMEM -> ENOMEM

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCHv2 for-next 4/5] RDMA/rtrs: Rename some variables to avoid confusion
  2021-06-11 12:10 ` [PATCHv2 for-next 4/5] RDMA/rtrs: Rename some variables to avoid confusion Jack Wang
@ 2021-06-13  6:32   ` Leon Romanovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2021-06-13  6:32 UTC (permalink / raw)
  To: Jack Wang
  Cc: linux-rdma, bvanassche, dledford, jgg, haris.iqbal,
	Guoqing Jiang, Md Haris Iqbal, Jack Wang

On Fri, Jun 11, 2021 at 02:10:33PM +0200, Jack Wang wrote:
> From: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> 
> These variables are passed to create_cq, create_qp, rtrs_iu_alloc and
> rtrs_iu_free, where the *_size suffix actually denotes a number of
> units, and cq_size likewise denotes the number of CQ elements; rename
> them to *_num to make that explicit.
> 
> Also move the setting of cq_num to the common path.
> 
> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> Reviewed-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> ---
>  drivers/infiniband/ulp/rtrs/rtrs-clt.c | 18 +++++++++---------
>  drivers/infiniband/ulp/rtrs/rtrs-clt.h |  2 +-
>  drivers/infiniband/ulp/rtrs/rtrs-pri.h | 10 +++++-----
>  drivers/infiniband/ulp/rtrs/rtrs-srv.c |  7 +++----
>  drivers/infiniband/ulp/rtrs/rtrs.c     | 24 ++++++++++++------------
>  5 files changed, 30 insertions(+), 31 deletions(-)
> 

The commit message is worth rewriting.

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCHv2 for-next 5/5] RDMA/rtrs: Check device max_qp_wr limit when create QP
  2021-06-11 12:10 ` [PATCHv2 for-next 5/5] RDMA/rtrs: Check device max_qp_wr limit when create QP Jack Wang
@ 2021-06-13  6:36   ` Leon Romanovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Leon Romanovsky @ 2021-06-13  6:36 UTC (permalink / raw)
  To: Jack Wang; +Cc: linux-rdma, bvanassche, dledford, jgg, haris.iqbal, Gioh Kim

On Fri, Jun 11, 2021 at 02:10:34PM +0200, Jack Wang wrote:
> Currently we only check the device max_qp_wr limit for IO connections,
> but not for service connections. We should check it for both.
> 
> So save the device max_qp_wr limit in wr_limit, and use it for both
> IO connections and service connections.
> 
> While at it, also remove an outdated comment.
> 
> Suggested-by: Leon Romanovsky <leonro@nvidia.com>
> Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
> Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
> ---
>  drivers/infiniband/ulp/rtrs/rtrs-clt.c | 29 +++++++++++++-------------
>  drivers/infiniband/ulp/rtrs/rtrs-srv.c | 13 ++++--------
>  2 files changed, 19 insertions(+), 23 deletions(-)
> 

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>


* Re: [PATCHv2 for-next 3/5] RDMA/rtrs: RDMA_RXE requires more number of WR
  2021-06-13  6:25   ` Leon Romanovsky
@ 2021-06-14  6:06     ` Jinpu Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Jinpu Wang @ 2021-06-14  6:06 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: RDMA mailing list, Bart Van Assche, Doug Ledford,
	Jason Gunthorpe, Haris Iqbal, Md Haris Iqbal, Jack Wang,
	Gioh Kim

On Sun, Jun 13, 2021 at 12:12 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Fri, Jun 11, 2021 at 02:10:32PM +0200, Jack Wang wrote:
> > From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> >
> > When using rdma_rxe, post_one_recv() returns a NOMEM error due to a
> > full receive queue. This patch increases the number of WRs for the
> > receive queue to support all devices.
> >
> > Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> > Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
> > ---
> >  drivers/infiniband/ulp/rtrs/rtrs-clt.c | 7 ++++---
> >  drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +-
> >  2 files changed, 5 insertions(+), 4 deletions(-)
> >
>
> NOMEM -> ENOMEM
>
> Thanks,
> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Thanks for the review; I will address all the comments in the next round.

Regards!

