* [V2 rdma-core 0/4] Broadcom's rdma provider lib update
@ 2021-05-05 17:10 Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use Devesh Sharma
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Devesh Sharma @ 2021-05-05 17:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: Devesh Sharma

This patch series is part of a bigger feature submission
that changes the wqe posting logic. The current series lays
the groundwork for changing the wqe posting algorithm.

v1->v2
- added Fixes tag
- updated patch description
- dropped if() check before free.

Devesh Sharma (4):
  bnxt_re/lib: Check AH handler validity before use
  bnxt_re/lib: align base sq entry structure to 16B boundary
  bnxt_re/lib: consolidate hwque and swque in common structure
  bnxt_re/lib: query device attributes only once and store

 providers/bnxt_re/bnxt_re-abi.h |  24 ++---
 providers/bnxt_re/db.c          |   6 +-
 providers/bnxt_re/main.c        |  31 +++---
 providers/bnxt_re/main.h        |  15 ++-
 providers/bnxt_re/verbs.c       | 182 +++++++++++++++++---------------
 5 files changed, 138 insertions(+), 120 deletions(-)

-- 
2.25.1



* [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use
  2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
@ 2021-05-05 17:10 ` Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 2/4] bnxt_re/lib: align base sq entry structure to 16B boundary Devesh Sharma
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2021-05-05 17:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: Devesh Sharma

The provider library should check the validity of the AH
handle before referencing it.

Fix the AH validity check so that it runs before the
UD SQE is initialized from the AH.

Fixes: 60ce22c59eaa ("Enable-UD-control-path-and-wqe-posting")
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/verbs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index ca561662..a015bed7 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1193,13 +1193,13 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 	int len;
 
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
-	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
-	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	if (!wr->wr.ud.ah) {
 		len = -EINVAL;
 		goto bail;
 	}
 	ah = to_bnxt_re_ah(wr->wr.ud.ah);
+	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
+	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	sqe->avid = htole32(ah->avid & 0xFFFFF);
 bail:
 	return len;
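The reordering above can be illustrated with a minimal, self-contained sketch. The structures and the builder below are hypothetical stand-ins, not the provider's actual types; the point is only the ordering: validate the AH pointer first, and only then touch the UD-specific SQE fields.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the provider structures. */
struct fake_ah { uint32_t avid; };
struct fake_ud_sqe { uint32_t qkey, dst_qp, avid; };

/* Sketch of the fixed ordering: bail out before writing any UD
 * fields if the caller passed a NULL address handle. */
static int build_ud_sqe(struct fake_ud_sqe *sqe, const struct fake_ah *ah,
                        uint32_t remote_qkey, uint32_t remote_qpn)
{
    if (!ah)                    /* check validity before any use */
        return -EINVAL;
    sqe->qkey = remote_qkey;
    sqe->dst_qp = remote_qpn;
    sqe->avid = ah->avid & 0xFFFFF;  /* only 20 bits are valid */
    return 0;
}
```

With the pre-patch ordering, qkey and dst_qp were written before the NULL check, so an invalid AH still left partially initialized fields behind; the sketch writes nothing on the error path.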
-- 
2.25.1



* [V2 rdma-core 2/4] bnxt_re/lib: align base sq entry structure to 16B boundary
  2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use Devesh Sharma
@ 2021-05-05 17:10 ` Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 3/4] bnxt_re/lib: consolidate hwque and swque in common structure Devesh Sharma
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2021-05-05 17:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: Devesh Sharma

Break the send_wqe and recv_wqe hardware-specific interface
into smaller chunks instead of fixed 128B chunks. This makes
both post-send and post-recv flexible.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/bnxt_re-abi.h | 24 ++++++++++--------------
 providers/bnxt_re/verbs.c       | 26 ++++++++++++--------------
 2 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/providers/bnxt_re/bnxt_re-abi.h b/providers/bnxt_re/bnxt_re-abi.h
index c6998e85..c82019e8 100644
--- a/providers/bnxt_re/bnxt_re-abi.h
+++ b/providers/bnxt_re/bnxt_re-abi.h
@@ -234,9 +234,16 @@ struct bnxt_re_term_cqe {
 	__le64 rsvd1;
 };
 
+union lower_shdr {
+	__le64 qkey_len;
+	__le64 lkey_plkey;
+	__le64 rva;
+};
+
 struct bnxt_re_bsqe {
 	__le32 rsv_ws_fl_wt;
 	__le32 key_immd;
+	union lower_shdr lhdr;
 };
 
 struct bnxt_re_psns {
@@ -262,42 +269,33 @@ struct bnxt_re_sge {
 #define BNXT_RE_MAX_INLINE_SIZE		0x60
 
 struct bnxt_re_send {
-	__le32 length;
-	__le32 qkey;
 	__le32 dst_qp;
 	__le32 avid;
 	__le64 rsvd;
 };
 
 struct bnxt_re_raw {
-	__le32 length;
-	__le32 rsvd1;
 	__le32 cfa_meta;
 	__le32 rsvd2;
 	__le64 rsvd3;
 };
 
 struct bnxt_re_rdma {
-	__le32 length;
-	__le32 rsvd1;
 	__le64 rva;
 	__le32 rkey;
 	__le32 rsvd2;
 };
 
 struct bnxt_re_atomic {
-	__le64 rva;
 	__le64 swp_dt;
 	__le64 cmp_dt;
 };
 
 struct bnxt_re_inval {
-	__le64 rsvd[3];
+	__le64 rsvd[2];
 };
 
 struct bnxt_re_bind {
-	__le32 plkey;
-	__le32 lkey;
 	__le64 va;
 	__le64 len; /* only 40 bits are valid */
 };
@@ -305,17 +303,15 @@ struct bnxt_re_bind {
 struct bnxt_re_brqe {
 	__le32 rsv_ws_fl_wt;
 	__le32 rsvd;
+	__le32 wrid;
+	__le32 rsvd1;
 };
 
 struct bnxt_re_rqe {
-	__le32 wrid;
-	__le32 rsvd1;
 	__le64 rsvd[2];
 };
 
 struct bnxt_re_srqe {
-	__le32 srq_tag; /* 20 bits are valid */
-	__le32 rsvd1;
 	__le64 rsvd[2];
 };
 #endif
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index a015bed7..760e840a 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1150,17 +1150,16 @@ static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, struct ibv_send_wr *wr,
 static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
 				  struct ibv_send_wr *wr, uint8_t is_inline)
 {
-	struct bnxt_re_bsqe *hdr = wqe;
-	struct bnxt_re_send *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
 	struct bnxt_re_sge *sge = ((void *)wqe + bnxt_re_get_sqe_hdr_sz());
+	struct bnxt_re_bsqe *hdr = wqe;
 	uint32_t wrlen, hdrval = 0;
-	int len;
 	uint8_t opcode, qesize;
+	int len;
 
 	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, is_inline);
 	if (len < 0)
 		return len;
-	sqe->length = htole32(len);
+	hdr->lhdr.qkey_len = htole64((uint64_t)len);
 
 	/* Fill Header */
 	opcode = bnxt_re_ibv_to_bnxt_wr_opcd(wr->opcode);
@@ -1189,7 +1188,9 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 				struct ibv_send_wr *wr, uint8_t is_inline)
 {
 	struct bnxt_re_send *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
+	struct bnxt_re_bsqe *hdr = wqe;
 	struct bnxt_re_ah *ah;
+	uint64_t qkey;
 	int len;
 
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
@@ -1198,7 +1199,8 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 		goto bail;
 	}
 	ah = to_bnxt_re_ah(wr->wr.ud.ah);
-	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
+	qkey = wr->wr.ud.remote_qkey;
+	hdr->lhdr.qkey_len |= htole64(qkey << 32);
 	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	sqe->avid = htole32(ah->avid & 0xFFFFF);
 bail:
@@ -1228,7 +1230,7 @@ static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp, void *wqe,
 
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
-	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
 	sqe->swp_dt = htole64(wr->wr.atomic.swap);
 
@@ -1245,7 +1247,7 @@ static int bnxt_re_build_fna_sqe(struct bnxt_re_qp *qp, void *wqe,
 
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
-	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
 
 	return len;
@@ -1368,13 +1370,11 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 			     void *rqe)
 {
 	struct bnxt_re_brqe *hdr = rqe;
-	struct bnxt_re_rqe *rwr;
-	struct bnxt_re_sge *sge;
 	struct bnxt_re_wrid *wrid;
+	struct bnxt_re_sge *sge;
 	int wqe_sz, len;
 	uint32_t hdrval;
 
-	rwr = (rqe + sizeof(struct bnxt_re_brqe));
 	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
 	wrid = &qp->rwrid[qp->rqq->tail];
 
@@ -1388,7 +1388,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	rwr->wrid = htole32(qp->rqq->tail);
+	hdr->wrid = htole32(qp->rqq->tail);
 
 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;
@@ -1586,13 +1586,11 @@ static int bnxt_re_build_srqe(struct bnxt_re_srq *srq,
 			      struct ibv_recv_wr *wr, void *srqe)
 {
 	struct bnxt_re_brqe *hdr = srqe;
-	struct bnxt_re_rqe *rwr;
 	struct bnxt_re_sge *sge;
 	struct bnxt_re_wrid *wrid;
 	int wqe_sz, len, next;
 	uint32_t hdrval = 0;
 
-	rwr = (srqe + sizeof(struct bnxt_re_brqe));
 	sge = (srqe + bnxt_re_get_srqe_hdr_sz());
 	next = srq->start_idx;
 	wrid = &srq->srwrid[next];
@@ -1602,7 +1600,7 @@ static int bnxt_re_build_srqe(struct bnxt_re_srq *srq,
 	wqe_sz = wr->num_sge + (bnxt_re_get_srqe_hdr_sz() >> 4); /* 16B align */
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	rwr->wrid = htole32((uint32_t)next);
+	hdr->wrid = htole32((uint32_t)next);
 
 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;
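A key detail in the diff above is that the new 16B base header packs the payload length and the UD qkey into one 64-bit word via the lower_shdr union: length in the low 32 bits, qkey in the high 32 bits. A minimal sketch of that packing (byte-order conversion with htole64 omitted for clarity; the helpers are illustrative, not provider code):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors: hdr->lhdr.qkey_len = len;
 *          hdr->lhdr.qkey_len |= (uint64_t)qkey << 32; */
static uint64_t pack_qkey_len(uint32_t len, uint32_t qkey)
{
    uint64_t v = (uint64_t)len;       /* low 32 bits: SGE payload length */
    v |= (uint64_t)qkey << 32;        /* high 32 bits: remote qkey */
    return v;
}

static uint32_t unpack_len(uint64_t v)  { return (uint32_t)v; }
static uint32_t unpack_qkey(uint64_t v) { return (uint32_t)(v >> 32); }
```

This is why bnxt_re_build_send_sqe can store the length first and bnxt_re_build_ud_sqe can OR in the qkey afterwards without clobbering it.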
-- 
2.25.1



* [V2 rdma-core 3/4] bnxt_re/lib: consolidate hwque and swque in common structure
  2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 2/4] bnxt_re/lib: align base sq entry structure to 16B boundary Devesh Sharma
@ 2021-05-05 17:10 ` Devesh Sharma
  2021-05-05 17:10 ` [V2 rdma-core 4/4] bnxt_re/lib: query device attributes only once and store Devesh Sharma
  2021-05-10  5:03 ` [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
  4 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2021-05-05 17:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: Devesh Sharma

Consolidate the hardware queue (hwque) and software queue
(swque) under a single bookkeeping data structure,
bnxt_re_joint_queue.

This eases hardware and software queue management and
further reduces the size of the bnxt_re_qp structure.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/db.c    |   6 +-
 providers/bnxt_re/main.h  |  13 ++--
 providers/bnxt_re/verbs.c | 131 +++++++++++++++++++++-----------------
 3 files changed, 85 insertions(+), 65 deletions(-)

diff --git a/providers/bnxt_re/db.c b/providers/bnxt_re/db.c
index 85da182e..3c797573 100644
--- a/providers/bnxt_re/db.c
+++ b/providers/bnxt_re/db.c
@@ -63,7 +63,8 @@ void bnxt_re_ring_rq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;
 
-	bnxt_re_init_db_hdr(&hdr, qp->rqq->tail, qp->qpid, BNXT_RE_QUE_TYPE_RQ);
+	bnxt_re_init_db_hdr(&hdr, qp->jrqq->hwque->tail,
+			    qp->qpid, BNXT_RE_QUE_TYPE_RQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }
 
@@ -71,7 +72,8 @@ void bnxt_re_ring_sq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;
 
-	bnxt_re_init_db_hdr(&hdr, qp->sqq->tail, qp->qpid, BNXT_RE_QUE_TYPE_SQ);
+	bnxt_re_init_db_hdr(&hdr, qp->jsqq->hwque->tail,
+			    qp->qpid, BNXT_RE_QUE_TYPE_SQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }
 
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index 368297e6..d470e30a 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -120,13 +120,18 @@ struct bnxt_re_srq {
 	bool arm_req;
 };
 
+struct bnxt_re_joint_queue {
+	struct bnxt_re_queue *hwque;
+	struct bnxt_re_wrid *swque;
+	uint32_t start_idx;
+	uint32_t last_idx;
+};
+
 struct bnxt_re_qp {
 	struct ibv_qp ibvqp;
 	struct bnxt_re_chip_ctx *cctx;
-	struct bnxt_re_queue *sqq;
-	struct bnxt_re_wrid *swrid;
-	struct bnxt_re_queue *rqq;
-	struct bnxt_re_wrid *rwrid;
+	struct bnxt_re_joint_queue *jsqq;
+	struct bnxt_re_joint_queue *jrqq;
 	struct bnxt_re_srq *srq;
 	struct bnxt_re_cq *scq;
 	struct bnxt_re_cq *rcq;
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 760e840a..59a57f72 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -242,7 +242,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 				     struct bnxt_re_bcqe *hdr,
 				     struct bnxt_re_req_cqe *scqe, int *cnt)
 {
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_context *cntx;
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
@@ -252,7 +252,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 
 	scq = to_bnxt_re_cq(qp->ibvqp.send_cq);
 	cntx = to_bnxt_re_context(scq->ibvcq.context);
-	swrid = &qp->swrid[head];
+	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;
 
 	*cnt = 1;
@@ -267,7 +267,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 			BNXT_RE_PSNS_OPCD_MASK;
 	ibvwc->byte_len = 0;
 
-	bnxt_re_incr_head(qp->sqq);
+	bnxt_re_incr_head(sq);
 
 	if (qp->qpst != IBV_QPS_ERR)
 		qp->qpst = IBV_QPS_ERR;
@@ -284,14 +284,14 @@ static uint8_t bnxt_re_poll_success_scqe(struct bnxt_re_qp *qp,
 					 struct bnxt_re_req_cqe *scqe,
 					 int *cnt)
 {
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
-	uint8_t pcqe = false;
 	uint32_t head = sq->head;
+	uint8_t pcqe = false;
 	uint32_t cindx;
 
-	swrid = &qp->swrid[head];
+	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;
 	cindx = le32toh(scqe->con_indx);
 
@@ -361,8 +361,8 @@ static int bnxt_re_poll_err_rcqe(struct bnxt_re_qp *qp, struct ibv_wc *ibvwc,
 	cntx = to_bnxt_re_context(rcq->ibvcq.context);
 
 	if (!qp->srq) {
-		rq = qp->rqq;
-		ibvwc->wr_id = qp->rwrid[rq->head].wrid;
+		rq = qp->jrqq->hwque;
+		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;
@@ -423,8 +423,8 @@ static void bnxt_re_poll_success_rcqe(struct bnxt_re_qp *qp,
 
 	rcqe = cqe;
 	if (!qp->srq) {
-		rq = qp->rqq;
-		ibvwc->wr_id = qp->rwrid[rq->head].wrid;
+		rq = qp->jrqq->hwque;
+		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;
@@ -648,13 +648,13 @@ static int bnxt_re_poll_flush_wqes(struct bnxt_re_cq *cq,
 			if (sq_list) {
 				qp = container_of(cur, struct bnxt_re_qp,
 						  snode);
-				que = qp->sqq;
-				wridp = qp->swrid;
+				que = qp->jsqq->hwque;
+				wridp = qp->jsqq->swque;
 			} else {
 				qp = container_of(cur, struct bnxt_re_qp,
 						  rnode);
-				que = qp->rqq;
-				wridp = qp->rwrid;
+				que = qp->jrqq->hwque;
+				wridp = qp->jrqq->swque;
 			}
 			if (bnxt_re_is_que_empty(que))
 				continue;
@@ -802,55 +802,66 @@ static int bnxt_re_check_qp_limits(struct bnxt_re_context *cntx,
 
 static void bnxt_re_free_queue_ptr(struct bnxt_re_qp *qp)
 {
-	if (qp->rqq)
-		free(qp->rqq);
-	if (qp->sqq)
-		free(qp->sqq);
+	free(qp->jrqq->hwque);
+	free(qp->jrqq);
+	free(qp->jsqq->hwque);
+	free(qp->jsqq);
 }
 
 static int bnxt_re_alloc_queue_ptr(struct bnxt_re_qp *qp,
 				   struct ibv_qp_init_attr *attr)
 {
-	qp->sqq = calloc(1, sizeof(struct bnxt_re_queue));
-	if (!qp->sqq)
-		return -ENOMEM;
+	int rc = -ENOMEM;
+
+	qp->jsqq = calloc(1, sizeof(struct bnxt_re_joint_queue));
+	if (!qp->jsqq)
+		return rc;
+	qp->jsqq->hwque = calloc(1, sizeof(struct bnxt_re_queue));
+	if (!qp->jsqq->hwque)
+		goto fail;
+
 	if (!attr->srq) {
-		qp->rqq = calloc(1, sizeof(struct bnxt_re_queue));
-		if (!qp->rqq) {
-			free(qp->sqq);
-			return -ENOMEM;
+		qp->jrqq = calloc(1, sizeof(struct bnxt_re_joint_queue));
+		if (!qp->jrqq) {
+			free(qp->jsqq);
+			goto fail;
 		}
+		qp->jrqq->hwque = calloc(1, sizeof(struct bnxt_re_queue));
+		if (!qp->jrqq->hwque)
+			goto fail;
 	}
 
 	return 0;
+fail:
+	bnxt_re_free_queue_ptr(qp);
+	return rc;
 }
 
 static void bnxt_re_free_queues(struct bnxt_re_qp *qp)
 {
-	if (qp->rqq) {
-		if (qp->rwrid)
-			free(qp->rwrid);
-		pthread_spin_destroy(&qp->rqq->qlock);
-		bnxt_re_free_aligned(qp->rqq);
+	if (qp->jrqq) {
+		free(qp->jrqq->swque);
+		pthread_spin_destroy(&qp->jrqq->hwque->qlock);
+		bnxt_re_free_aligned(qp->jrqq->hwque);
 	}
 
-	if (qp->swrid)
-		free(qp->swrid);
-	pthread_spin_destroy(&qp->sqq->qlock);
-	bnxt_re_free_aligned(qp->sqq);
+	free(qp->jsqq->swque);
+	pthread_spin_destroy(&qp->jsqq->hwque->qlock);
+	bnxt_re_free_aligned(qp->jsqq->hwque);
 }
 
 static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 				struct ibv_qp_init_attr *attr,
 				uint32_t pg_size) {
 	struct bnxt_re_psns_ext *psns_ext;
+	struct bnxt_re_wrid *swque;
 	struct bnxt_re_queue *que;
 	struct bnxt_re_psns *psns;
 	uint32_t psn_depth;
 	uint32_t psn_size;
 	int ret, indx;
 
-	que = qp->sqq;
+	que = qp->jsqq->hwque;
 	que->stride = bnxt_re_get_sqe_sz();
 	/* 8916 adjustment */
 	que->depth = roundup_pow_of_two(attr->cap.max_send_wr + 1 +
@@ -870,7 +881,7 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	 * is UD-qp. UD-qp use this memory to maintain WC-opcode.
 	 * See definition of bnxt_re_fill_psns() for the use case.
 	 */
-	ret = bnxt_re_alloc_aligned(qp->sqq, pg_size);
+	ret = bnxt_re_alloc_aligned(que, pg_size);
 	if (ret)
 		return ret;
 	/* exclude psns depth*/
@@ -878,36 +889,38 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	/* start of spsn space sizeof(struct bnxt_re_psns) each. */
 	psns = (que->va + que->stride * que->depth);
 	psns_ext = (struct bnxt_re_psns_ext *)psns;
-	pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
-	qp->swrid = calloc(que->depth, sizeof(struct bnxt_re_wrid));
-	if (!qp->swrid) {
+	swque = calloc(que->depth, sizeof(struct bnxt_re_wrid));
+	if (!swque) {
 		ret = -ENOMEM;
 		goto fail;
 	}
 
 	for (indx = 0 ; indx < que->depth; indx++, psns++)
-		qp->swrid[indx].psns = psns;
+		swque[indx].psns = psns;
 	if (bnxt_re_is_chip_gen_p5(qp->cctx)) {
 		for (indx = 0 ; indx < que->depth; indx++, psns_ext++) {
-			qp->swrid[indx].psns_ext = psns_ext;
-			qp->swrid[indx].psns = (struct bnxt_re_psns *)psns_ext;
+			swque[indx].psns_ext = psns_ext;
+			swque[indx].psns = (struct bnxt_re_psns *)psns_ext;
 		}
 	}
+	qp->jsqq->swque = swque;
 
 	qp->cap.max_swr = que->depth;
+	pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
 
-	if (qp->rqq) {
-		que = qp->rqq;
+	if (qp->jrqq) {
+		que = qp->jrqq->hwque;
 		que->stride = bnxt_re_get_rqe_sz();
 		que->depth = roundup_pow_of_two(attr->cap.max_recv_wr + 1);
 		que->diff = que->depth - attr->cap.max_recv_wr;
-		ret = bnxt_re_alloc_aligned(qp->rqq, pg_size);
+		ret = bnxt_re_alloc_aligned(que, pg_size);
 		if (ret)
 			goto fail;
 		pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
 		/* For RQ only bnxt_re_wri.wrid is used. */
-		qp->rwrid = calloc(que->depth, sizeof(struct bnxt_re_wrid));
-		if (!qp->rwrid) {
+		qp->jrqq->swque = calloc(que->depth,
+					 sizeof(struct bnxt_re_wrid));
+		if (!qp->jrqq->swque) {
 			ret = -ENOMEM;
 			goto fail;
 		}
@@ -946,8 +959,8 @@ struct ibv_qp *bnxt_re_create_qp(struct ibv_pd *ibvpd,
 		goto failq;
 	/* Fill ibv_cmd */
 	cap = &qp->cap;
-	req.qpsva = (uintptr_t)qp->sqq->va;
-	req.qprva = qp->rqq ? (uintptr_t)qp->rqq->va : 0;
+	req.qpsva = (uintptr_t)qp->jsqq->hwque->va;
+	req.qprva = qp->jrqq ? (uintptr_t)qp->jrqq->hwque->va : 0;
 	req.qp_handle = (uintptr_t)qp;
 
 	if (ibv_cmd_create_qp(ibvpd, &qp->ibvqp, attr, &req.ibv_cmd, sizeof(req),
@@ -995,11 +1008,11 @@ int bnxt_re_modify_qp(struct ibv_qp *ibvqp, struct ibv_qp_attr *attr,
 			qp->qpst = attr->qp_state;
 			/* transition to reset */
 			if (qp->qpst == IBV_QPS_RESET) {
-				qp->sqq->head = 0;
-				qp->sqq->tail = 0;
-				if (qp->rqq) {
-					qp->rqq->head = 0;
-					qp->rqq->tail = 0;
+				qp->jsqq->hwque->head = 0;
+				qp->jsqq->hwque->tail = 0;
+				if (qp->jrqq) {
+					qp->jrqq->hwque->head = 0;
+					qp->jrqq->hwque->tail = 0;
 				}
 			}
 		}
@@ -1257,7 +1270,7 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		      struct ibv_send_wr **bad)
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *wrid;
 	uint8_t is_inline = false;
 	struct bnxt_re_bsqe *hdr;
@@ -1289,7 +1302,7 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		}
 
 		sqe = (void *)(sq->va + (sq->tail * sq->stride));
-		wrid = &qp->swrid[sq->tail];
+		wrid = &qp->jsqq->swque[sq->tail];
 
 		memset(sqe, 0, bnxt_re_get_sqe_sz());
 		hdr = sqe;
@@ -1376,7 +1389,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	uint32_t hdrval;
 
 	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
-	wrid = &qp->rwrid[qp->rqq->tail];
+	wrid = &qp->jrqq->swque[qp->jrqq->hwque->tail];
 
 	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, false);
 	wqe_sz = wr->num_sge + (bnxt_re_get_rqe_hdr_sz() >> 4); /* 16B align */
@@ -1388,7 +1401,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	hdr->wrid = htole32(qp->rqq->tail);
+	hdr->wrid = htole32(qp->jrqq->hwque->tail);
 
 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;
@@ -1402,7 +1415,7 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 		      struct ibv_recv_wr **bad)
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
-	struct bnxt_re_queue *rq = qp->rqq;
+	struct bnxt_re_queue *rq = qp->jrqq->hwque;
 	void *rqe;
 	int ret;
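The ownership model introduced above, one joint structure owning both the hardware queue and the software wrid array with a single teardown path, can be sketched as follows. The types and helpers are hypothetical simplifications of bnxt_re_joint_queue, not the provider's code; note the v2 changelog item "dropped if() check before free" relies on free(NULL) being a no-op.

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical joint-queue bookkeeping structure. */
struct jq {
    void *hwque;   /* hardware queue descriptor */
    void *swque;   /* software wrid array */
};

/* One teardown path; free(NULL) is safe, so no if() checks needed. */
static void jq_free(struct jq *q)
{
    if (!q)
        return;
    free(q->swque);
    free(q->hwque);
    free(q);
}

static struct jq *jq_alloc(size_t hw_sz, size_t sw_sz)
{
    struct jq *q = calloc(1, sizeof(*q));
    if (!q)
        return NULL;
    q->hwque = calloc(1, hw_sz);
    q->swque = calloc(1, sw_sz);
    if (!q->hwque || !q->swque) {
        jq_free(q);               /* single cleanup path on failure */
        return NULL;
    }
    return q;
}
```

Compared with the pre-patch code, where qp->sqq, qp->swrid, qp->rqq and qp->rwrid were four independently freed pointers, grouping them pairwise makes the error paths in bnxt_re_alloc_queue_ptr shorter and harder to get wrong.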
 
-- 
2.25.1



* [V2 rdma-core 4/4] bnxt_re/lib: query device attributes only once and store
  2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
                   ` (2 preceding siblings ...)
  2021-05-05 17:10 ` [V2 rdma-core 3/4] bnxt_re/lib: consolidate hwque and swque in common structure Devesh Sharma
@ 2021-05-05 17:10 ` Devesh Sharma
  2021-05-10  5:03 ` [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
  4 siblings, 0 replies; 9+ messages in thread
From: Devesh Sharma @ 2021-05-05 17:10 UTC (permalink / raw)
  To: linux-rdma; +Cc: Devesh Sharma

Query device attributes only once, during context
initialization. The context structure stores the attributes
for future reference. This avoids multiple user-to-kernel
context switches during QP creation.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/main.c  | 31 ++++++++++++++++++-------------
 providers/bnxt_re/main.h  |  2 ++
 providers/bnxt_re/verbs.c | 25 +++++++++++--------------
 3 files changed, 31 insertions(+), 27 deletions(-)

diff --git a/providers/bnxt_re/main.c b/providers/bnxt_re/main.c
index a78e6b98..1779e1ec 100644
--- a/providers/bnxt_re/main.c
+++ b/providers/bnxt_re/main.c
@@ -129,10 +129,11 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 						   int cmd_fd,
 						   void *private_data)
 {
-	struct ibv_get_context cmd;
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(vdev);
 	struct ubnxt_re_cntx_resp resp;
-	struct bnxt_re_dev *dev = to_bnxt_re_dev(vdev);
 	struct bnxt_re_context *cntx;
+	struct ibv_get_context cmd;
+	int ret;
 
 	cntx = verbs_init_and_alloc_context(vdev, cmd_fd, cntx, ibvctx,
 					    RDMA_DRIVER_BNXT_RE);
@@ -146,9 +147,9 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 
 	cntx->dev_id = resp.dev_id;
 	cntx->max_qp = resp.max_qp;
-	dev->pg_size = resp.pg_size;
-	dev->cqe_size = resp.cqe_sz;
-	dev->max_cq_depth = resp.max_cqd;
+	rdev->pg_size = resp.pg_size;
+	rdev->cqe_size = resp.cqe_sz;
+	rdev->max_cq_depth = resp.max_cqd;
 	if (resp.comp_mask & BNXT_RE_UCNTX_CMASK_HAVE_CCTX) {
 		cntx->cctx.chip_num = resp.chip_id0 & 0xFFFF;
 		cntx->cctx.chip_rev = (resp.chip_id0 >>
@@ -159,7 +160,7 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 	}
 	pthread_spin_init(&cntx->fqlock, PTHREAD_PROCESS_PRIVATE);
 	/* mmap shared page. */
-	cntx->shpg = mmap(NULL, dev->pg_size, PROT_READ | PROT_WRITE,
+	cntx->shpg = mmap(NULL, rdev->pg_size, PROT_READ | PROT_WRITE,
 			  MAP_SHARED, cmd_fd, 0);
 	if (cntx->shpg == MAP_FAILED) {
 		cntx->shpg = NULL;
@@ -168,6 +169,10 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 	pthread_mutex_init(&cntx->shlock, NULL);
 
 	verbs_set_ops(&cntx->ibvctx, &bnxt_re_cntx_ops);
+	cntx->rdev = rdev;
+	ret = ibv_query_device(&cntx->ibvctx.context, &rdev->devattr);
+	if (ret)
+		goto failed;
 
 	return &cntx->ibvctx;
 
@@ -180,19 +185,19 @@ failed:
 static void bnxt_re_free_context(struct ibv_context *ibvctx)
 {
 	struct bnxt_re_context *cntx = to_bnxt_re_context(ibvctx);
-	struct bnxt_re_dev *dev = to_bnxt_re_dev(ibvctx->device);
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibvctx->device);
 
 	/* Unmap if anything device specific was mapped in init_context. */
 	pthread_mutex_destroy(&cntx->shlock);
 	if (cntx->shpg)
-		munmap(cntx->shpg, dev->pg_size);
+		munmap(cntx->shpg, rdev->pg_size);
 	pthread_spin_destroy(&cntx->fqlock);
 
 	/* Un-map DPI only for the first PD that was
 	 * allocated in this context.
 	 */
 	if (cntx->udpi.dbpage && cntx->udpi.dbpage != MAP_FAILED) {
-		munmap(cntx->udpi.dbpage, dev->pg_size);
+		munmap(cntx->udpi.dbpage, rdev->pg_size);
 		cntx->udpi.dbpage = NULL;
 	}
 
@@ -203,13 +208,13 @@ static void bnxt_re_free_context(struct ibv_context *ibvctx)
 static struct verbs_device *
 bnxt_re_device_alloc(struct verbs_sysfs_dev *sysfs_dev)
 {
-	struct bnxt_re_dev *dev;
+	struct bnxt_re_dev *rdev;
 
-	dev = calloc(1, sizeof(*dev));
-	if (!dev)
+	rdev = calloc(1, sizeof(*rdev));
+	if (!rdev)
 		return NULL;
 
-	return &dev->vdev;
+	return &rdev->vdev;
 }
 
 static const struct verbs_device_ops bnxt_re_dev_ops = {
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index d470e30a..a63719e8 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -166,10 +166,12 @@ struct bnxt_re_dev {
 
 	uint32_t cqe_size;
 	uint32_t max_cq_depth;
+	struct ibv_device_attr devattr;
 };
 
 struct bnxt_re_context {
 	struct verbs_context ibvctx;
+	struct bnxt_re_dev *rdev;
 	uint32_t dev_id;
 	uint32_t max_qp;
 	struct bnxt_re_chip_ctx cctx;
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 59a57f72..fb2cf5ac 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -777,25 +777,22 @@ int bnxt_re_arm_cq(struct ibv_cq *ibvcq, int flags)
 static int bnxt_re_check_qp_limits(struct bnxt_re_context *cntx,
 				   struct ibv_qp_init_attr *attr)
 {
-	struct ibv_device_attr devattr;
-	int ret;
+	struct ibv_device_attr *devattr;
+	struct bnxt_re_dev *rdev;
 
-	ret = bnxt_re_query_device(
-		&cntx->ibvctx.context, NULL,
-		container_of(&devattr, struct ibv_device_attr_ex, orig_attr),
-		sizeof(devattr));
-	if (ret)
-		return ret;
-	if (attr->cap.max_send_sge > devattr.max_sge)
+	rdev = cntx->rdev;
+	devattr = &rdev->devattr;
+
+	if (attr->cap.max_send_sge > devattr->max_sge)
 		return EINVAL;
-	if (attr->cap.max_recv_sge > devattr.max_sge)
+	if (attr->cap.max_recv_sge > devattr->max_sge)
 		return EINVAL;
 	if (attr->cap.max_inline_data > BNXT_RE_MAX_INLINE_SIZE)
 		return EINVAL;
-	if (attr->cap.max_send_wr > devattr.max_qp_wr)
-		attr->cap.max_send_wr = devattr.max_qp_wr;
-	if (attr->cap.max_recv_wr > devattr.max_qp_wr)
-		attr->cap.max_recv_wr = devattr.max_qp_wr;
+	if (attr->cap.max_send_wr > devattr->max_qp_wr)
+		attr->cap.max_send_wr = devattr->max_qp_wr;
+	if (attr->cap.max_recv_wr > devattr->max_qp_wr)
+		attr->cap.max_recv_wr = devattr->max_qp_wr;
 
 	return 0;
 }
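The effect of the change above is that the limit check at QP creation becomes a pure in-memory comparison against attributes cached at context init. A minimal sketch of that check (structure and field names are illustrative stand-ins, not the provider's; note SGE violations fail with EINVAL while oversized WR counts are silently clamped, matching the diff):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical cached subset of ibv_query_device() results. */
struct cached_attr { int max_sge; int max_qp_wr; };
struct qp_caps { int max_send_sge, max_recv_sge, max_send_wr, max_recv_wr; };

static int check_qp_limits(const struct cached_attr *devattr,
                           struct qp_caps *cap)
{
    if (cap->max_send_sge > devattr->max_sge ||
        cap->max_recv_sge > devattr->max_sge)
        return EINVAL;            /* SGE limits are hard errors */
    if (cap->max_send_wr > devattr->max_qp_wr)
        cap->max_send_wr = devattr->max_qp_wr;   /* clamp, don't fail */
    if (cap->max_recv_wr > devattr->max_qp_wr)
        cap->max_recv_wr = devattr->max_qp_wr;
    return 0;
}
```

Since the cached attributes never change for the lifetime of the context, this trades one ibv_query_device() call at init for zero kernel transitions on every subsequent QP create.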
-- 
2.25.1



* Re: [V2 rdma-core 0/4] Broadcom's rdma provider lib update
  2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
                   ` (3 preceding siblings ...)
  2021-05-05 17:10 ` [V2 rdma-core 4/4] bnxt_re/lib: query device attributes only once and store Devesh Sharma
@ 2021-05-10  5:03 ` Devesh Sharma
  2021-05-10  5:41   ` Leon Romanovsky
  4 siblings, 1 reply; 9+ messages in thread
From: Devesh Sharma @ 2021-05-10  5:03 UTC (permalink / raw)
  To: linux-rdma

On Wed, May 5, 2021 at 10:41 PM Devesh Sharma
<devesh.sharma@broadcom.com> wrote:
>
> This patch series is a part of bigger feature submission which
> changes the wqe posting logic. The current series adds the
> ground work in the direction to change the wqe posting algorithm.
>
> v1->v2
> - added Fixes tag
> - updated patch description
> - dropped if() check before free.
>
> Devesh Sharma (4):
>   bnxt_re/lib: Check AH handler validity before use
>   bnxt_re/lib: align base sq entry structure to 16B boundary
>   bnxt_re/lib: consolidate hwque and swque in common structure
>   bnxt_re/lib: query device attributes only once and store
>
>  providers/bnxt_re/bnxt_re-abi.h |  24 ++---
>  providers/bnxt_re/db.c          |   6 +-
>  providers/bnxt_re/main.c        |  31 +++---
>  providers/bnxt_re/main.h        |  15 ++-
>  providers/bnxt_re/verbs.c       | 182 +++++++++++++++++---------------
>  5 files changed, 138 insertions(+), 120 deletions(-)
>
> --
> 2.25.1
>
Hello maintainers, could you bless the V2 if there are no
more comments/suggestions?

-- 
-Regards
Devesh


* Re: [V2 rdma-core 0/4] Broadcom's rdma provider lib update
  2021-05-10  5:03 ` [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
@ 2021-05-10  5:41   ` Leon Romanovsky
  2021-05-10  6:25     ` Devesh Sharma
  0 siblings, 1 reply; 9+ messages in thread
From: Leon Romanovsky @ 2021-05-10  5:41 UTC (permalink / raw)
  To: Devesh Sharma; +Cc: linux-rdma

On Mon, May 10, 2021 at 10:33:48AM +0530, Devesh Sharma wrote:
> On Wed, May 5, 2021 at 10:41 PM Devesh Sharma
> <devesh.sharma@broadcom.com> wrote:
> >
> > This patch series is a part of bigger feature submission which
> > changes the wqe posting logic. The current series adds the
> > ground work in the direction to change the wqe posting algorithm.
> >
> > v1->v2
> > - added Fixes tag
> > - updated patch description
> > - dropped if() check before free.
> >
> > Devesh Sharma (4):
> >   bnxt_re/lib: Check AH handler validity before use
> >   bnxt_re/lib: align base sq entry structure to 16B boundary
> >   bnxt_re/lib: consolidate hwque and swque in common structure
> >   bnxt_re/lib: query device attributes only once and store
> >
> >  providers/bnxt_re/bnxt_re-abi.h |  24 ++---
> >  providers/bnxt_re/db.c          |   6 +-
> >  providers/bnxt_re/main.c        |  31 +++---
> >  providers/bnxt_re/main.h        |  15 ++-
> >  providers/bnxt_re/verbs.c       | 182 +++++++++++++++++---------------
> >  5 files changed, 138 insertions(+), 120 deletions(-)
> >
> > --
> > 2.25.1
> >
> Hello maintainers, could you bless the V2 if there are no more
> comments/suggestions?

I planned to take it after rdma-core release (today/tomorrow).

Thanks

> 
> -- 
> -Regards
> Devesh




* Re: [V2 rdma-core 0/4] Broadcom's rdma provider lib update
  2021-05-10  5:41   ` Leon Romanovsky
@ 2021-05-10  6:25     ` Devesh Sharma
  2021-05-10  9:50       ` Leon Romanovsky
  0 siblings, 1 reply; 9+ messages in thread
From: Devesh Sharma @ 2021-05-10  6:25 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma

[-- Attachment #1: Type: text/plain, Size: 1456 bytes --]

On Mon, May 10, 2021 at 11:11 AM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Mon, May 10, 2021 at 10:33:48AM +0530, Devesh Sharma wrote:
> > On Wed, May 5, 2021 at 10:41 PM Devesh Sharma
> > <devesh.sharma@broadcom.com> wrote:
> > >
> > > This patch series is part of a bigger feature submission that
> > > changes the wqe posting logic. The current series lays the
> > > groundwork for changing the wqe posting algorithm.
> > >
> > > v1->v2
> > > - added Fixes tag
> > > - updated patch description
> > > - dropped if() check before free.
> > >
> > > Devesh Sharma (4):
> > >   bnxt_re/lib: Check AH handler validity before use
> > >   bnxt_re/lib: align base sq entry structure to 16B boundary
> > >   bnxt_re/lib: consolidate hwque and swque in common structure
> > >   bnxt_re/lib: query device attributes only once and store
> > >
> > >  providers/bnxt_re/bnxt_re-abi.h |  24 ++---
> > >  providers/bnxt_re/db.c          |   6 +-
> > >  providers/bnxt_re/main.c        |  31 +++---
> > >  providers/bnxt_re/main.h        |  15 ++-
> > >  providers/bnxt_re/verbs.c       | 182 +++++++++++++++++---------------
> > >  5 files changed, 138 insertions(+), 120 deletions(-)
> > >
> > > --
> > > 2.25.1
> > >
> > Hello Maintainers, Could you bless the V2 if there are no more
> > comments/suggestions...
>
> I planned to take it after rdma-core release (today/tomorrow).
>
> Thanks
Sure, Thanks.
>
> >
> > --
> > -Regards
> > Devesh
>
>



* Re: [V2 rdma-core 0/4] Broadcom's rdma provider lib update
  2021-05-10  6:25     ` Devesh Sharma
@ 2021-05-10  9:50       ` Leon Romanovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2021-05-10  9:50 UTC (permalink / raw)
  To: Devesh Sharma; +Cc: linux-rdma

On Mon, May 10, 2021 at 11:55:24AM +0530, Devesh Sharma wrote:
> On Mon, May 10, 2021 at 11:11 AM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > On Mon, May 10, 2021 at 10:33:48AM +0530, Devesh Sharma wrote:
> > > On Wed, May 5, 2021 at 10:41 PM Devesh Sharma
> > > <devesh.sharma@broadcom.com> wrote:
> > > >
> > > > This patch series is part of a bigger feature submission that
> > > > changes the wqe posting logic. The current series lays the
> > > > groundwork for changing the wqe posting algorithm.
> > > >
> > > > v1->v2
> > > > - added Fixes tag
> > > > - updated patch description
> > > > - dropped if() check before free.
> > > >
> > > > Devesh Sharma (4):
> > > >   bnxt_re/lib: Check AH handler validity before use
> > > >   bnxt_re/lib: align base sq entry structure to 16B boundary
> > > >   bnxt_re/lib: consolidate hwque and swque in common structure
> > > >   bnxt_re/lib: query device attributes only once and store
> > > >
> > > >  providers/bnxt_re/bnxt_re-abi.h |  24 ++---
> > > >  providers/bnxt_re/db.c          |   6 +-
> > > >  providers/bnxt_re/main.c        |  31 +++---
> > > >  providers/bnxt_re/main.h        |  15 ++-
> > > >  providers/bnxt_re/verbs.c       | 182 +++++++++++++++++---------------
> > > >  5 files changed, 138 insertions(+), 120 deletions(-)
> > > >
> > > > --
> > > > 2.25.1
> > > >
> > > Hello maintainers, could you bless the V2 if there are no more
> > > comments/suggestions?
> >
> > I planned to take it after rdma-core release (today/tomorrow).
> >
> > Thanks
> Sure, Thanks.

Applied, thanks

> >
> > >
> > > --
> > > -Regards
> > > Devesh
> >
> >




end of thread, other threads:[~2021-05-10  9:50 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-05 17:10 [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
2021-05-05 17:10 ` [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use Devesh Sharma
2021-05-05 17:10 ` [V2 rdma-core 2/4] bnxt_re/lib: align base sq entry structure to 16B boundary Devesh Sharma
2021-05-05 17:10 ` [V2 rdma-core 3/4] bnxt_re/lib: consolidate hwque and swque in common structure Devesh Sharma
2021-05-05 17:10 ` [V2 rdma-core 4/4] bnxt_re/lib: query device attributes only once and store Devesh Sharma
2021-05-10  5:03 ` [V2 rdma-core 0/4] Broadcom's rdma provider lib update Devesh Sharma
2021-05-10  5:41   ` Leon Romanovsky
2021-05-10  6:25     ` Devesh Sharma
2021-05-10  9:50       ` Leon Romanovsky
