* [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists allocations
@ 2021-05-11 10:36 Leon Romanovsky
  2021-05-11 10:59 ` Haakon Bugge
                   ` (2 more replies)
  0 siblings, 3 replies; 38+ messages in thread
From: Leon Romanovsky @ 2021-05-11 10:36 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe, Dennis Dalessandro
  Cc: Leon Romanovsky, linux-kernel, linux-rdma, Mike Marciniszyn

From: Leon Romanovsky <leonro@nvidia.com>

The rdmavt QP has fields that are needed for both the control and the data
path. Such a mixed declaration forced a very specific allocation flow, with
kzalloc_node() and the SGE list embedded into struct rvt_qp.

This patch splits QP creation in two: a regular memory allocation for the
control path and dedicated code for the SGE list, which is now accessed
through a dereferenced pointer.

That pointer and its context are expected to be in the cache, so any
performance difference should be negligible.
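
For illustration only (not part of the patch): C array indexing reads the
same through a pointer as through a flexible array member, so the data-path
code that walks the SGE list is untouched; only the allocation and free
sites change. A minimal sketch of the pattern, with hypothetical names:

#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical, simplified stand-ins for struct rvt_sge / struct rvt_qp. */
struct demo_sge {
	void *addr;
	u32 length;
};

struct demo_qp {
	u32 r_max_sge;
	struct demo_sge *r_sg_list;	/* was: struct demo_sge r_sg_list[]; */
};

static struct demo_qp *demo_qp_alloc(u32 nr_sge, int node)
{
	struct demo_qp *qp;

	/* Control-path state: plain, fixed-size allocation. */
	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
	if (!qp)
		return NULL;

	/* Data-path SGE list: separate, NUMA-aware allocation. */
	qp->r_sg_list = kzalloc_node(nr_sge * sizeof(*qp->r_sg_list),
				     GFP_KERNEL, node);
	if (!qp->r_sg_list) {
		kfree(qp);
		return NULL;
	}
	qp->r_max_sge = nr_sge;

	/* Use sites stay as qp->r_sg_list[i] either way. */
	return qp;
}

static void demo_qp_free(struct demo_qp *qp)
{
	kfree(qp->r_sg_list);	/* free the list before the QP itself */
	kfree(qp);
}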

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
Hi,

This change is needed to convert the QP to the core allocation scheme. In
that scheme, the QP is allocated outside of the driver, and the size of the
allocation is constant and can be calculated at compile time.
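
For illustration only, a sketch of why the split helps here (hypothetical
names): with an embedded flexible array the object size depends on a
run-time parameter that only the driver knows, while with a pointer
sizeof() is a compile-time constant, so common code outside the driver can
size the QP allocation itself:

#include <linux/slab.h>
#include <linux/types.h>

struct demo_sge {
	void *addr;
	u32 length;
};

/* Embedded list: the real object size is
 * sizeof(struct demo_qp_embedded) + nr_sge * sizeof(struct demo_sge),
 * known only at QP-create time inside the driver.
 */
struct demo_qp_embedded {
	u32 r_max_sge;
	struct demo_sge r_sg_list[];
};

/* Pointer: sizeof(struct demo_qp_split) is a compile-time constant, so
 * core code can allocate the QP and let the driver attach the SGE list
 * afterwards.
 */
struct demo_qp_split {
	u32 r_max_sge;
	struct demo_sge *r_sg_list;
};

static struct demo_qp_split *demo_core_create_qp(void)
{
	return kzalloc(sizeof(struct demo_qp_split), GFP_KERNEL);
}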

Thanks
---
 drivers/infiniband/sw/rdmavt/qp.c | 13 ++++++++-----
 include/rdma/rdmavt_qp.h          |  2 +-
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index 9d13db68283c..4522071fc220 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -1077,7 +1077,7 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
 	int err;
 	struct rvt_swqe *swq = NULL;
 	size_t sz;
-	size_t sg_list_sz;
+	size_t sg_list_sz = 0;
 	struct ib_qp *ret = ERR_PTR(-ENOMEM);
 	struct rvt_dev_info *rdi = ib_to_rvt(ibpd->device);
 	void *priv = NULL;
@@ -1125,8 +1125,6 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
 		if (!swq)
 			return ERR_PTR(-ENOMEM);
 
-		sz = sizeof(*qp);
-		sg_list_sz = 0;
 		if (init_attr->srq) {
 			struct rvt_srq *srq = ibsrq_to_rvtsrq(init_attr->srq);
 
@@ -1136,10 +1134,13 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
 		} else if (init_attr->cap.max_recv_sge > 1)
 			sg_list_sz = sizeof(*qp->r_sg_list) *
 				(init_attr->cap.max_recv_sge - 1);
-		qp = kzalloc_node(sz + sg_list_sz, GFP_KERNEL,
-				  rdi->dparms.node);
+		qp = kzalloc(sizeof(*qp), GFP_KERNEL);
 		if (!qp)
 			goto bail_swq;
+		qp->r_sg_list =
+			kzalloc_node(sg_list_sz, GFP_KERNEL, rdi->dparms.node);
+		if (!qp->r_sg_list)
+			goto bail_qp;
 		qp->allowed_ops = get_allowed_ops(init_attr->qp_type);
 
 		RCU_INIT_POINTER(qp->next, NULL);
@@ -1327,6 +1328,7 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
 
 bail_qp:
 	kfree(qp->s_ack_queue);
+	kfree(qp->r_sg_list);
 	kfree(qp);
 
 bail_swq:
@@ -1761,6 +1763,7 @@ int rvt_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 	kvfree(qp->r_rq.kwq);
 	rdi->driver_f.qp_priv_free(rdi, qp);
 	kfree(qp->s_ack_queue);
+	kfree(qp->r_sg_list);
 	rdma_destroy_ah_attr(&qp->remote_ah_attr);
 	rdma_destroy_ah_attr(&qp->alt_ah_attr);
 	free_ud_wq_attr(qp);
diff --git a/include/rdma/rdmavt_qp.h b/include/rdma/rdmavt_qp.h
index 8275954f5ce6..2e58d5e6ac0e 100644
--- a/include/rdma/rdmavt_qp.h
+++ b/include/rdma/rdmavt_qp.h
@@ -444,7 +444,7 @@ struct rvt_qp {
 	/*
 	 * This sge list MUST be last. Do not add anything below here.
 	 */
-	struct rvt_sge r_sg_list[] /* verified SGEs */
+	struct rvt_sge *r_sg_list /* verified SGEs */
 		____cacheline_aligned_in_smp;
 };
 
-- 
2.31.1




Thread overview: 38+ messages
2021-05-11 10:36 [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists allocations Leon Romanovsky
2021-05-11 10:59 ` Haakon Bugge
2021-05-11 12:34   ` Leon Romanovsky
2021-05-11 19:15     ` Marciniszyn, Mike
2021-05-11 19:27       ` Leon Romanovsky
2021-05-11 19:39         ` Marciniszyn, Mike
2021-05-12  4:08         ` Dennis Dalessandro
2021-05-12 12:13           ` Leon Romanovsky
2021-05-12 12:45             ` Dennis Dalessandro
2021-05-11 12:26 ` Dennis Dalessandro
2021-05-11 12:34   ` Leon Romanovsky
2021-05-12 12:25     ` Marciniszyn, Mike
2021-05-12 12:50       ` Leon Romanovsky
2021-05-13 19:03         ` Dennis Dalessandro
2021-05-13 19:15           ` Jason Gunthorpe
2021-05-13 19:31             ` Dennis Dalessandro
2021-05-14 13:02               ` Jason Gunthorpe
2021-05-14 14:07                 ` Dennis Dalessandro
2021-05-14 14:35                   ` Jason Gunthorpe
2021-05-14 15:00                     ` Marciniszyn, Mike
2021-05-14 15:02                       ` Jason Gunthorpe
2021-05-19  7:50                         ` Leon Romanovsky
2021-05-19 11:56                           ` Dennis Dalessandro
2021-05-19 18:29                             ` Jason Gunthorpe
2021-05-19 19:49                               ` Dennis Dalessandro
2021-05-19 20:26                                 ` Jason Gunthorpe
2021-05-20 22:02                                   ` Dennis Dalessandro
2021-05-21  6:29                                     ` Leon Romanovsky
2021-05-25 13:13                                     ` Jason Gunthorpe
2021-05-25 14:10                                       ` Dennis Dalessandro
2021-05-25 14:20                                         ` Jason Gunthorpe
2021-05-25 14:29                                           ` Dennis Dalessandro
2021-06-28 21:59                                           ` Marciniszyn, Mike
2021-06-28 23:19                                             ` Jason Gunthorpe
2021-07-04  6:34                                               ` Leon Romanovsky
2021-06-02  4:33                                         ` Leon Romanovsky
2021-05-16 10:56           ` Leon Romanovsky
2021-05-12 12:23 ` Marciniszyn, Mike
