linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes
@ 2020-10-26 13:19 Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 1/7] RDMA/mlx5: Remove mlx5_ib_mr->order Leon Romanovsky
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, Artemy Kovalyov, linux-rdma, Saeed Mahameed

From: Leon Romanovsky <leonro@nvidia.com>

From Jason:

The MR code computes and passes around a tuple of (npages, page_shift,
ncont, order) to represent the size of the MR in various terms.

It is quite confusing which term refers to what, and the tuple overlaps
with data already stored and computed inside the umem. Rework all of this
to be more umem-centric and use these identities instead:

      npages == ib_umem_num_pages(mr->umem)
      page_shift == mr->page_shift
      ncont == ib_umem_num_dma_blocks(mr->umem, 1 << mr->page_shift)
      order == order_base_2(ncont)

By storing the page_shift inside the mlx5_ib_mr it becomes nearly
self-describing.
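
As a rough sketch only (not part of the series), the old tuple then
collapses to umem-derived expressions once mr->page_shift is set:

      /* old "npages": number of PAGE_SIZE pages backing the MR */
      size_t npages = ib_umem_num_pages(mr->umem);
      /* old "ncont": number of HW-page-sized DMA blocks */
      size_t ncont = ib_umem_num_dma_blocks(mr->umem, 1UL << mr->page_shift);
      /* old "order": log2 of ncont, rounded up */
      unsigned int order = order_base_2(ncont);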

Thanks

Jason Gunthorpe (7):
  RDMA/mlx5: Remove mlx5_ib_mr->order
  RDMA/mlx5: Fix corruption of reg_pages in mlx5_ib_rereg_user_mr()
  RDMA/mlx5: Remove mlx5_ib_mr->npages
  RDMA/mlx5: Move mlx5_ib_cont_pages() to the creation of the mlx5_ib_mr
  RDMA/mlx5: Remove order from mlx5_ib_cont_pages()
  RDMA/mlx5: Remove ncont from mlx5_ib_cont_pages()
  RDMA/mlx5: Remove npages from mlx5_ib_cont_pages()

 drivers/infiniband/hw/mlx5/cq.c      |  39 ++++---
 drivers/infiniband/hw/mlx5/devx.c    |  20 ++--
 drivers/infiniband/hw/mlx5/mem.c     |  30 ++---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |   6 +-
 drivers/infiniband/hw/mlx5/mr.c      | 168 ++++++++++++---------------
 drivers/infiniband/hw/mlx5/qp.c      |  39 +++----
 drivers/infiniband/hw/mlx5/srq.c     |   8 +-
 7 files changed, 134 insertions(+), 176 deletions(-)

--
2.26.2


* [PATCH rdma-next 1/7] RDMA/mlx5: Remove mlx5_ib_mr->order
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH mlx5-next 2/7] RDMA/mlx5: Fix corruption of reg_pages in mlx5_ib_rereg_user_mr() Leon Romanovsky
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

This field is only ever set to non-zero if the MR came from the cache, and
if it is cached then the order is already in cache_ent->order.

Make it clearer that use_umr_mtt_update() only returns true for cached MRs
and remove the redundant data.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 -
 drivers/infiniband/hw/mlx5/mr.c      | 5 +++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b1f2b34e5955..93310dda01b6 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -601,7 +601,6 @@ struct mlx5_ib_mr {
 	struct ib_umem	       *umem;
 	struct mlx5_shared_mr_info	*smr_info;
 	struct list_head	list;
-	unsigned int		order;
 	struct mlx5_cache_ent  *cache_ent;
 	int			npages;
 	struct mlx5_ib_dev     *dev;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 910120b551c5..c9be69fcc1ea 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -126,7 +126,9 @@ static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
 static inline bool mlx5_ib_pas_fits_in_mr(struct mlx5_ib_mr *mr, u64 start,
 					  u64 length)
 {
-	return ((u64)1 << mr->order) * MLX5_ADAPTER_PAGE_SIZE >=
+	if (!mr->cache_ent)
+		return false;
+	return ((u64)1 << mr->cache_ent->order) * MLX5_ADAPTER_PAGE_SIZE >=
 		length + (start & (MLX5_ADAPTER_PAGE_SIZE - 1));
 }
 
@@ -172,7 +174,6 @@ static struct mlx5_ib_mr *alloc_cache_mr(struct mlx5_cache_ent *ent, void *mkc)
 	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
 	if (!mr)
 		return NULL;
-	mr->order = ent->order;
 	mr->cache_ent = ent;
 	mr->dev = ent->dev;
 
-- 
2.26.2


* [PATCH mlx5-next 2/7] RDMA/mlx5: Fix corruption of reg_pages in mlx5_ib_rereg_user_mr()
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 1/7] RDMA/mlx5: Remove mlx5_ib_mr->order Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 3/7] RDMA/mlx5: Remove mlx5_ib_mr->npages Leon Romanovsky
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: Artemy Kovalyov, linux-rdma, Saeed Mahameed

From: Jason Gunthorpe <jgg@nvidia.com>

reg_pages should always contain mr->npages, since when the MR is finally
de-registered that value is always subtracted out.

If mlx5_ib_rereg_user_mr() took any of its error exits it would leave
reg_pages adjusted, which would eventually cause the count to be double
subtracted.

The manipulation of reg_pages is inherently connected to the umem, so lift
it out of set_mr_fields() and only adjust it around creating/destroying a
umem.

reg_pages is only used for diagnostics in sysfs.
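
A condensed sketch of the accounting pattern after this change
(illustrative only, the real hunks are below): the add/sub pair brackets
the lifetime of a single umem, so an error exit can no longer leave
reg_pages skewed:

	/* after a umem is created for the MR */
	mr->npages = ncont;
	atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
	...
	/* just before that umem is released */
	atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);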

Fixes: 7d0cc6edcc70 ("IB/mlx5: Add MR cache for large UMR regions")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mr.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index c9be69fcc1ea..769c5ae4894a 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1248,10 +1248,8 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 }
 
 static void set_mr_fields(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
-			  int npages, u64 length, int access_flags)
+			  u64 length, int access_flags)
 {
-	mr->npages = npages;
-	atomic_add(npages, &dev->mdev->priv.reg_pages);
 	mr->ibmr.lkey = mr->mmkey.key;
 	mr->ibmr.rkey = mr->mmkey.key;
 	mr->ibmr.length = length;
@@ -1291,8 +1289,7 @@ static struct ib_mr *mlx5_ib_get_dm_mr(struct ib_pd *pd, u64 start_addr,
 
 	kfree(in);
 
-	mr->umem = NULL;
-	set_mr_fields(dev, mr, 0, length, acc);
+	set_mr_fields(dev, mr, length, acc);
 
 	return &mr->ibmr;
 
@@ -1421,7 +1418,9 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key);
 
 	mr->umem = umem;
-	set_mr_fields(dev, mr, npages, length, access_flags);
+	mr->npages = npages;
+	atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
+	set_mr_fields(dev, mr, length, access_flags);
 
 	if (xlt_with_umr && !(access_flags & IB_ACCESS_ON_DEMAND)) {
 		/*
@@ -1533,8 +1532,6 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 	mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
 		    start, virt_addr, length, access_flags);
 
-	atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
-
 	if (!mr->umem)
 		return -EINVAL;
 
@@ -1555,12 +1552,17 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		 * used.
 		 */
 		flags |= IB_MR_REREG_TRANS;
+		atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
+		mr->npages = 0;
 		ib_umem_release(mr->umem);
 		mr->umem = NULL;
+
 		err = mr_umem_get(dev, addr, len, access_flags, &mr->umem,
 				  &npages, &page_shift, &ncont, &order);
 		if (err)
 			goto err;
+		mr->npages = ncont;
+		atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
 	}
 
 	if (!mlx5_ib_can_reconfig_with_umr(dev, mr->access_flags,
@@ -1611,7 +1613,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 			goto err;
 	}
 
-	set_mr_fields(dev, mr, npages, len, access_flags);
+	set_mr_fields(dev, mr, len, access_flags);
 
 	return 0;
 
-- 
2.26.2


* [PATCH rdma-next 3/7] RDMA/mlx5: Remove mlx5_ib_mr->npages
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 1/7] RDMA/mlx5: Remove mlx5_ib_mr->order Leon Romanovsky
  2020-10-26 13:19 ` [PATCH mlx5-next 2/7] RDMA/mlx5: Fix corruption of reg_pages in mlx5_ib_rereg_user_mr() Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 4/7] RDMA/mlx5: Move mlx5_ib_cont_pages() to the creation of the mlx5_ib_mr Leon Romanovsky
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

This is the same value as ib_umem_num_pages(mr->umem), use that instead.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  1 -
 drivers/infiniband/hw/mlx5/mr.c      | 26 ++++++++++++++------------
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 93310dda01b6..0eaf992a4e15 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -602,7 +602,6 @@ struct mlx5_ib_mr {
 	struct mlx5_shared_mr_info	*smr_info;
 	struct list_head	list;
 	struct mlx5_cache_ent  *cache_ent;
-	int			npages;
 	struct mlx5_ib_dev     *dev;
 	u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
 	struct mlx5_core_sig_ctx    *sig;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 769c5ae4894a..c76134219bf2 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1418,8 +1418,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key);
 
 	mr->umem = umem;
-	mr->npages = npages;
-	atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
+	atomic_add(ib_umem_num_pages(mr->umem), &dev->mdev->priv.reg_pages);
 	set_mr_fields(dev, mr, length, access_flags);
 
 	if (xlt_with_umr && !(access_flags & IB_ACCESS_ON_DEMAND)) {
@@ -1552,8 +1551,8 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		 * used.
 		 */
 		flags |= IB_MR_REREG_TRANS;
-		atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
-		mr->npages = 0;
+		atomic_sub(ib_umem_num_pages(mr->umem),
+			   &dev->mdev->priv.reg_pages);
 		ib_umem_release(mr->umem);
 		mr->umem = NULL;
 
@@ -1561,8 +1560,8 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 				  &npages, &page_shift, &ncont, &order);
 		if (err)
 			goto err;
-		mr->npages = ncont;
-		atomic_add(mr->npages, &dev->mdev->priv.reg_pages);
+		atomic_add(ib_umem_num_pages(mr->umem),
+			   &dev->mdev->priv.reg_pages);
 	}
 
 	if (!mlx5_ib_can_reconfig_with_umr(dev, mr->access_flags,
@@ -1695,23 +1694,26 @@ static void clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
 
 static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
 {
-	int npages = mr->npages;
 	struct ib_umem *umem = mr->umem;
+	bool is_odp = is_odp_mr(mr);
 
 	/* Stop all DMA */
-	if (is_odp_mr(mr))
+	if (is_odp)
 		mlx5_ib_fence_odp_mr(mr);
 	else
 		clean_mr(dev, mr);
 
+	if (umem) {
+		if (!is_odp)
+			atomic_sub(ib_umem_num_pages(umem),
+				   &dev->mdev->priv.reg_pages);
+		ib_umem_release(umem);
+	}
+
 	if (mr->cache_ent)
 		mlx5_mr_cache_free(dev, mr);
 	else
 		kfree(mr);
-
-	ib_umem_release(umem);
-	atomic_sub(npages, &dev->mdev->priv.reg_pages);
-
 }
 
 int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
-- 
2.26.2


* [PATCH rdma-next 4/7] RDMA/mlx5: Move mlx5_ib_cont_pages() to the creation of the mlx5_ib_mr
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
                   ` (2 preceding siblings ...)
  2020-10-26 13:19 ` [PATCH rdma-next 3/7] RDMA/mlx5: Remove mlx5_ib_mr->npages Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 5/7] RDMA/mlx5: Remove order from mlx5_ib_cont_pages() Leon Romanovsky
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

For the user MR path, instead of calling this after getting the umem, call
it as part of creating the struct mlx5_ib_mr and distill its output to a
single page_shift stored inside the mr.

This avoids passing around the tuple of its output. Based on the umem and
page_shift, the output arguments can be computed using:

  count == ib_umem_num_pages(mr->umem)
  shift == mr->page_shift
  ncont == ib_umem_num_dma_blocks(mr->umem, 1 << mr->page_shift)
  order == order_base_2(ncont)

And since mr->page_shift == umem_odp->page_shift, for ODP umems ncont ==
ib_umem_num_dma_blocks() == ib_umem_odp_num_pages().
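
A small sketch of that ODP equivalence (illustrative only, assuming
page_shift was copied from the ODP umem as this patch does):

	struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem);

	/* mr->page_shift == odp->page_shift, so both counts agree */
	ncont = ib_umem_num_dma_blocks(mr->umem, 1UL << mr->page_shift);
	/* which equals ib_umem_odp_num_pages(odp) */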

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mem.c     |  11 +++
 drivers/infiniband/hw/mlx5/mlx5_ib.h |   1 +
 drivers/infiniband/hw/mlx5/mr.c      | 135 +++++++++++----------------
 3 files changed, 69 insertions(+), 78 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index 13de3d2edd34..e63af1b05c0b 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -57,6 +57,17 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 	struct scatterlist *sg;
 	int entry;
 
+	if (umem->is_odp) {
+		struct ib_umem_odp *odp = to_ib_umem_odp(umem);
+
+		*shift = odp->page_shift;
+		*ncont = ib_umem_odp_num_pages(odp);
+		*count = *ncont << (*shift - PAGE_SHIFT);
+		if (order)
+			*order = ilog2(roundup_pow_of_two(*count));
+		return;
+	}
+
 	addr = addr >> PAGE_SHIFT;
 	tmp = (unsigned long)addr;
 	m = find_first_bit(&tmp, BITS_PER_LONG);
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 0eaf992a4e15..9c1d206cb900 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -597,6 +597,7 @@ struct mlx5_ib_mr {
 	int			max_descs;
 	int			desc_size;
 	int			access_mode;
+	unsigned int		page_shift;
 	struct mlx5_core_mkey	mmkey;
 	struct ib_umem	       *umem;
 	struct mlx5_shared_mr_info	*smr_info;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index c76134219bf2..f1e4b4c001fe 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -868,14 +868,11 @@ static int mr_cache_max_order(struct mlx5_ib_dev *dev)
 	return MLX5_MAX_UMR_SHIFT;
 }
 
-static int mr_umem_get(struct mlx5_ib_dev *dev, u64 start, u64 length,
-		       int access_flags, struct ib_umem **umem, int *npages,
-		       int *page_shift, int *ncont, int *order)
+static struct ib_umem *mr_umem_get(struct mlx5_ib_dev *dev, u64 start,
+				   u64 length, int access_flags)
 {
 	struct ib_umem *u;
 
-	*umem = NULL;
-
 	if (access_flags & IB_ACCESS_ON_DEMAND) {
 		struct ib_umem_odp *odp;
 
@@ -884,39 +881,17 @@ static int mr_umem_get(struct mlx5_ib_dev *dev, u64 start, u64 length,
 		if (IS_ERR(odp)) {
 			mlx5_ib_dbg(dev, "umem get failed (%ld)\n",
 				    PTR_ERR(odp));
-			return PTR_ERR(odp);
-		}
-
-		u = &odp->umem;
-
-		*page_shift = odp->page_shift;
-		*ncont = ib_umem_odp_num_pages(odp);
-		*npages = *ncont << (*page_shift - PAGE_SHIFT);
-		if (order)
-			*order = ilog2(roundup_pow_of_two(*ncont));
-	} else {
-		u = ib_umem_get(&dev->ib_dev, start, length, access_flags);
-		if (IS_ERR(u)) {
-			mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(u));
-			return PTR_ERR(u);
+			return ERR_CAST(odp);
 		}
-
-		mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
-				   page_shift, ncont, order);
+		return &odp->umem;
 	}
 
-	if (!*npages) {
-		mlx5_ib_warn(dev, "avoid zero region\n");
-		ib_umem_release(u);
-		return -EINVAL;
+	u = ib_umem_get(&dev->ib_dev, start, length, access_flags);
+	if (IS_ERR(u)) {
+		mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(u));
+		return u;
 	}
-
-	*umem = u;
-
-	mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n",
-		    *npages, *ncont, *order, *page_shift);
-
-	return 0;
+	return u;
 }
 
 static void mlx5_ib_umr_done(struct ib_cq *cq, struct ib_wc *wc)
@@ -975,15 +950,21 @@ static struct mlx5_cache_ent *mr_cache_ent_from_order(struct mlx5_ib_dev *dev,
 	return &cache->ent[order];
 }
 
-static struct mlx5_ib_mr *
-alloc_mr_from_cache(struct ib_pd *pd, struct ib_umem *umem, u64 virt_addr,
-		    u64 len, int npages, int page_shift, unsigned int order,
-		    int access_flags)
+static struct mlx5_ib_mr *alloc_mr_from_cache(struct ib_pd *pd,
+					      struct ib_umem *umem, u64 iova,
+					      int access_flags)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
-	struct mlx5_cache_ent *ent = mr_cache_ent_from_order(dev, order);
+	struct mlx5_cache_ent *ent;
 	struct mlx5_ib_mr *mr;
+	int npages;
+	int page_shift;
+	int ncont;
+	int order;
 
+	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
+			   &page_shift, &ncont, &order);
+	ent = mr_cache_ent_from_order(dev, order);
 	if (!ent)
 		return ERR_PTR(-E2BIG);
 
@@ -1002,9 +983,10 @@ alloc_mr_from_cache(struct ib_pd *pd, struct ib_umem *umem, u64 virt_addr,
 	mr->umem = umem;
 	mr->access_flags = access_flags;
 	mr->desc_size = sizeof(struct mlx5_mtt);
-	mr->mmkey.iova = virt_addr;
-	mr->mmkey.size = len;
+	mr->mmkey.iova = iova;
+	mr->mmkey.size = umem->length;
 	mr->mmkey.pd = to_mpd(pd)->pdn;
+	mr->page_shift = page_shift;
 
 	return mr;
 }
@@ -1163,14 +1145,15 @@ int mlx5_ib_update_xlt(struct mlx5_ib_mr *mr, u64 idx, int npages,
  * Else, the given ibmr will be used.
  */
 static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
-				     u64 virt_addr, u64 length,
-				     struct ib_umem *umem, int npages,
-				     int page_shift, int access_flags,
-				     bool populate)
+				     struct ib_umem *umem, u64 iova,
+				     int access_flags, bool populate)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_ib_mr *mr;
+	int page_shift;
 	__be64 *pas;
+	int npages;
+	int ncont;
 	void *mkc;
 	int inlen;
 	u32 *in;
@@ -1181,6 +1164,10 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 	if (!mr)
 		return ERR_PTR(-ENOMEM);
 
+	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
+			   &page_shift, &ncont, NULL);
+
+	mr->page_shift = page_shift;
 	mr->ibmr.pd = pd;
 	mr->access_flags = access_flags;
 
@@ -1207,20 +1194,20 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 	MLX5_SET(create_mkey_in, in, pg_access, !!(pg_cap));
 
 	mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
-	set_mkc_access_pd_addr_fields(mkc, access_flags, virt_addr,
+	set_mkc_access_pd_addr_fields(mkc, access_flags, iova,
 				      populate ? pd : dev->umrc.pd);
 	MLX5_SET(mkc, mkc, free, !populate);
 	MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT);
 	MLX5_SET(mkc, mkc, umr_en, 1);
 
-	MLX5_SET64(mkc, mkc, len, length);
+	MLX5_SET64(mkc, mkc, len, umem->length);
 	MLX5_SET(mkc, mkc, bsf_octword_size, 0);
 	MLX5_SET(mkc, mkc, translations_octword_size,
-		 get_octo_len(virt_addr, length, page_shift));
+		 get_octo_len(iova, umem->length, page_shift));
 	MLX5_SET(mkc, mkc, log_page_size, page_shift);
 	if (populate) {
 		MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
-			 get_octo_len(virt_addr, length, page_shift));
+			 get_octo_len(iova, umem->length, page_shift));
 	}
 
 	err = mlx5_ib_create_mkey(dev, &mr->mmkey, in, inlen);
@@ -1359,10 +1346,6 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	struct mlx5_ib_mr *mr = NULL;
 	bool xlt_with_umr;
 	struct ib_umem *umem;
-	int page_shift;
-	int npages;
-	int ncont;
-	int order;
 	int err;
 
 	if (!IS_ENABLED(CONFIG_INFINIBAND_USER_MEM))
@@ -1390,23 +1373,20 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 		return &mr->ibmr;
 	}
 
-	err = mr_umem_get(dev, start, length, access_flags, &umem,
-			  &npages, &page_shift, &ncont, &order);
-
-	if (err < 0)
-		return ERR_PTR(err);
+	umem = mr_umem_get(dev, start, length, access_flags);
+	if (IS_ERR(umem))
+		return ERR_CAST(umem);
 
 	if (xlt_with_umr) {
-		mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont,
-					 page_shift, order, access_flags);
+		mr = alloc_mr_from_cache(pd, umem, virt_addr, access_flags);
 		if (IS_ERR(mr))
 			mr = NULL;
 	}
 
 	if (!mr) {
 		mutex_lock(&dev->slow_path_mutex);
-		mr = reg_create(NULL, pd, virt_addr, length, umem, ncont,
-				page_shift, access_flags, !xlt_with_umr);
+		mr = reg_create(NULL, pd, umem, virt_addr, access_flags,
+				!xlt_with_umr);
 		mutex_unlock(&dev->slow_path_mutex);
 	}
 
@@ -1429,8 +1409,10 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 		 */
 		int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE;
 
-		err = mlx5_ib_update_xlt(mr, 0, ncont, page_shift,
-					 update_xlt_flags);
+		err = mlx5_ib_update_xlt(
+			mr, 0,
+			ib_umem_num_dma_blocks(umem, 1UL << mr->page_shift),
+			mr->page_shift, update_xlt_flags);
 		if (err) {
 			dereg_mr(dev, mr);
 			return ERR_PTR(err);
@@ -1520,11 +1502,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 	int access_flags = flags & IB_MR_REREG_ACCESS ?
 			    new_access_flags :
 			    mr->access_flags;
-	int page_shift = 0;
 	int upd_flags = 0;
-	int npages = 0;
-	int ncont = 0;
-	int order = 0;
 	u64 addr, len;
 	int err;
 
@@ -1554,12 +1532,12 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		atomic_sub(ib_umem_num_pages(mr->umem),
 			   &dev->mdev->priv.reg_pages);
 		ib_umem_release(mr->umem);
-		mr->umem = NULL;
-
-		err = mr_umem_get(dev, addr, len, access_flags, &mr->umem,
-				  &npages, &page_shift, &ncont, &order);
-		if (err)
+		mr->umem = mr_umem_get(dev, addr, len, access_flags);
+		if (IS_ERR(mr->umem)) {
+			err = PTR_ERR(mr->umem);
+			mr->umem = NULL;
 			goto err;
+		}
 		atomic_add(ib_umem_num_pages(mr->umem),
 			   &dev->mdev->priv.reg_pages);
 	}
@@ -1578,9 +1556,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		if (err)
 			goto err;
 
-		mr = reg_create(ib_mr, pd, addr, len, mr->umem, ncont,
-				page_shift, access_flags, true);
-
+		mr = reg_create(ib_mr, pd, mr->umem, addr, access_flags, true);
 		if (IS_ERR(mr)) {
 			err = PTR_ERR(mr);
 			mr = to_mmr(ib_mr);
@@ -1602,8 +1578,11 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 				upd_flags |= MLX5_IB_UPD_XLT_PD;
 			if (flags & IB_MR_REREG_ACCESS)
 				upd_flags |= MLX5_IB_UPD_XLT_ACCESS;
-			err = mlx5_ib_update_xlt(mr, 0, npages, page_shift,
-						 upd_flags);
+			err = mlx5_ib_update_xlt(
+				mr, 0,
+				ib_umem_num_dma_blocks(mr->umem,
+						       1UL << mr->page_shift),
+				mr->page_shift, upd_flags);
 		} else {
 			err = rereg_umr(pd, mr, access_flags, flags);
 		}
-- 
2.26.2


* [PATCH rdma-next 5/7] RDMA/mlx5: Remove order from mlx5_ib_cont_pages()
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
                   ` (3 preceding siblings ...)
  2020-10-26 13:19 ` [PATCH rdma-next 4/7] RDMA/mlx5: Move mlx5_ib_cont_pages() to the creation of the mlx5_ib_mr Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 6/7] RDMA/mlx5: Remove ncont " Leon Romanovsky
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

Only alloc_mr_from_cache() needs order and can trivially compute it, so
lift it to the one call site and remove the NULL arguments.
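
For reference, the trivial computation at the one remaining call site in
alloc_mr_from_cache() looks like this (matching the mr.c hunk below):

	ent = mr_cache_ent_from_order(dev, order_base_2(ib_umem_num_dma_blocks(
						   umem, 1UL << page_shift)));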

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/cq.c      |  5 ++---
 drivers/infiniband/hw/mlx5/devx.c    |  2 +-
 drivers/infiniband/hw/mlx5/mem.c     | 13 +------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/mr.c      |  8 ++++----
 drivers/infiniband/hw/mlx5/qp.c      |  4 ++--
 drivers/infiniband/hw/mlx5/srq.c     |  2 +-
 7 files changed, 12 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index fb62f1d04afa..f993b8f55231 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -747,7 +747,7 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		goto err_umem;
 
 	mlx5_ib_cont_pages(cq->buf.umem, ucmd.buf_addr, 0, &npages, &page_shift,
-			   &ncont, NULL);
+			   &ncont);
 	mlx5_ib_dbg(dev, "addr 0x%llx, size %u, npages %d, page_shift %d, ncont %d\n",
 		    ucmd.buf_addr, entries * ucmd.cqe_size, npages, page_shift, ncont);
 
@@ -1155,8 +1155,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, &npages, page_shift,
-			   npas, NULL);
+	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, &npages, page_shift, npas);
 
 	cq->resize_umem = umem;
 	*cqe_size = ucmd.cqe_size;
diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index 9e3d8b826498..ebd6e6c4127e 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -2083,7 +2083,7 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext,
 
 	mlx5_ib_cont_pages(obj->umem, obj->umem->address,
 			   MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &obj->page_shift, &obj->ncont, NULL);
+			   &obj->page_shift, &obj->ncont);
 
 	if (!npages) {
 		ib_umem_release(obj->umem);
diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index e63af1b05c0b..2faee44cca99 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -42,12 +42,11 @@
  * @count: number of PAGE_SIZE pages covered by umem
  * @shift: page shift for the compound pages found in the region
  * @ncont: number of compund pages
- * @order: log2 of the number of compound pages
  */
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 			unsigned long max_page_shift,
 			int *count, int *shift,
-			int *ncont, int *order)
+			int *ncont)
 {
 	unsigned long tmp;
 	unsigned long m;
@@ -63,8 +62,6 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 		*shift = odp->page_shift;
 		*ncont = ib_umem_odp_num_pages(odp);
 		*count = *ncont << (*shift - PAGE_SHIFT);
-		if (order)
-			*order = ilog2(roundup_pow_of_two(*count));
 		return;
 	}
 
@@ -95,17 +92,9 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 
 	if (i) {
 		m = min_t(unsigned long, ilog2(roundup_pow_of_two(i)), m);
-
-		if (order)
-			*order = ilog2(roundup_pow_of_two(i) >> m);
-
 		*ncont = DIV_ROUND_UP(i, (1 << m));
 	} else {
 		m  = 0;
-
-		if (order)
-			*order = 0;
-
 		*ncont = 0;
 	}
 	*shift = PAGE_SHIFT + m;
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 9c1d206cb900..15f95900e83c 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1232,7 +1232,7 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 			unsigned long max_page_shift,
 			int *count, int *shift,
-			int *ncont, int *order);
+			int *ncont);
 void __mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			    int page_shift, size_t offset, size_t num_pages,
 			    __be64 *pas, int access_flags);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index f1e4b4c001fe..67f5834254c9 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -960,11 +960,11 @@ static struct mlx5_ib_mr *alloc_mr_from_cache(struct ib_pd *pd,
 	int npages;
 	int page_shift;
 	int ncont;
-	int order;
 
 	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift, &ncont, &order);
-	ent = mr_cache_ent_from_order(dev, order);
+			   &page_shift, &ncont);
+	ent = mr_cache_ent_from_order(dev, order_base_2(ib_umem_num_dma_blocks(
+						   umem, 1UL << page_shift)));
 	if (!ent)
 		return ERR_PTR(-E2BIG);
 
@@ -1165,7 +1165,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 		return ERR_PTR(-ENOMEM);
 
 	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift, &ncont, NULL);
+			   &page_shift, &ncont);
 
 	mr->page_shift = page_shift;
 	mr->ibmr.pd = pd;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index ab2a9c71c543..77ba3adfd148 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -791,7 +791,7 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		return PTR_ERR(*umem);
 	}
 
-	mlx5_ib_cont_pages(*umem, addr, 0, npages, page_shift, ncont, NULL);
+	mlx5_ib_cont_pages(*umem, addr, 0, npages, page_shift, ncont);
 
 	err = mlx5_ib_get_buf_offset(addr, *page_shift, offset);
 	if (err) {
@@ -850,7 +850,7 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	}
 
 	mlx5_ib_cont_pages(rwq->umem, ucmd->buf_addr, 0, &npages, &page_shift,
-			   &ncont, NULL);
+			   &ncont);
 	err = mlx5_ib_get_buf_offset(ucmd->buf_addr, page_shift,
 				     &rwq->rq_page_offset);
 	if (err) {
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index e2f720eec1e1..9b655192797d 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -88,7 +88,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 	}
 
 	mlx5_ib_cont_pages(srq->umem, ucmd.buf_addr, 0, &npages,
-			   &page_shift, &ncont, NULL);
+			   &page_shift, &ncont);
 	err = mlx5_ib_get_buf_offset(ucmd.buf_addr, page_shift,
 				     &offset);
 	if (err) {
-- 
2.26.2


* [PATCH rdma-next 6/7] RDMA/mlx5: Remove ncont from mlx5_ib_cont_pages()
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
                   ` (4 preceding siblings ...)
  2020-10-26 13:19 ` [PATCH rdma-next 5/7] RDMA/mlx5: Remove order from mlx5_ib_cont_pages() Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-10-26 13:19 ` [PATCH rdma-next 7/7] RDMA/mlx5: Remove npages " Leon Romanovsky
  2020-11-02 19:11 ` [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Jason Gunthorpe
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

This is the same as ib_umem_num_dma_blocks(umem, 1UL << page_shift), so
have the callers compute it directly.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/cq.c      | 30 +++++++++++++++-------------
 drivers/infiniband/hw/mlx5/devx.c    | 12 ++++++-----
 drivers/infiniband/hw/mlx5/mem.c     | 15 ++++----------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  3 +--
 drivers/infiniband/hw/mlx5/mr.c      |  6 ++----
 drivers/infiniband/hw/mlx5/qp.c      | 26 ++++++++++++------------
 drivers/infiniband/hw/mlx5/srq.c     |  6 +++---
 7 files changed, 46 insertions(+), 52 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index f993b8f55231..e2d28081bd2a 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -746,8 +746,9 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	if (err)
 		goto err_umem;
 
-	mlx5_ib_cont_pages(cq->buf.umem, ucmd.buf_addr, 0, &npages, &page_shift,
-			   &ncont);
+	mlx5_ib_cont_pages(cq->buf.umem, ucmd.buf_addr, 0, &npages,
+			   &page_shift);
+	ncont = ib_umem_num_dma_blocks(cq->buf.umem, 1UL << page_shift);
 	mlx5_ib_dbg(dev, "addr 0x%llx, size %u, npages %d, page_shift %d, ncont %d\n",
 		    ucmd.buf_addr, entries * ucmd.cqe_size, npages, page_shift, ncont);
 
@@ -1128,7 +1129,7 @@ int mlx5_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
 }
 
 static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
-		       int entries, struct ib_udata *udata, int *npas,
+		       int entries, struct ib_udata *udata,
 		       int *page_shift, int *cqe_size)
 {
 	struct mlx5_ib_resize_cq ucmd;
@@ -1155,7 +1156,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, &npages, page_shift, npas);
+	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, &npages, page_shift);
 
 	cq->resize_umem = umem;
 	*cqe_size = ucmd.cqe_size;
@@ -1276,22 +1277,23 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
 
 	mutex_lock(&cq->resize_mutex);
 	if (udata) {
-		err = resize_user(dev, cq, entries, udata, &npas, &page_shift,
+		err = resize_user(dev, cq, entries, udata, &page_shift,
 				  &cqe_size);
+		if (err)
+			goto ex;
+		npas = ib_umem_num_dma_blocks(cq->resize_umem, 1UL << page_shift);
 	} else {
+		struct mlx5_frag_buf *frag_buf;
+
 		cqe_size = 64;
 		err = resize_kernel(dev, cq, entries, cqe_size);
-		if (!err) {
-			struct mlx5_frag_buf *frag_buf = &cq->resize_buf->frag_buf;
-
-			npas = frag_buf->npages;
-			page_shift = frag_buf->page_shift;
-		}
+		if (err)
+			goto ex;
+		frag_buf = &cq->resize_buf->frag_buf;
+		npas = frag_buf->npages;
+		page_shift = frag_buf->page_shift;
 	}
 
-	if (err)
-		goto ex;
-
 	inlen = MLX5_ST_SZ_BYTES(modify_cq_in) +
 		MLX5_FLD_SZ_BYTES(modify_cq_in, pas[0]) * npas;
 
diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index ebd6e6c4127e..7d99b5b298da 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -95,7 +95,6 @@ struct devx_umem {
 	struct ib_umem			*umem;
 	u32				page_offset;
 	int				page_shift;
-	int				ncont;
 	u32				dinlen;
 	u32				dinbox[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)];
 };
@@ -2083,7 +2082,7 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext,
 
 	mlx5_ib_cont_pages(obj->umem, obj->umem->address,
 			   MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &obj->page_shift, &obj->ncont);
+			   &obj->page_shift);
 
 	if (!npages) {
 		ib_umem_release(obj->umem);
@@ -2100,8 +2099,10 @@ static int devx_umem_reg_cmd_alloc(struct uverbs_attr_bundle *attrs,
 				   struct devx_umem *obj,
 				   struct devx_umem_reg_cmd *cmd)
 {
-	cmd->inlen = MLX5_ST_SZ_BYTES(create_umem_in) +
-		    (MLX5_ST_SZ_BYTES(mtt) * obj->ncont);
+	cmd->inlen =
+		MLX5_ST_SZ_BYTES(create_umem_in) +
+		(MLX5_ST_SZ_BYTES(mtt) *
+		 ib_umem_num_dma_blocks(obj->umem, 1UL << obj->page_shift));
 	cmd->in = uverbs_zalloc(attrs, cmd->inlen);
 	return PTR_ERR_OR_ZERO(cmd->in);
 }
@@ -2117,7 +2118,8 @@ static void devx_umem_reg_cmd_build(struct mlx5_ib_dev *dev,
 	mtt = (__be64 *)MLX5_ADDR_OF(umem, umem, mtt);
 
 	MLX5_SET(create_umem_in, cmd->in, opcode, MLX5_CMD_OP_CREATE_UMEM);
-	MLX5_SET64(umem, umem, num_of_mtt, obj->ncont);
+	MLX5_SET64(umem, umem, num_of_mtt,
+		   ib_umem_num_dma_blocks(obj->umem, 1UL << obj->page_shift));
 	MLX5_SET(umem, umem, log_page_size, obj->page_shift -
 					    MLX5_ADAPTER_PAGE_SHIFT);
 	MLX5_SET(umem, umem, page_offset, obj->page_offset);
diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index 2faee44cca99..6d87f8c13ce0 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -41,12 +41,9 @@
  * @max_page_shift: high limit for page_shift - 0 means no limit
  * @count: number of PAGE_SIZE pages covered by umem
  * @shift: page shift for the compound pages found in the region
- * @ncont: number of compund pages
  */
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
-			unsigned long max_page_shift,
-			int *count, int *shift,
-			int *ncont)
+			unsigned long max_page_shift, int *count, int *shift)
 {
 	unsigned long tmp;
 	unsigned long m;
@@ -60,8 +57,7 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 		struct ib_umem_odp *odp = to_ib_umem_odp(umem);
 
 		*shift = odp->page_shift;
-		*ncont = ib_umem_odp_num_pages(odp);
-		*count = *ncont << (*shift - PAGE_SHIFT);
+		*count = ib_umem_num_pages(umem);
 		return;
 	}
 
@@ -90,13 +86,10 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 		i += len;
 	}
 
-	if (i) {
+	if (i)
 		m = min_t(unsigned long, ilog2(roundup_pow_of_two(i)), m);
-		*ncont = DIV_ROUND_UP(i, (1 << m));
-	} else {
+	else
 		m  = 0;
-		*ncont = 0;
-	}
 	*shift = PAGE_SHIFT + m;
 	*count = i;
 }
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 15f95900e83c..a36b0b8138a0 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1231,8 +1231,7 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
 		       struct ib_port_attr *props);
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 			unsigned long max_page_shift,
-			int *count, int *shift,
-			int *ncont);
+			int *count, int *shift);
 void __mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			    int page_shift, size_t offset, size_t num_pages,
 			    __be64 *pas, int access_flags);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 67f5834254c9..33483765e85c 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -959,10 +959,9 @@ static struct mlx5_ib_mr *alloc_mr_from_cache(struct ib_pd *pd,
 	struct mlx5_ib_mr *mr;
 	int npages;
 	int page_shift;
-	int ncont;
 
 	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift, &ncont);
+			   &page_shift);
 	ent = mr_cache_ent_from_order(dev, order_base_2(ib_umem_num_dma_blocks(
 						   umem, 1UL << page_shift)));
 	if (!ent)
@@ -1153,7 +1152,6 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 	int page_shift;
 	__be64 *pas;
 	int npages;
-	int ncont;
 	void *mkc;
 	int inlen;
 	u32 *in;
@@ -1165,7 +1163,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 		return ERR_PTR(-ENOMEM);
 
 	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift, &ncont);
+			   &page_shift);
 
 	mr->page_shift = page_shift;
 	mr->ibmr.pd = pd;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 77ba3adfd148..259cef28a4d5 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -781,7 +781,7 @@ int bfregn_to_uar_index(struct mlx5_ib_dev *dev,
 static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 			    unsigned long addr, size_t size,
 			    struct ib_umem **umem, int *npages, int *page_shift,
-			    int *ncont, u32 *offset)
+			    u32 *offset)
 {
 	int err;
 
@@ -791,7 +791,7 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		return PTR_ERR(*umem);
 	}
 
-	mlx5_ib_cont_pages(*umem, addr, 0, npages, page_shift, ncont);
+	mlx5_ib_cont_pages(*umem, addr, 0, npages, page_shift);
 
 	err = mlx5_ib_get_buf_offset(addr, *page_shift, offset);
 	if (err) {
@@ -799,8 +799,8 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		goto err_umem;
 	}
 
-	mlx5_ib_dbg(dev, "addr 0x%lx, size %zu, npages %d, page_shift %d, ncont %d, offset %d\n",
-		    addr, size, *npages, *page_shift, *ncont, *offset);
+	mlx5_ib_dbg(dev, "addr 0x%lx, size %zu, npages %d, page_shift %d, offset %d\n",
+		    addr, size, *npages, *page_shift, *offset);
 
 	return 0;
 
@@ -836,7 +836,6 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	int page_shift = 0;
 	int npages;
 	u32 offset = 0;
-	int ncont = 0;
 	int err;
 
 	if (!ucmd->buf_addr)
@@ -849,8 +848,7 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(rwq->umem, ucmd->buf_addr, 0, &npages, &page_shift,
-			   &ncont);
+	mlx5_ib_cont_pages(rwq->umem, ucmd->buf_addr, 0, &npages, &page_shift);
 	err = mlx5_ib_get_buf_offset(ucmd->buf_addr, page_shift,
 				     &rwq->rq_page_offset);
 	if (err) {
@@ -858,14 +856,14 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		goto err_umem;
 	}
 
-	rwq->rq_num_pas = ncont;
+	rwq->rq_num_pas = ib_umem_num_dma_blocks(rwq->umem, 1UL << page_shift);
 	rwq->page_shift = page_shift;
 	rwq->log_page_size =  page_shift - MLX5_ADAPTER_PAGE_SHIFT;
 	rwq->wq_sig = !!(ucmd->flags & MLX5_WQ_FLAG_SIGNATURE);
 
 	mlx5_ib_dbg(dev, "addr 0x%llx, size %zd, npages %d, page_shift %d, ncont %d, offset %d\n",
 		    (unsigned long long)ucmd->buf_addr, rwq->buf_size,
-		    npages, page_shift, ncont, offset);
+		    npages, page_shift, rwq->rq_num_pas, offset);
 
 	err = mlx5_ib_db_map_user(ucontext, udata, ucmd->db_addr, &rwq->db);
 	if (err) {
@@ -952,9 +950,10 @@ static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		ubuffer->buf_addr = ucmd->buf_addr;
 		err = mlx5_ib_umem_get(dev, udata, ubuffer->buf_addr,
 				       ubuffer->buf_size, &ubuffer->umem,
-				       &npages, &page_shift, &ncont, &offset);
+				       &npages, &page_shift, &offset);
 		if (err)
 			goto err_bfreg;
+		ncont = ib_umem_num_dma_blocks(ubuffer->umem, 1UL << page_shift);
 	} else {
 		ubuffer->umem = NULL;
 	}
@@ -1211,16 +1210,17 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
 	int err;
 	int page_shift = 0;
 	int npages;
-	int ncont = 0;
 	u32 offset = 0;
 
 	err = mlx5_ib_umem_get(dev, udata, ubuffer->buf_addr, ubuffer->buf_size,
-			       &sq->ubuffer.umem, &npages, &page_shift, &ncont,
+			       &sq->ubuffer.umem, &npages, &page_shift,
 			       &offset);
 	if (err)
 		return err;
 
-	inlen = MLX5_ST_SZ_BYTES(create_sq_in) + sizeof(u64) * ncont;
+	inlen = MLX5_ST_SZ_BYTES(create_sq_in) +
+		sizeof(u64) * ib_umem_num_dma_blocks(sq->ubuffer.umem,
+						     1UL << page_shift);
 	in = kvzalloc(inlen, GFP_KERNEL);
 	if (!in) {
 		err = -ENOMEM;
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 9b655192797d..ea7ae96c1bbf 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -53,7 +53,6 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 	int err;
 	int npages;
 	int page_shift;
-	int ncont;
 	u32 offset;
 	u32 uidx = MLX5_IB_DEFAULT_UIDX;
 
@@ -88,7 +87,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 	}
 
 	mlx5_ib_cont_pages(srq->umem, ucmd.buf_addr, 0, &npages,
-			   &page_shift, &ncont);
+			   &page_shift);
 	err = mlx5_ib_get_buf_offset(ucmd.buf_addr, page_shift,
 				     &offset);
 	if (err) {
@@ -96,7 +95,8 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 		goto err_umem;
 	}
 
-	in->pas = kvcalloc(ncont, sizeof(*in->pas), GFP_KERNEL);
+	in->pas = kvcalloc(ib_umem_num_dma_blocks(srq->umem, 1UL << page_shift),
+			   sizeof(*in->pas), GFP_KERNEL);
 	if (!in->pas) {
 		err = -ENOMEM;
 		goto err_umem;
-- 
2.26.2


* [PATCH rdma-next 7/7] RDMA/mlx5: Remove npages from mlx5_ib_cont_pages()
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
                   ` (5 preceding siblings ...)
  2020-10-26 13:19 ` [PATCH rdma-next 6/7] RDMA/mlx5: Remove ncont " Leon Romanovsky
@ 2020-10-26 13:19 ` Leon Romanovsky
  2020-11-02 19:11 ` [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Jason Gunthorpe
  7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2020-10-26 13:19 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe; +Cc: linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

Most callers don't need this, and the few that do can get it as
ib_umem_num_pages(umem).

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/cq.c      | 14 +++++++-------
 drivers/infiniband/hw/mlx5/devx.c    | 10 +---------
 drivers/infiniband/hw/mlx5/mem.c     |  5 +----
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/mr.c      | 10 +++-------
 drivers/infiniband/hw/mlx5/qp.c      | 27 +++++++++++++--------------
 drivers/infiniband/hw/mlx5/srq.c     |  4 +---
 7 files changed, 27 insertions(+), 45 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index e2d28081bd2a..2088e4a3c32d 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -710,7 +710,6 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	size_t ucmdlen;
 	int page_shift;
 	__be64 *pas;
-	int npages;
 	int ncont;
 	void *cqc;
 	int err;
@@ -746,11 +745,13 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	if (err)
 		goto err_umem;
 
-	mlx5_ib_cont_pages(cq->buf.umem, ucmd.buf_addr, 0, &npages,
-			   &page_shift);
+	mlx5_ib_cont_pages(cq->buf.umem, ucmd.buf_addr, 0, &page_shift);
 	ncont = ib_umem_num_dma_blocks(cq->buf.umem, 1UL << page_shift);
-	mlx5_ib_dbg(dev, "addr 0x%llx, size %u, npages %d, page_shift %d, ncont %d\n",
-		    ucmd.buf_addr, entries * ucmd.cqe_size, npages, page_shift, ncont);
+	mlx5_ib_dbg(
+		dev,
+		"addr 0x%llx, size %u, npages %zu, page_shift %d, ncont %d\n",
+		ucmd.buf_addr, entries * ucmd.cqe_size,
+		ib_umem_num_pages(cq->buf.umem), page_shift, ncont);
 
 	*inlen = MLX5_ST_SZ_BYTES(create_cq_in) +
 		 MLX5_FLD_SZ_BYTES(create_cq_in, pas[0]) * ncont;
@@ -1135,7 +1136,6 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
 	struct mlx5_ib_resize_cq ucmd;
 	struct ib_umem *umem;
 	int err;
-	int npages;
 
 	err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
 	if (err)
@@ -1156,7 +1156,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, &npages, page_shift);
+	mlx5_ib_cont_pages(umem, ucmd.buf_addr, 0, page_shift);
 
 	cq->resize_umem = umem;
 	*cqe_size = ucmd.cqe_size;
diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index 7d99b5b298da..ae889266acf1 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -2056,7 +2056,6 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext,
 	u64 addr;
 	size_t size;
 	u32 access;
-	int npages;
 	int err;
 	u32 page_mask;
 
@@ -2081,14 +2080,7 @@ static int devx_umem_get(struct mlx5_ib_dev *dev, struct ib_ucontext *ucontext,
 		return PTR_ERR(obj->umem);
 
 	mlx5_ib_cont_pages(obj->umem, obj->umem->address,
-			   MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &obj->page_shift);
-
-	if (!npages) {
-		ib_umem_release(obj->umem);
-		return -EINVAL;
-	}
-
+			   MLX5_MKEY_PAGE_SHIFT_MASK, &obj->page_shift);
 	page_mask = (1 << obj->page_shift) - 1;
 	obj->page_offset = obj->umem->address & page_mask;
 
diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index 6d87f8c13ce0..7ae96b37bd6e 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -39,11 +39,10 @@
 /* @umem: umem object to scan
  * @addr: ib virtual address requested by the user
  * @max_page_shift: high limit for page_shift - 0 means no limit
- * @count: number of PAGE_SIZE pages covered by umem
  * @shift: page shift for the compound pages found in the region
  */
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
-			unsigned long max_page_shift, int *count, int *shift)
+			unsigned long max_page_shift, int *shift)
 {
 	unsigned long tmp;
 	unsigned long m;
@@ -57,7 +56,6 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 		struct ib_umem_odp *odp = to_ib_umem_odp(umem);
 
 		*shift = odp->page_shift;
-		*count = ib_umem_num_pages(umem);
 		return;
 	}
 
@@ -91,7 +89,6 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 	else
 		m  = 0;
 	*shift = PAGE_SHIFT + m;
-	*count = i;
 }
 
 /*
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index a36b0b8138a0..3e2c471d77bd 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1231,7 +1231,7 @@ int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
 		       struct ib_port_attr *props);
 void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr,
 			unsigned long max_page_shift,
-			int *count, int *shift);
+			int *shift);
 void __mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			    int page_shift, size_t offset, size_t num_pages,
 			    __be64 *pas, int access_flags);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 33483765e85c..57d3dc111a2b 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -957,11 +957,9 @@ static struct mlx5_ib_mr *alloc_mr_from_cache(struct ib_pd *pd,
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_cache_ent *ent;
 	struct mlx5_ib_mr *mr;
-	int npages;
 	int page_shift;
 
-	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift);
+	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &page_shift);
 	ent = mr_cache_ent_from_order(dev, order_base_2(ib_umem_num_dma_blocks(
 						   umem, 1UL << page_shift)));
 	if (!ent)
@@ -1151,7 +1149,6 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 	struct mlx5_ib_mr *mr;
 	int page_shift;
 	__be64 *pas;
-	int npages;
 	void *mkc;
 	int inlen;
 	u32 *in;
@@ -1162,8 +1159,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 	if (!mr)
 		return ERR_PTR(-ENOMEM);
 
-	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &npages,
-			   &page_shift);
+	mlx5_ib_cont_pages(umem, iova, MLX5_MKEY_PAGE_SHIFT_MASK, &page_shift);
 
 	mr->page_shift = page_shift;
 	mr->ibmr.pd = pd;
@@ -1171,7 +1167,7 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
 
 	inlen = MLX5_ST_SZ_BYTES(create_mkey_in);
 	if (populate)
-		inlen += sizeof(*pas) * roundup(npages, 2);
+		inlen += sizeof(*pas) * roundup(ib_umem_num_pages(umem), 2);
 	in = kvzalloc(inlen, GFP_KERNEL);
 	if (!in) {
 		err = -ENOMEM;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 259cef28a4d5..6915639a776f 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -780,7 +780,7 @@ int bfregn_to_uar_index(struct mlx5_ib_dev *dev,
 
 static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 			    unsigned long addr, size_t size,
-			    struct ib_umem **umem, int *npages, int *page_shift,
+			    struct ib_umem **umem, int *page_shift,
 			    u32 *offset)
 {
 	int err;
@@ -791,7 +791,7 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		return PTR_ERR(*umem);
 	}
 
-	mlx5_ib_cont_pages(*umem, addr, 0, npages, page_shift);
+	mlx5_ib_cont_pages(*umem, addr, 0, page_shift);
 
 	err = mlx5_ib_get_buf_offset(addr, *page_shift, offset);
 	if (err) {
@@ -799,8 +799,8 @@ static int mlx5_ib_umem_get(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 		goto err_umem;
 	}
 
-	mlx5_ib_dbg(dev, "addr 0x%lx, size %zu, npages %d, page_shift %d, offset %d\n",
-		    addr, size, *npages, *page_shift, *offset);
+	mlx5_ib_dbg(dev, "addr 0x%lx, size %zu, npages %zu, page_shift %d, offset %d\n",
+		    addr, size, ib_umem_num_pages(*umem), *page_shift, *offset);
 
 	return 0;
 
@@ -834,7 +834,6 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
 	int page_shift = 0;
-	int npages;
 	u32 offset = 0;
 	int err;
 
@@ -848,7 +847,7 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(rwq->umem, ucmd->buf_addr, 0, &npages, &page_shift);
+	mlx5_ib_cont_pages(rwq->umem, ucmd->buf_addr, 0, &page_shift);
 	err = mlx5_ib_get_buf_offset(ucmd->buf_addr, page_shift,
 				     &rwq->rq_page_offset);
 	if (err) {
@@ -861,9 +860,12 @@ static int create_user_rq(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	rwq->log_page_size =  page_shift - MLX5_ADAPTER_PAGE_SHIFT;
 	rwq->wq_sig = !!(ucmd->flags & MLX5_WQ_FLAG_SIGNATURE);
 
-	mlx5_ib_dbg(dev, "addr 0x%llx, size %zd, npages %d, page_shift %d, ncont %d, offset %d\n",
-		    (unsigned long long)ucmd->buf_addr, rwq->buf_size,
-		    npages, page_shift, rwq->rq_num_pas, offset);
+	mlx5_ib_dbg(
+		dev,
+		"addr 0x%llx, size %zd, npages %zu, page_shift %d, ncont %d, offset %d\n",
+		(unsigned long long)ucmd->buf_addr, rwq->buf_size,
+		ib_umem_num_pages(rwq->umem), page_shift, rwq->rq_num_pas,
+		offset);
 
 	err = mlx5_ib_db_map_user(ucontext, udata, ucmd->db_addr, &rwq->db);
 	if (err) {
@@ -896,7 +898,6 @@ static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	struct mlx5_ib_ubuffer *ubuffer = &base->ubuffer;
 	int page_shift = 0;
 	int uar_index = 0;
-	int npages;
 	u32 offset = 0;
 	int bfregn;
 	int ncont = 0;
@@ -950,7 +951,7 @@ static int _create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		ubuffer->buf_addr = ucmd->buf_addr;
 		err = mlx5_ib_umem_get(dev, udata, ubuffer->buf_addr,
 				       ubuffer->buf_size, &ubuffer->umem,
-				       &npages, &page_shift, &offset);
+				       &page_shift, &offset);
 		if (err)
 			goto err_bfreg;
 		ncont = ib_umem_num_dma_blocks(ubuffer->umem, 1UL << page_shift);
@@ -1209,12 +1210,10 @@ static int create_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
 	int inlen;
 	int err;
 	int page_shift = 0;
-	int npages;
 	u32 offset = 0;
 
 	err = mlx5_ib_umem_get(dev, udata, ubuffer->buf_addr, ubuffer->buf_size,
-			       &sq->ubuffer.umem, &npages, &page_shift,
-			       &offset);
+			       &sq->ubuffer.umem, &page_shift, &offset);
 	if (err)
 		return err;
 
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index ea7ae96c1bbf..239c7ec65e11 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -51,7 +51,6 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 		udata, struct mlx5_ib_ucontext, ibucontext);
 	size_t ucmdlen;
 	int err;
-	int npages;
 	int page_shift;
 	u32 offset;
 	u32 uidx = MLX5_IB_DEFAULT_UIDX;
@@ -86,8 +85,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 		return err;
 	}
 
-	mlx5_ib_cont_pages(srq->umem, ucmd.buf_addr, 0, &npages,
-			   &page_shift);
+	mlx5_ib_cont_pages(srq->umem, ucmd.buf_addr, 0, &page_shift);
 	err = mlx5_ib_get_buf_offset(ucmd.buf_addr, page_shift,
 				     &offset);
 	if (err) {
-- 
2.26.2


* Re: [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes
  2020-10-26 13:19 [PATCH rdma-next 0/7] Use only a umem and HW page_shift to calculate MR sizes Leon Romanovsky
                   ` (6 preceding siblings ...)
  2020-10-26 13:19 ` [PATCH rdma-next 7/7] RDMA/mlx5: Remove npages " Leon Romanovsky
@ 2020-11-02 19:11 ` Jason Gunthorpe
  7 siblings, 0 replies; 9+ messages in thread
From: Jason Gunthorpe @ 2020-11-02 19:11 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, Artemy Kovalyov, linux-rdma,
	Saeed Mahameed

On Mon, Oct 26, 2020 at 03:19:29PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> From Jason:
> 
> The MR code computes and passes around a tuple of (npages, page_shift,
> ncont, order) to represent the size of the MR in various terms.
> 
> It is quite confusing which term refers to what, and the tuple overlaps
> with data already stored and computed inside the umem. Rework all of this
> to be more umem-centric and use these identities instead:
> 
>       npages == ib_umem_num_pages(mr->umem)
>       page_shift == mr->page_shift
>       ncont == ib_umem_num_dma_blocks(mr->umem, 1 << mr->page_shift)
>       order == order_base_2(ncont)
> 
> By storing the page_shift inside the mlx5_ib_mr it becomes nearly
> self-describing.
> 
> Thanks
> 
> Jason Gunthorpe (7):
>   RDMA/mlx5: Remove mlx5_ib_mr->order
>   RDMA/mlx5: Fix corruption of reg_pages in mlx5_ib_rereg_user_mr()
>   RDMA/mlx5: Remove mlx5_ib_mr->npages
>   RDMA/mlx5: Move mlx5_ib_cont_pages() to the creation of the mlx5_ib_mr
>   RDMA/mlx5: Remove order from mlx5_ib_cont_pages()
>   RDMA/mlx5: Remove ncont from mlx5_ib_cont_pages()
>   RDMA/mlx5: Remove npages from mlx5_ib_cont_pages()

Applied to for-next, thanks

Jason
