* [PATCH rdma-next 0/6] Use ib_umem_find_best_pgsz() for all umems
@ 2020-10-26 13:26 Leon Romanovsky
From: Leon Romanovsky @ 2020-10-26 13:26 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, David S. Miller, Eli Cohen, Haggai Abramonvsky,
Jack Morgenstein, linux-kernel, linux-rdma, majd, Matan Barak,
Or Gerlitz, Roland Dreier, Sagi Grimberg, Yishai Hadas
From: Leon Romanovsky <leonro@nvidia.com>
From Jason:
Move the remaining cases working with umems to use versions of
ib_umem_find_best_pgsz() tailored to the calculations the device
requires.
Unlike an MR, there is no IOVA; instead a page offset from the starting
page is possible, with various restrictions.
Compute the best page size that meets the page_offset restrictions.
Thanks
Jason Gunthorpe (6):
RDMA/mlx5: Use ib_umem_find_best_pgsz() for devx
RDMA/mlx5: Use ib_umem_find_best_pgoff() for SRQ
RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for WQ
RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for QP
RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for CQ
RDMA/mlx5: Lower setting the umem's PAS for SRQ
drivers/infiniband/hw/mlx5/cq.c | 48 ++++++++---
drivers/infiniband/hw/mlx5/devx.c | 56 ++++++------
drivers/infiniband/hw/mlx5/mem.c | 115 +++++++++----------------
drivers/infiniband/hw/mlx5/mlx5_ib.h | 56 ++++++++++--
drivers/infiniband/hw/mlx5/qp.c | 124 ++++++++++++---------------
drivers/infiniband/hw/mlx5/srq.c | 27 +-----
drivers/infiniband/hw/mlx5/srq.h | 1 +
drivers/infiniband/hw/mlx5/srq_cmd.c | 80 ++++++++++++++++-
include/rdma/ib_umem.h | 42 +++++++++
9 files changed, 326 insertions(+), 223 deletions(-)
--
2.26.2