* [PATCH rdma-next 00/10] Prepare drivers to move QP allocation to ib_core
@ 2020-09-10 14:00 Leon Romanovsky
2020-09-24 23:09 ` Jason Gunthorpe
0 siblings, 1 reply; 2+ messages in thread
From: Leon Romanovsky @ 2020-09-10 14:00 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, Adit Ranadive, Ariel Elior, Faisal Latif,
Lijun Ou, linux-kernel, linux-rdma, Maor Gottlieb,
Michal Kalderon, Shiraz Saleem, VMware PV-Drivers, Weihang Li,
Wei Hu(Xavier), Yishai Hadas
From: Leon Romanovsky <leonro@nvidia.com>
Hi,
This series mainly prepares the mlx4, mlx5, and mthca drivers for a
future change to the QP allocation scheme.
The rdmavt driver will be sent separately.
Thanks
Leon Romanovsky (10):
RDMA/mlx5: Embed GSI QP into general mlx5_ib QP
RDMA/mlx5: Reuse existing fields in parent QP storage object
RDMA/mlx5: Change GSI QP to have same creation flow like other QPs
RDMA/mlx5: Delete not needed GSI QP signature QP type
RDMA/mlx4: Embed GSI QP into general mlx4_ib QP
RDMA/mlx4: Prepare QP allocation to remove from the driver
RDMA/core: Align write and ioctl checks of QP types
RDMA/drivers: Remove udata check from special QP
RDMA/mthca: Combine special QP struct with mthca QP
RDMA/i40iw: Remove intermediate pointer that points to the same struct
drivers/infiniband/core/uverbs_cmd.c | 17 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 57 ++--
drivers/infiniband/hw/i40iw/i40iw_verbs.c | 9 +-
drivers/infiniband/hw/i40iw/i40iw_verbs.h | 1 -
drivers/infiniband/hw/mlx4/mlx4_ib.h | 25 +-
drivers/infiniband/hw/mlx4/qp.c | 285 ++++++++-----------
drivers/infiniband/hw/mlx5/gsi.c | 138 +++------
drivers/infiniband/hw/mlx5/main.c | 6 +-
drivers/infiniband/hw/mlx5/mlx5_ib.h | 28 +-
drivers/infiniband/hw/mlx5/qp.c | 50 ++--
drivers/infiniband/hw/mthca/mthca_dev.h | 2 +-
drivers/infiniband/hw/mthca/mthca_provider.c | 17 +-
drivers/infiniband/hw/mthca/mthca_provider.h | 27 +-
drivers/infiniband/hw/mthca/mthca_qp.c | 75 +++--
drivers/infiniband/hw/qedr/verbs.c | 8 -
drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 3 +-
16 files changed, 329 insertions(+), 419 deletions(-)
--
2.26.2
* Re: [PATCH rdma-next 00/10] Prepare drivers to move QP allocation to ib_core
2020-09-10 14:00 [PATCH rdma-next 00/10] Prepare drivers to move QP allocation to ib_core Leon Romanovsky
@ 2020-09-24 23:09 ` Jason Gunthorpe
0 siblings, 0 replies; 2+ messages in thread
From: Jason Gunthorpe @ 2020-09-24 23:09 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Doug Ledford, Leon Romanovsky, Adit Ranadive, Ariel Elior,
Faisal Latif, Lijun Ou, linux-kernel, linux-rdma, Maor Gottlieb,
Michal Kalderon, Shiraz Saleem, VMware PV-Drivers, Weihang Li,
Wei Hu(Xavier), Yishai Hadas
On Thu, Sep 10, 2020 at 05:00:36PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Hi,
>
> This series mainly prepares the mlx4, mlx5, and mthca drivers for a
> future change to the QP allocation scheme.
>
> The rdmavt driver will be sent separately.
>
> Thanks
>
> Leon Romanovsky (10):
> RDMA/mlx5: Embed GSI QP into general mlx5_ib QP
> RDMA/mlx5: Reuse existing fields in parent QP storage object
> RDMA/mlx5: Change GSI QP to have same creation flow like other QPs
> RDMA/mlx5: Delete not needed GSI QP signature QP type
> RDMA/mlx4: Embed GSI QP into general mlx4_ib QP
> RDMA/mlx4: Prepare QP allocation to remove from the driver
> RDMA/core: Align write and ioctl checks of QP types
> RDMA/drivers: Remove udata check from special QP
> RDMA/mthca: Combine special QP struct with mthca QP
> RDMA/i40iw: Remove intermediate pointer that points to the same struct
This all seems fine, can you re-send it with the fixes? i40iw also
needs a trivial rebase
Thanks,
Jason