* [PATCH rdma-rc] RDMA/mlx5: Use different doorbell memory for different processes
@ 2021-06-03 13:18 Leon Romanovsky
From: Leon Romanovsky @ 2021-06-03 13:18 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe; +Cc: Mark Zhang, linux-rdma
From: Mark Zhang <markzhang@nvidia.com>
In a fork scenario, the parent and child can have the same virtual address.
That causes the list_for_each_entry search to return the same doorbell
location for all processes.
This patch takes the mm_struct into consideration during the search, to
make sure that different doorbell memory is used for different processes.
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/doorbell.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/doorbell.c b/drivers/infiniband/hw/mlx5/doorbell.c
index 40226c75406a..0333b0fe5d8a 100644
--- a/drivers/infiniband/hw/mlx5/doorbell.c
+++ b/drivers/infiniband/hw/mlx5/doorbell.c
@@ -41,6 +41,7 @@ struct mlx5_ib_user_db_page {
struct ib_umem *umem;
unsigned long user_virt;
int refcnt;
+ struct mm_struct *mm;
};
int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, unsigned long virt,
@@ -52,7 +53,8 @@ int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, unsigned long virt,
mutex_lock(&context->db_page_mutex);
list_for_each_entry(page, &context->db_page_list, list)
- if (page->user_virt == (virt & PAGE_MASK))
+ if ((current->mm == page->mm) &&
+ (page->user_virt == (virt & PAGE_MASK)))
goto found;
page = kmalloc(sizeof(*page), GFP_KERNEL);
@@ -71,6 +73,8 @@ int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, unsigned long virt,
kfree(page);
goto out;
}
+ mmgrab(current->mm);
+ page->mm = current->mm;
list_add(&page->list, &context->db_page_list);
@@ -91,6 +95,7 @@ void mlx5_ib_db_unmap_user(struct mlx5_ib_ucontext *context, struct mlx5_db *db)
if (!--db->u.user_page->refcnt) {
list_del(&db->u.user_page->list);
+ mmdrop(db->u.user_page->mm);
ib_umem_release(db->u.user_page->umem);
kfree(db->u.user_page);
}
--
2.31.1
* Re: [PATCH rdma-rc] RDMA/mlx5: Use different doorbell memory for different processes
From: Jason Gunthorpe @ 2021-06-03 17:39 UTC (permalink / raw)
To: Leon Romanovsky; +Cc: Doug Ledford, Mark Zhang, linux-rdma
On Thu, Jun 03, 2021 at 04:18:03PM +0300, Leon Romanovsky wrote:
> From: Mark Zhang <markzhang@nvidia.com>
>
> In a fork scenario, the parent and child can have the same virtual address.
> That causes the list_for_each_entry search to return the same doorbell
> location for all processes.
>
> This patch takes the mm_struct into consideration during the search, to
> make sure that different doorbell memory is used for different processes.
>
> Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Mark Zhang <markzhang@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> drivers/infiniband/hw/mlx5/doorbell.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
Applied to for-rc, thanks
Jason