* [PATCH rdma-rc] RDMA/mlx5: Prevent prefetch from racing with implicit destruction
From: Leon Romanovsky @ 2020-07-19  6:54 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Jason Gunthorpe, Artemy Kovalyov, linux-rdma

From: Jason Gunthorpe <jgg@nvidia.com>

Prefetch work in mlx5_ib_prefetch_mr_work can be queued and still run
concurrently with destruction of the implicit MR. The num_deferred_work
counter was intended to serialize this, but there is a race:

       CPU0                                          CPU1

    mlx5_ib_free_implicit_mr()
      xa_erase(odp_mkeys)
      synchronize_srcu()
      __xa_erase(implicit_children)
                                      mlx5_ib_prefetch_mr_work()
                                        pagefault_mr()
                                         pagefault_implicit_mr()
                                          implicit_get_child_mr()
                                           xa_cmpxchg()
                                        atomic_dec_and_test(num_deferred_work)
      wait_event(imr->q_deferred_work)
      ib_umem_odp_release(odp_imr)
        kfree(odp_imr)

At this point in mlx5_ib_free_implicit_mr() the implicit_children
xarray is supposed to remain empty forever, so that
destroy_unused_implicit_child_mr() and related functions are not
running and never will be.

Since it is not empty, the destroy_unused_implicit_child_mr() flow ends
up touching deallocated memory, as mlx5_ib_free_implicit_mr() has
already torn down the imr parent.
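
In condensed form, the use-after-free has roughly this shape (a
hypothetical sketch with simplified stand-in types, not the actual
driver call chain):

	/* Sketch only: stand-ins for the driver structures */
	struct parent {
		atomic_t num_deferred_work;
		wait_queue_head_t q_deferred_work;
	};

	struct child {
		struct parent *imr;	/* parent implicit MR */
	};

	/* Deferred work running on CPU1 after CPU0 freed the parent */
	static void child_destroy_deferred(struct child *mr)
	{
		struct parent *imr = mr->imr;

		/* imr may already be gone via kfree(odp_imr): UAF */
		if (atomic_dec_and_test(&imr->num_deferred_work))
			wake_up(&imr->q_deferred_work);
	}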

The solution is to flush out the prefetch wq by driving num_deferred_work
to zero after creation of new prefetch work is blocked.
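
Continuing the sketch above, the fixed shutdown ordering looks roughly
like this (hypothetical names; the actual code is in the diff below):

	static void parent_destroy(struct parent *imr)
	{
		/*
		 * Lookup paths were already removed and the SRCU grace
		 * period has elapsed, so no new prefetch work can be
		 * queued against imr.
		 */

		/* Drain work queued before the lookup was removed. */
		wait_event(imr->q_deferred_work,
			   !atomic_read(&imr->num_deferred_work));

		/*
		 * Only now is implicit_children guaranteed to stay empty,
		 * so the children can be torn down and imr freed.
		 */
		kfree(imr);
	}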

Fixes: 5256edcb98a1 ("RDMA/mlx5: Rework implicit ODP destroy")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/odp.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 7d2ec9ee5097..1ab676b66894 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -601,6 +601,23 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
 	 */
 	synchronize_srcu(&dev->odp_srcu);
 
+	/*
+	 * All work on the prefetch list must be completed; xa_erase()
+	 * has prevented new work from being created.
+	 */
+	wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
+
+	/*
+	 * At this point it is forbidden for any other thread to enter
+	 * pagefault_mr() on this imr. It is already forbidden to call
+	 * pagefault_mr() on an implicit child. Due to this, additions to
+	 * implicit_children are prevented.
+	 */
+
+	/*
+	 * Block destroy_unused_implicit_child_mr() from incrementing
+	 * num_deferred_work.
+	 */
 	xa_lock(&imr->implicit_children);
 	xa_for_each (&imr->implicit_children, idx, mtt) {
 		__xa_erase(&imr->implicit_children, idx);
@@ -609,9 +626,8 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
 	xa_unlock(&imr->implicit_children);
 
 	/*
-	 * num_deferred_work can only be incremented inside the odp_srcu, or
-	 * under xa_lock while the child is in the xarray. Thus at this point
-	 * it is only decreasing, and all work holding it is now on the wq.
+	 * Wait for any concurrent destroy_unused_implicit_child_mr() to
+	 * complete.
 	 */
 	wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
 
-- 
2.26.2



* Re: [PATCH rdma-rc] RDMA/mlx5: Prevent prefetch from racing with implicit destruction
From: Jason Gunthorpe @ 2020-07-21 16:59 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: Doug Ledford, Artemy Kovalyov, linux-rdma

On Sun, Jul 19, 2020 at 09:54:35AM +0300, Leon Romanovsky wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> 
> Prefetch work in mlx5_ib_prefetch_mr_work can be queued and still run
> concurrently with destruction of the implicit MR. The num_deferred_work
> counter was intended to serialize this, but there is a race:
> 
>        CPU0                                          CPU1
> 
>     mlx5_ib_free_implicit_mr()
>       xa_erase(odp_mkeys)
>       synchronize_srcu()
>       __xa_erase(implicit_children)
>                                       mlx5_ib_prefetch_mr_work()
>                                         pagefault_mr()
>                                          pagefault_implicit_mr()
>                                           implicit_get_child_mr()
>                                            xa_cmpxchg()
>                                         atomic_dec_and_test(num_deferred_work)
>       wait_event(imr->q_deferred_work)
>       ib_umem_odp_release(odp_imr)
>         kfree(odp_imr)
> 
> At this point in mlx5_ib_free_implicit_mr() the implicit_children
> xarray is supposed to remain empty forever, so that
> destroy_unused_implicit_child_mr() and related functions are not
> running and never will be.
> 
> Since it is not empty, the destroy_unused_implicit_child_mr() flow ends
> up touching deallocated memory, as mlx5_ib_free_implicit_mr() has
> already torn down the imr parent.
> 
> The solution is to flush out the prefetch wq by driving num_deferred_work
> to zero after creation of new prefetch work is blocked.
> 
> Fixes: 5256edcb98a1 ("RDMA/mlx5: Rework implicit ODP destroy")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  drivers/infiniband/hw/mlx5/odp.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)

Applied to for-rc, thanks

Jason

