From: Jason Gunthorpe <jgg@ziepe.ca>
To: Jianxin Xiong <jianxin.xiong@intel.com>
Cc: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org,
	Doug Ledford <dledford@redhat.com>, Leon Romanovsky <leon@kernel.org>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Christian Koenig <christian.koenig@amd.com>,
	Daniel Vetter <daniel.vetter@intel.com>
Subject: Re: [PATCH v10 4/6] RDMA/mlx5: Support dma-buf based userspace memory region
Date: Thu, 12 Nov 2020 20:39:46 -0400
Message-ID: <20201113003946.GA244516@ziepe.ca> (raw)
In-Reply-To: <1605044477-51833-5-git-send-email-jianxin.xiong@intel.com>

On Tue, Nov 10, 2020 at 01:41:15PM -0800, Jianxin Xiong wrote:
> -static int mlx5_ib_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
> +int mlx5_ib_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
>  {
>  	struct mlx5_ib_dev *dev = mr->dev;
>  	struct device *ddev = dev->ib_dev.dev.parent;
> @@ -1255,6 +1267,10 @@ static int mlx5_ib_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
>  		cur_mtt->ptag =
>  			cpu_to_be64(rdma_block_iter_dma_address(&biter) |
>  				    MLX5_IB_MTT_PRESENT);
> +
> +		if (mr->umem->is_dmabuf && (flags & MLX5_IB_UPD_XLT_ZAP))
> +			cur_mtt->ptag = 0;
> +
>  		cur_mtt++;
>  	}
>
> @@ -1291,8 +1307,15 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
>  	int err;
>  	bool pg_cap = !!(MLX5_CAP_GEN(dev->mdev, pg));
>
> -	page_size =
> -		mlx5_umem_find_best_pgsz(umem, mkc, log_page_size, 0, iova);
> +	if (umem->is_dmabuf) {
> +		if ((iova ^ umem->address) & (PAGE_SIZE - 1))
> +			return ERR_PTR(-EINVAL);
> +		umem->iova = iova;
> +		page_size = PAGE_SIZE;
> +	} else {
> +		page_size = mlx5_umem_find_best_pgsz(umem, mkc, log_page_size,
> +						     0, iova);
> +	}

Urk, maybe this duplicated sequence should be in a function..

This will also collide with a rereg_mr bugfixing series that should be
posted soon..
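[The "duplicated sequence" Jason wants factored out is the dma-buf branch of the page-size selection. A minimal userspace sketch of what such a helper could check, with a hypothetical name and a fixed PAGE_SIZE standing in for the kernel's; it only models the alignment test, not the real mlx5_umem_find_best_pgsz() path:]

```c
#include <stdint.h>

/* Stand-in for the kernel's PAGE_SIZE in this userspace sketch. */
#define PAGE_SIZE 4096UL

/* Hypothetical helper for the dma-buf case in reg_create(): returns the
 * page size to use, or 0 when iova and the umem address disagree within
 * a page (the -EINVAL case in the patch). */
unsigned long dmabuf_mr_page_size(uint64_t iova, uint64_t address)
{
	if ((iova ^ address) & (PAGE_SIZE - 1))
		return 0;	/* caller maps this to -EINVAL */
	return PAGE_SIZE;
}
```

The XOR trick works because any two addresses that are congruent modulo PAGE_SIZE have identical low bits, so their XOR is zero in the page-offset mask.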
> +static void mlx5_ib_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
> +{
> +	struct ib_umem_dmabuf *umem_dmabuf = attach->importer_priv;
> +	struct mlx5_ib_mr *mr = umem_dmabuf->private;
> +
> +	dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
> +
> +	if (mr)

I don't think this 'if (mr)' test is needed anymore? I certainly
prefer it gone as it is kind of messy. I expect unmapping the dma to
ensure this function is not running, and won't run again.

> +/**
> + * mlx5_ib_fence_dmabuf_mr - Stop all access to the dmabuf MR
> + * @mr: to fence
> + *
> + * On return no parallel threads will be touching this MR and no DMA will be
> + * active.
> + */
> +void mlx5_ib_fence_dmabuf_mr(struct mlx5_ib_mr *mr)
> +{
> +	struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(mr->umem);
> +
> +	/* Prevent new page faults and prefetch requests from succeeding */
> +	xa_erase(&mr->dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key));
> +
> +	/* Wait for all running page-fault handlers to finish. */
> +	synchronize_srcu(&mr->dev->odp_srcu);
> +
> +	wait_event(mr->q_deferred_work, !atomic_read(&mr->num_deferred_work));
> +
> +	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
> +	mlx5_mr_cache_invalidate(mr);
> +	umem_dmabuf->private = NULL;
> +	ib_umem_dmabuf_unmap_pages(umem_dmabuf);
> +	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
> +
> +	if (!mr->cache_ent) {
> +		mlx5_core_destroy_mkey(mr->dev->mdev, &mr->mmkey);
> +		WARN_ON(mr->descs);
> +	}

I didn't check carefully, but are you sure this destroy_mkey should be
here??

Jason
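[The fence function quoted above relies on a strict ordering: cut off new users, drain in-flight users, then tear down under the reservation lock. A userspace model of that ordering only, with step() names standing in for the kernel primitives noted in the comments; none of this is the real driver code:]

```c
/* Records the order in which the fence steps run. */
const char *fence_log[8];
int fence_steps;

static void step(const char *what)
{
	fence_log[fence_steps++] = what;
}

/* Model of mlx5_ib_fence_dmabuf_mr(): each step stands in for the
 * kernel call named in its comment. */
void fence_dmabuf_mr_model(void)
{
	step("erase-mkey");	/* xa_erase: block new faults/prefetch */
	step("sync-srcu");	/* synchronize_srcu: drain fault handlers */
	step("wait-deferred");	/* wait_event on num_deferred_work */
	step("resv-lock");	/* dma_resv_lock */
	step("invalidate");	/* mlx5_mr_cache_invalidate */
	step("unmap");		/* ib_umem_dmabuf_unmap_pages */
	step("resv-unlock");	/* dma_resv_unlock */
}
```

The point of the ordering is that once the mkey is erased and SRCU has synchronized, no handler can observe the MR, so the unmap under the reservation lock races with nothing.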