From: "Xiong, Jianxin" <jianxin.xiong@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"Doug Ledford" <dledford@redhat.com>,
	Leon Romanovsky <leon@kernel.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	Christian Koenig <christian.koenig@amd.com>,
	"Vetter, Daniel" <daniel.vetter@intel.com>
Subject: RE: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region
Date: Mon, 19 Oct 2020 05:28:40 +0000
Message-ID: <MW3PR11MB45552B6EC3A50E38483547ECE51E0@MW3PR11MB4555.namprd11.prod.outlook.com>
In-Reply-To: <20201017010437.GZ6219@nvidia.com>

> -----Original Message-----
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, October 16, 2020 6:05 PM
> To: Xiong, Jianxin <jianxin.xiong@intel.com>
> Cc: linux-rdma@vger.kernel.org; dri-devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Leon Romanovsky
> <leon@kernel.org>; Sumit Semwal <sumit.semwal@linaro.org>; Christian Koenig <christian.koenig@amd.com>; Vetter, Daniel
> <daniel.vetter@intel.com>
> Subject: Re: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region
> 
> On Sat, Oct 17, 2020 at 12:57:21AM +0000, Xiong, Jianxin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Friday, October 16, 2020 5:28 PM
> > > To: Xiong, Jianxin <jianxin.xiong@intel.com>
> > > Cc: linux-rdma@vger.kernel.org; dri-devel@lists.freedesktop.org;
> > > Doug Ledford <dledford@redhat.com>; Leon Romanovsky
> > > <leon@kernel.org>; Sumit Semwal <sumit.semwal@linaro.org>; Christian
> > > Koenig <christian.koenig@amd.com>; Vetter, Daniel
> > > <daniel.vetter@intel.com>
> > > Subject: Re: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as
> > > user memory region
> > >
> > > On Thu, Oct 15, 2020 at 03:02:45PM -0700, Jianxin Xiong wrote:
> > > > +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
> > > > +				   unsigned long addr, size_t size,
> > > > +				   int dmabuf_fd, int access,
> > > > +				   const struct ib_umem_dmabuf_ops *ops) {
> > > > +	struct dma_buf *dmabuf;
> > > > +	struct ib_umem_dmabuf *umem_dmabuf;
> > > > +	struct ib_umem *umem;
> > > > +	unsigned long end;
> > > > +	long ret;
> > > > +
> > > > +	if (check_add_overflow(addr, (unsigned long)size, &end))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	if (unlikely(PAGE_ALIGN(end) < PAGE_SIZE))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	if (unlikely(!ops || !ops->invalidate || !ops->update))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL);
> > > > +	if (!umem_dmabuf)
> > > > +		return ERR_PTR(-ENOMEM);
> > > > +
> > > > +	umem_dmabuf->ops = ops;
> > > > +	INIT_WORK(&umem_dmabuf->work, ib_umem_dmabuf_work);
> > > > +
> > > > +	umem = &umem_dmabuf->umem;
> > > > +	umem->ibdev = device;
> > > > +	umem->length = size;
> > > > +	umem->address = addr;
> > >
> > > addr here is offset within the dma buf, but this code does nothing with it.
> > >
> > The current code assumes 0 offset, and 'addr' is the nominal starting
> > address of the buffer. If this is to be changed to offset, then yes,
> > some more handling is needed as you mentioned below.
> 
> There is no such thing as 'nominal starting address'
> 
> If the user is to provide any argument it can only be offset and length.
> 
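If the interface is changed to take an offset and length into the dma-buf,
the range could be validated against the dma-buf size after dma_buf_get().
A rough sketch only; the 'offset' parameter and the placement here are
illustrative, not the posted code:

	dmabuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	/* illustrative: offset + size must stay within the dma-buf */
	if (check_add_overflow(offset, (unsigned long)size, &end) ||
	    end > dmabuf->size) {
		dma_buf_put(dmabuf);
		return ERR_PTR(-EINVAL);
	}
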
> > > Also, dma_buf_map_attachment() does not do the correct dma mapping
> > > for RDMA, eg it does not use ib_dma_map(). This is not a problem for
> > > mlx5 but it is troublesome to put in the core code.
> >
> > ib_dma_map() uses dma_map_single(), GPU drivers use dma_map_resource()
> > for dma_buf_map_attachment(). They belong to the same family, but take
> > different address type (kernel address vs MMIO physical address).
> > Could you elaborate what the problem could be for non-mlx5 HCAs?
> 
> They use the virtual dma ops which we intend to remove

We can check the dma device before attaching the dma-buf, so that the
ib_umem_dmabuf_get() call from such drivers would fail. Something like:

#ifdef CONFIG_DMA_VIRT_OPS
	if (device->dma_device->dma_ops == &dma_virt_ops)
		return ERR_PTR(-EINVAL);
#endif
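
For illustration, such a check could sit right after the ops validation in
ib_umem_dmabuf_get() as quoted above. The surrounding lines are repeated
from the patch; the comment wording and exact placement are just a sketch:

	if (unlikely(!ops || !ops->invalidate || !ops->update))
		return ERR_PTR(-EINVAL);

	/*
	 * dma_buf_map_attachment() does not go through the virtual DMA
	 * ops, so reject devices that still rely on dma_virt_ops.
	 */
#ifdef CONFIG_DMA_VIRT_OPS
	if (device->dma_device->dma_ops == &dma_virt_ops)
		return ERR_PTR(-EINVAL);
#endif

	umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL);
	if (!umem_dmabuf)
		return ERR_PTR(-ENOMEM);

The #ifdef keeps the check from referencing dma_virt_ops in kernels built
without CONFIG_DMA_VIRT_OPS.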
 
> 
> Jason


Thread overview: 8+ messages
2020-10-15 22:02 [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region Jianxin Xiong
2020-10-16 18:59 ` Jason Gunthorpe
2020-10-16 20:16   ` Xiong, Jianxin
2020-10-18 18:05   ` Daniel Vetter
2020-10-17  0:28 ` Jason Gunthorpe
2020-10-17  0:57   ` Xiong, Jianxin
2020-10-17  1:04     ` Jason Gunthorpe
2020-10-19  5:28       ` Xiong, Jianxin [this message]
