From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@infradead.org>
Cc: "Christian König" <christian.koenig@amd.com>, David1.Zhou@amd.com, daniel@ffwll.ch, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org, intel-gfx@lists.freedesktop.org, "Logan Gunthorpe" <logang@deltatee.com>
Subject: Re: [PATCH 1/6] lib/scatterlist: add sg_set_dma_addr() function
Date: Thu, 12 Mar 2020 11:19:28 -0300
Message-ID: <20200312141928.GK31668@ziepe.ca>
In-Reply-To: <20200312104729.GA26031@infradead.org>

On Thu, Mar 12, 2020 at 03:47:29AM -0700, Christoph Hellwig wrote:
> On Thu, Mar 12, 2020 at 11:31:35AM +0100, Christian König wrote:
> > But how should we then deal with all the existing interfaces which
> > already take a scatterlist/sg_table?
> >
> > The whole DMA-buf design and a lot of drivers are built around
> > scatterlist/sg_table, and to me that actually makes quite a lot of
> > sense.
>
> Replace them with a saner interface that doesn't take a scatterlist.
> At the very least for new functionality like peer-to-peer DMA, but
> especially this code would also benefit from a general move away
> from the scatterlist.

If dma-buf can do P2P, I'd like to see support for consuming a dmabuf
in RDMA. Looking at how: there is an existing sgl-based path starting
from get_user_pages through dma map to the drivers (ib_umem).

I can replace the driver part with something else (dma_sg), but not
until we get a way to DMA map pages directly into that something else.

The non-page scatterlist is also a big concern for RDMA, as we have
drivers that want the page list, so even if we did as this series
contemplates I'd still have to split the drivers and create the notion
of a dma-only SGL.

> > I mean we could come up with a new structure for this, but to me
> > that just looks like reinventing the wheel. Especially since drivers
> > need to be able to handle both I/O to system memory and I/O to PCIe
> > BARs.
>
> The structure for holding the struct page side of the scatterlist is
> called struct bio_vec, so far mostly used by the block and networking
> code.

I haven't used bio_vecs before; do they support chaining like the SGL
so they can be very big? RDMA dma maps gigabytes of memory.

> The structure for holding dma addresses doesn't really exist
> in a generic form, but would be an array of these structures:
>
> struct dma_sg {
> 	dma_addr_t addr;
> 	u32 len;
> };

Same question: RDMA needs to represent gigabytes of pages in a DMA
list, so we will need some generic way to handle that. I suspect GPU
has a similar need? Can it be accommodated in some generic dma_sg?

So I'm guessing the path forward is something like:

- Add some generic dma_sg data structure and helpers
- Add dma mapping code to go from pages to dma_sg
- Rework RDMA to use dma_sg and the new dma mapping code
- Rework dmabuf to support dma mapping to a dma_sg
- Rework GPU drivers to use dma_sg
- Teach p2pdma to generate a dma_sg from a BAR page list
- This series?

Jason
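To make the chaining question concrete, here is one possible shape for a generic dma-only list: the `struct dma_sg` entry is as proposed above by Christoph, while the table and chaining around it (`dma_sg_table`, the `next` pointer, `dma_sg_total_len`) are hypothetical names sketched here only to illustrate how gigabyte-scale mappings could avoid a single huge allocation, analogous to sg_table chaining. This is a userspace-compilable sketch, not kernel code.

```c
/* Sketch of a generic DMA-only scatterlist. struct dma_sg matches the
 * proposal in this thread; dma_sg_table, its next pointer, and
 * dma_sg_total_len() are hypothetical illustrations. */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;   /* stand-in for the kernel type */

/* The proposed entry: just a mapped DMA address and a length. */
struct dma_sg {
	dma_addr_t addr;
	uint32_t len;
};

/* Hypothetical chainable table so one logical mapping can span
 * gigabytes across several moderately sized segment arrays. */
struct dma_sg_table {
	struct dma_sg *sgl;        /* entries in this segment */
	unsigned int nents;        /* number of entries used */
	struct dma_sg_table *next; /* next chained segment, or NULL */
};

/* Walk every chained segment and sum the total mapped length. */
static uint64_t dma_sg_total_len(const struct dma_sg_table *t)
{
	uint64_t total = 0;

	for (; t; t = t->next)
		for (unsigned int i = 0; i < t->nents; i++)
			total += t->sgl[i].len;
	return total;
}
```

A consumer such as an RDMA driver would then iterate the chain instead of a single flat array, which is the property the "gigabytes of memory" concern above is really about.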