From: "Christian König" <christian.koenig@amd.com>
To: Christoph Hellwig <hch@infradead.org>, Jason Gunthorpe <jgg@ziepe.ca>
Cc: David1.Zhou@amd.com, intel-gfx@lists.freedesktop.org,
dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
Logan Gunthorpe <logang@deltatee.com>,
linux-media@vger.kernel.org
Subject: Re: [Intel-gfx] [PATCH 1/6] lib/scatterlist: add sg_set_dma_addr() function
Date: Fri, 13 Mar 2020 14:33:37 +0100 [thread overview]
Message-ID: <0beef7ca-dd77-b442-5f45-f3a496189731@amd.com> (raw)
In-Reply-To: <20200313112139.GA4913@infradead.org>
On 13.03.20 12:21, Christoph Hellwig wrote:
> On Thu, Mar 12, 2020 at 11:19:28AM -0300, Jason Gunthorpe wrote:
>> The non-page scatterlist is also a big concern for RDMA as we have
>> drivers that want the page list, so even if we did as this series
>> contemplates I'd still have to split the drivers and create the
>> notion of a dma-only SGL.
> The drivers I looked at want a list of IOVA addresses, aligned to the
> device "page size". What other data do drivers want?
Well, for GPUs I have the requirement that those IOVA addresses allow
random access.

That's the reason why we currently convert the sg_table into linear
arrays of addresses and pages. To solve that, keeping the lengths in a
separate, optional array would be ideal for us.

But this is such a special use case that I'm not sure whether we want
to support it in the common framework or not.
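
To make that concrete, here is a rough sketch of the conversion I mean.
This is a hypothetical helper, not the actual amdgpu code; it assumes
the mapped segments are page-aligned and that num_pages matches the
mapping:

#include <linux/mm.h>
#include <linux/scatterlist.h>

/*
 * Expand a DMA-mapped sg_table into a flat, per-page array of DMA
 * addresses so the driver can look up any page's address in O(1).
 */
static dma_addr_t *flatten_sgt(struct sg_table *sgt, unsigned long num_pages)
{
	struct scatterlist *sg;
	dma_addr_t *addrs;
	unsigned long idx = 0;
	unsigned int i;

	addrs = kvmalloc_array(num_pages, sizeof(*addrs), GFP_KERNEL);
	if (!addrs)
		return NULL;

	/* Walk the DMA-mapped entries (nents), not the CPU pages. */
	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		/* Assumes each segment length is a multiple of PAGE_SIZE. */
		while (len >= PAGE_SIZE) {
			addrs[idx++] = addr;
			addr += PAGE_SIZE;
			len -= PAGE_SIZE;
		}
	}
	return addrs;
}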
> Except for the software protocol stack drivers, which of course need pages for the
> stack further down.
Yes, completely agree.

For the GPUs I will propose a patch to stop copying the pages from the
sg_table into our linear arrays and see if anybody starts to scream.
I don't think anyone will, but it's probably better to double-check.
Thanks,
Christian.
>
>> I haven't used bio_vecs before; do they support chaining like SGLs so
>> they can be very big? RDMA DMA-maps gigabytes of memory.
> bio_vecs themselves don't have chaining, but the bios built around them
> do. But each entry can map a huge pile. If needed we could use the
> same chaining scheme we use for scatterlists for bio_vecs as well, but
> let's see if we really end up needing that.
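
For reference, a bio_vec entry is just a (page, len, offset) triple, so
a single entry can describe an arbitrarily large physically contiguous
range. A minimal illustrative sketch; the helper name is invented:

#include <linux/bvec.h>

/*
 * Fill a single bio_vec covering a physically contiguous range that
 * may span many pages; chaining, where needed, lives in the bio.
 */
static void fill_bvec(struct bio_vec *bv, struct page *first_page,
		      unsigned int len, unsigned int offset)
{
	bv->bv_page   = first_page;	/* first page of the run */
	bv->bv_len    = len;		/* may be much larger than PAGE_SIZE */
	bv->bv_offset = offset;		/* offset into the first page */
}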
>
>> So I'm guessing the path forward is something like
>>
>> - Add some generic dma_sg data structure and helper
>> - Add dma mapping code to go from pages to dma_sg
> That has been on my todo list for a while. All the DMA consolidation
> work is to prepare for that, and we're finally getting close.
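
A hypothetical sketch of what such a generic dma_sg could look like;
the dma_sg/dma_sg_table names are invented here, and no such API exists
at this point:

#include <linux/types.h>

/*
 * A dma-only scatterlist entry: just the device-visible IOVA and a
 * length, with no struct page, so it can also describe P2P or VRAM
 * ranges that have no page backing.
 */
struct dma_sg {
	dma_addr_t addr;	/* device-visible IOVA */
	unsigned int len;	/* length in bytes */
};

struct dma_sg_table {
	struct dma_sg *sgl;	/* array of entries */
	unsigned int nents;	/* number of entries */
};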
Thread overview: 25+ messages
2020-03-11 13:51 [Intel-gfx] P2P for DMA-buf Christian König
2020-03-11 13:51 ` [Intel-gfx] [PATCH 1/6] lib/scatterlist: add sg_set_dma_addr() function Christian König
2020-03-11 15:28 ` Christoph Hellwig
2020-03-12 10:14 ` Christian König
2020-03-12 10:19 ` Christoph Hellwig
2020-03-12 10:31 ` Christian König
2020-03-12 10:47 ` Christoph Hellwig
2020-03-12 11:02 ` Christian König
[not found] ` <20200312141928.GK31668@ziepe.ca>
2020-03-12 15:39 ` Christian König
2020-03-12 16:13 ` Logan Gunthorpe
2020-03-13 11:21 ` Christoph Hellwig
2020-03-13 13:33 ` Christian König [this message]
[not found] ` <20200313121742.GZ31668@ziepe.ca>
2020-03-16 8:56 ` Christoph Hellwig
2020-03-16 9:41 ` Christian König
2020-03-16 9:52 ` Christoph Hellwig
2020-03-11 13:51 ` [Intel-gfx] [PATCH 2/6] dma-buf: add peer2peer flag Christian König
2020-03-11 13:51 ` [Intel-gfx] [PATCH 3/6] drm/amdgpu: note that we can handle peer2peer DMA-buf Christian König
2020-03-11 13:51 ` [Intel-gfx] [PATCH 4/6] drm/amdgpu: add checks if DMA-buf P2P is supported Christian König
[not found] ` <20200311140415.GB31668@ziepe.ca>
2020-03-11 14:33 ` Christian König
[not found] ` <20200311143835.GD31668@ziepe.ca>
2020-03-11 14:43 ` Christian König
2020-03-11 13:51 ` [Intel-gfx] [PATCH 5/6] drm/amdgpu: add support for exporting VRAM using DMA-buf v2 Christian König
2020-03-11 15:08 ` Alex Deucher
2020-03-11 13:51 ` [Intel-gfx] [PATCH 6/6] drm/amdgpu: improve amdgpu_gem_info debugfs file Christian König
2020-03-11 18:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/6] lib/scatterlist: add sg_set_dma_addr() function Patchwork
2020-03-11 18:34 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork