From: Russell King - ARM Linux <>
To: Marek Szyprowski <>
Cc: Shuah Khan <>
Subject: Re: [PATCH] arm: dma: fix sharing of coherent DMA memory without struct page
Date: Fri, 14 Apr 2017 10:46:44 +0100	[thread overview]
Message-ID: <>
In-Reply-To: <>

On Fri, Apr 14, 2017 at 09:56:07AM +0200, Marek Szyprowski wrote:
> >>This would be however quite large task, especially taking into account
> >>all current users of DMA-buf framework...
> >Yeah it will be a large task.
> Maybe once scatterlist are switched to pfns, changing dmabuf internal
> memory representation to pfn array might be much easier.

Switching to a PFN array won't work either as we have no cross-arch
way to translate PFNs to a DMA address and vice versa.  Yes, we have
them in ARM, but they are an _implementation detail_ of ARM's
DMA API support, they are not for use by drivers.

So, the very first problem that needs solving is this:

  How do we go from a coherent DMA allocation for device X to a set
  of DMA addresses for device Y.

Essentially, we need a way of remapping the DMA buffer for use with
another device, and returning a DMA address suitable for that device.
This could well mean that we need to deal with setting up an IOMMU
mapping.  My guess is that this needs to happen at the DMA coherent
API level - the DMA coherent API needs to be augmented with support
for this.  I'll call this "DMA coherent remap".

We then need to think about how to pass this through the dma-buf API.
dma_map_sg() is done by the exporter, who should know what kind of
memory is being exported.  The exporter can avoid calling dma_map_sg()
if it knows in advance that it is exporting DMA coherent memory.
Instead, the exporter can simply create a scatterlist with the DMA
address and DMA length prepopulated with the results of the DMA
coherent remap operation above.

What the scatterlist can't carry in this case is a set of valid
struct page pointers, and an importer must not walk the scatterlist
expecting to get at the virtual addresses or struct page pointers.

On the mmap() side of things, remember that DMA coherent allocations
may require special mapping into userspace, which can only be done
by the DMA coherent mmap support.  kmap etc. will also need to behave
differently.  So it probably makes sense for DMA coherent dma-buf
exports to use a completely separate set of dma_buf_ops from the
streaming version.

I think this is the easiest approach to solving the problem without
needing massive driver changes all over the kernel.

RMK's Patch system:
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to


Thread overview: 11+ messages
     [not found] <>
2017-04-05 16:02 ` Shuah Khan
2017-04-05 23:14   ` Russell King - ARM Linux
2017-04-10 14:52     ` Shuah Khan
2017-04-06  4:11   ` kbuild test robot
2017-04-06 12:01   ` Marek Szyprowski
2017-04-10 22:50     ` Shuah Khan
2017-04-14  7:56       ` Marek Szyprowski
2017-04-14  9:46         ` Russell King - ARM Linux [this message]
2017-04-17  1:10           ` Shuah Khan
2017-04-17 10:22             ` Russell King - ARM Linux
2017-04-19 23:38           ` Shuah Khan
