From: Daniel Vetter <daniel@ffwll.ch>
To: "Koenig, Christian" <Christian.Koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>,
	Liam Mark <lmark@codeaurora.org>,
	Linaro MM SIG <linaro-mm-sig@lists.linaro.org>,
	"open list:DMA BUFFER SHARING FRAMEWORK" 
	<linux-media@vger.kernel.org>,
	DRI mailing list <dri-devel@lists.freedesktop.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 01/12] dma-buf: add dynamic caching of sg_table
Date: Thu, 23 May 2019 13:30:35 +0200	[thread overview]
Message-ID: <CAKMK7uFghnEHjyOaJyui+8g9gJahhnhNWPruPMqPr6VAN1UvsA@mail.gmail.com> (raw)
In-Reply-To: <cbb1f3a1-6cd1-c231-f1b2-8f20a6bad067@amd.com>

On Thu, May 23, 2019 at 1:21 PM Koenig, Christian
<Christian.Koenig@amd.com> wrote:
>
> > On 22.05.19 at 20:30, Daniel Vetter wrote:
> > [SNIP]
> >> Well, it seems you are making incorrect assumptions about the cache
> >> maintenance of DMA-buf here.
> >>
> >> At least for all DRM devices I'm aware of, mapping/unmapping an
> >> attachment does *NOT* have any cache maintenance implications.
> >>
> >> E.g. the use case you describe above would certainly fail with amdgpu,
> >> radeon, nouveau and i915 because mapping a DMA-buf doesn't stop the
> >> exporter from reading/writing to that buffer (just the opposite actually).
> >>
> >> All of them assume perfectly coherent access to the underlying memory.
> >> As far as I know there are no documented cache maintenance
> >> requirements for DMA-buf.
> > I think it is documented. It's just that on x86 we ignore it,
> > because the dma-api pretends there's never a need for cache flushing
> > on x86 and that everything snoops the cpu caches. Which hasn't been
> > true for over 20 years, since AGP happened. The actual rules for x86
> > dma-buf are very much ad-hoc (and we occasionally reapply some
> > duct-tape when cacheline noise shows up somewhere).
>
> Well I strongly disagree on this. Even on x86, at least AMD GPUs are
> not fully coherent.
>
> For example you have the texture cache and the HDP read/write cache. So
> if both amdgpu and i915 wrote to the same buffer at the same time we
> would get corrupted data as well.
>
> The key point is that it is NOT DMA-buf in its map/unmap calls that
> defines the coherency, but rather the reservation object and its
> attached dma_fence instances.
>
> So for example, as long as an exclusive reservation object fence is
> still not signaled, I can't assume that all caches are flushed and so
> can't start my own operation/access to the data in question.

The dma-api doesn't flush device caches, ever. It might flush some
iommu caches or some other bus cache somewhere in between, but it will
never make sure that multiple devices don't trample on each other. For
that you need something else (like the reservation object, though I
don't think that's really used much outside of drm).
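
Roughly what I mean by "something else", as a sketch only: an importer
waiting for the exclusive fence before it starts its own access. This
assumes the current reservation_object API (the dma_resv rename is
pending); the helper name and the timeout value are made up.

#include <linux/reservation.h>
#include <linux/jiffies.h>

/* Hypothetical helper: block until the buffer's exclusive fence has
 * signaled, i.e. until the previous writer (and whatever device cache
 * flushing its driver does on completion) has finished.
 */
static int wait_for_exclusive_fence(struct reservation_object *resv)
{
        long ret;

        /* wait_all = false: only the exclusive fence, interruptible */
        ret = reservation_object_wait_timeout_rcu(resv, false, true,
                                                  msecs_to_jiffies(5000));
        if (ret == 0)
                return -ETIMEDOUT;
        if (ret < 0)
                return ret;
        return 0;
}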

The other bit is the coherent vs. non-coherent thing, which in dma-api
land is just about whether cpu/device access needs extra flushing or
not. Now in practice that extra flushing is always only on the cpu
side, i.e. whether cpu writes/reads go through the cpu cache, and
whether device reads/writes snoop the cpu caches. That's (afaik at
least, and in practice, not the abstract spec) the _only_ thing the
dma-api's cache maintenance does. For 0-copy that's all completely
irrelevant, because as soon as you pick a mode where you need to do
manual cache management you've screwed up; it's not really 0-copy
anymore.
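
For reference, this is the kind of manual cache management a streaming
(non-coherent) dma-api mapping forces on you, and exactly what makes it
not 0-copy anymore in practice. Just a sketch using the standard dma-api
calls, with an illustrative function name:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int cpu_fill_then_dma(struct device *dev, struct sg_table *sgt)
{
        /* mapping hands ownership of the memory to the device */
        if (dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL) == 0)
                return -ENOMEM;

        /* hand it back to the cpu before touching it from the cpu */
        dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
        /* ... cpu writes the buffer here ... */

        /* flush cpu caches / hand ownership back before the device DMAs */
        dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
        /* ... device DMA runs here ... */

        dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
        return 0;
}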

The other hilarious bit is that on x86 we let userspace (at least with
i915) do that cache management, so the kernel doesn't even have a
clue. I think what we need in dma-buf (and dma-api people will scream
about the "abstraction leak") is some notion of whether an importer
should snoop or not (or whether that device always uses non-snooped or
snooped transactions). But that would shred the illusion the dma-api
tries to keep up that all that matters is whether a mapping is
coherent from the cpu's pov or not, since you can achieve coherence
both with a cached cpu mapping + snooped transactions, or with a wc
cpu mapping and non-snooped transactions. Trying to add cache
management on top (which some dma-buf exporters do indeed attempt)
will be even worse.
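
Purely to illustrate the idea, the snoop notion could look something
like this - completely hypothetical, nothing of the sort exists in
dma-buf today, all names invented:

/* hypothetical: importer declares at attach time how its device
 * transactions behave, so the exporter can pick a matching cpu
 * mapping (cached vs. wc) instead of guessing.
 */
enum dma_buf_snoop_mode {
        DMA_BUF_SNOOP,          /* device access snoops cpu caches */
        DMA_BUF_NO_SNOOP,       /* device uses non-snooped transactions */
};

struct dma_buf_attach_snoop_info {
        enum dma_buf_snoop_mode snoop;
};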

Again, none of this is about preventing concurrent writes, or making
sure device caches are flushed correctly around batches.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


Thread overview: 42+ messages
2019-04-16 18:38 dynamic DMA-buf sharing between devices Christian König
2019-04-16 18:38 ` [PATCH 01/12] dma-buf: add dynamic caching of sg_table Christian König
2019-04-16 18:38 ` [PATCH 02/12] dma-buf: add dma_buf_(un)map_attachment_locked variants v3 Christian König
2019-05-27 10:56   ` Christian König
2019-04-16 18:38 ` [PATCH 03/12] dma-buf: lock the reservation object during (un)map_dma_buf v3 Christian König
2019-04-17 14:08   ` Daniel Vetter
2019-04-17 14:14     ` Christian König
2019-04-17 14:26       ` Daniel Vetter
2019-04-17 17:10         ` Christian König
2019-04-16 18:38 ` [PATCH 04/12] dma-buf: add optional invalidate_mappings callback v5 Christian König
2019-04-17 14:01   ` Daniel Vetter
2019-04-17 14:33     ` Daniel Vetter
2019-04-17 19:07   ` Daniel Vetter
2019-04-17 19:13     ` Christian König
2019-04-18  8:08       ` Daniel Vetter
2019-04-18  8:28         ` Koenig, Christian
2019-04-18  8:40           ` Daniel Vetter
2019-04-16 18:38 ` [PATCH 05/12] dma-buf: add explicit buffer pinning Christian König
2019-04-17 14:20   ` Daniel Vetter
2019-04-17 14:30     ` Daniel Vetter
2019-04-17 14:40       ` Daniel Vetter
2019-04-17 19:05         ` Daniel Vetter
2019-04-19 19:05   ` Alex Deucher
2019-04-16 18:38 ` [PATCH 06/12] drm: remove prime sg_table caching Christian König
2019-04-16 18:38 ` [PATCH 07/12] drm/ttm: remove the backing store if no placement is given Christian König
2019-04-16 18:38 ` [PATCH 08/12] drm/ttm: use the parent resv for ghost objects Christian König
2019-04-16 18:38 ` [PATCH 09/12] drm/amdgpu: add independent DMA-buf export v3 Christian König
2019-04-16 18:38 ` [PATCH 10/12] drm/amdgpu: add independent DMA-buf import v4 Christian König
2019-04-16 18:38 ` [PATCH 11/12] drm/amdgpu: add DMA-buf pin/unpin implementation Christian König
2019-04-16 18:38 ` [PATCH 12/12] drm/amdgpu: add DMA-buf invalidation callback v2 Christian König
2019-04-17 13:52 ` dynamic DMA-buf sharing between devices Chunming Zhou
2019-04-17 13:59   ` Christian König
2019-04-17 14:14     ` Chunming Zhou
2019-04-18  9:13 ` Daniel Vetter
2019-04-18 10:57   ` Christian König
2019-04-27  0:01 ` [PATCH 01/12] dma-buf: add dynamic caching of sg_table Liam Mark
2019-05-22 16:17   ` Sumit Semwal
2019-05-22 17:27     ` Christian König
2019-05-22 18:30       ` Daniel Vetter
2019-05-23 11:21         ` Koenig, Christian
2019-05-23 11:30           ` Daniel Vetter [this message]
2019-05-23 11:32             ` Daniel Vetter
