From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
	"Jason Ekstrand" <jason@jlekstrand.net>
Cc: "moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [RFC] Add DMA_RESV_USAGE flags
Date: Mon, 17 May 2021 22:15:42 +0200	[thread overview]
Message-ID: <CAKMK7uF=y44e9-0-4MBj3jRBdCqMaLgKutTMeBWCbySRnPR4KQ@mail.gmail.com> (raw)
In-Reply-To: <5a3e9500-9d6b-a865-5385-fde43da2bf66@gmail.com>

On Mon, May 17, 2021 at 9:38 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> > On 17.05.21 17:04, Daniel Vetter wrote:
> > On Mon, May 17, 2021 at 04:11:18PM +0200, Christian König wrote:
> >> We had a long-standing problem in amdgpu where buffers exported to
> >> user drivers by DMA-buf serialize all command submissions using them.
> >>
> >> In other words, we can't compose the buffer with different engines and
> >> then send it to another driver for display or further processing.
> >>
> >> This was added to work around the fact that i915 didn't want to wait
> >> for shared fences in the dma_resv objects before displaying a buffer.
> >>
> >> Since this problem is now causing issues with Vulkan we need to find a
> >> better solution for that.
> >>
> >> The patch set here tries to do this by adding a usage flag to the
> >> shared fences, noting when and how they should participate in implicit
> >> synchronization.
> > So the way this is fixed in every other vulkan driver is that vulkan
> > userspace sets flags in the CS ioctl when it wants to synchronize with
> > implicit sync. This gets you mostly there. Last time I checked amdgpu
> > isn't doing this, and yes that's broken.
>
> And exactly that is a really bad approach as far as I can see. The
> Vulkan stack on top simply doesn't know when to set this flag during CS.

Adding Jason for the Vulkan side of things, because this isn't how I
understand it works.

But purely from a kernel pov your patches are sketchy for two reasons:

- they reinstate the amdgpu special case of not setting exclusive fences

- they only fix the single special case of i915 display, nothing else

That's not how a cross-driver interface works. And if you'd do this
properly, you'd be back to all the same sync fun you originally had,
with all the same fallout.
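
Just so we're talking about the same contract: on the importer side,
every other driver relies on roughly the sketch below (written against
the current dma_resv helpers that this very series is in the middle of
renaming, error handling elided, so treat it as illustrative only):

#include <linux/dma-resv.h>
#include <linux/sched.h>

/*
 * Readers only wait for the exclusive (write) fence, writers wait for
 * everything.  If an exporter never sets the exclusive fence, every
 * other driver's read path silently stops synchronizing against it.
 */
static long wait_for_implicit_fences(struct dma_resv *resv, bool write)
{
	/* wait_all == true also waits for all the shared fences */
	return dma_resv_wait_timeout_rcu(resv, write, true,
					 MAX_SCHEDULE_TIMEOUT);
}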

> That's also the reason the Valve guys came up with a solution where each
> BO gets a flag for explicit sync, but that only works for exports and
> not for imports.
>
> > i915 and IIRC msm have explicit flags for this, and panfrost was designed
> > to support this correctly from the start (also with flags, I think). That's
> > at least what I remember from all the discussions at XDC and on #dri-devel,
> > but I didn't check the code again to give you the list of uapi flags you
> > need for each driver.
> >
> > The other piece is making sure you're only picking up implicit fences when
> > you should, and not any later ones, for which Jason has a solution:
> >
> > https://lore.kernel.org/dri-devel/20210317221940.2146688-1-jason@jlekstrand.net/
>
> Yes, I helped with that as well. But I think that this is just another
> workaround without really addressing the underlying problem.
>
> > If amdgpu isn't using those, then you will suffer from
> > over-synchronization in vulkan and pay a price. The entire point of vulkan
> > is that you pick up sync points very explicitly, and we also need to have
> > very explicit uapi for userspace to pick up/set the implicit fences.
> >
> > Trying to paper over this with more implicit magic is imo just wrong, and
> > definitely not the long term explicit sync model we want.
>
> I completely disagree.
>
> In my opinion the implicit sync model we have for dma_resv currently is
> just not well designed at all, since it always requires cooperation from
> userspace.
>
> In other words you need to know when to enable implicit sync in
> userspace and that information is simply not present all of the time.
>
> What we have done here is just keep the old reader/writer flags that
> i915, radeon and nouveau once had, and push that out to everybody else,
> assuming that everybody would follow along without documenting the
> actual rules of engagement you need to follow here.
>
> That was a really big mistake and we should try to fix that sooner or
> later. The only other clean alternative I see is to use a flag on the
> exporter to tell the importer whether it should sync to shared fences or not.
>
> In addition to that, I'm perfectly fine with implicit sync. Explicit sync
> certainly has some use cases as well, but I don't see it as an absolute
> advantage over the implicit model.

Ok, this stops making sense. Somehow you claim userspace doesn't know
when to sync, but somehow the kernel does? By guessing, and mostly
getting it wrong, except for the one case that you benchmarked?

Aside from silly userspace which exports a buffer as a dma-buf but
then never imports it anywhere else, there isn't a case I know of
where the kernel actually knows more than userspace. But there are
lots of cases where the kernel definitely knows less, especially if
userspace doesn't tell it what's going on with each rendering and
buffer.

So here are the two things you need to make this work like every other driver:

1. A way to set the explicit fence on a buffer. The CS ioctl is
perfectly fine for this, but it can also be separate. Userspace uses
this only a) on shared buffers and b) when there's a flush/swap on
that shared buffer, not when rendering any of the interim stuff, since
that only leads to oversync. Anything non-shared is handled explicitly
in userspace (at least for modern-ish drivers). This is the only thing
that ever sets an exclusive fence (aside from ttm moving buffers
around ofc). There's a sketch of the i915 flavour of this below.

2. A way to sync with the implicit fences, either all of them (for
upcoming write access) or just the write fence (for read access). At
first we thought it would be good enough to do this in the CS ioctl,
but that's a wee bit too late, hence the patches from Jason. My
understanding is that vulkan converts this into a vk syncobj/fence of
some sort, so you really can't make this more explicit and intentional
than that. Jason's proposed uapi is sketched below as well.
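
For 1, from memory the i915 flavour looks roughly like this sketch
(double-check i915_drm.h for the exact semantics, it's only meant to
show the shape of the uapi):

#include <drm/i915_drm.h>

/*
 * Per-object flags in the execbuf ioctl control implicit sync per
 * buffer: only the shared buffer that's about to be flushed/swapped is
 * marked as written, which makes the batch fence the exclusive fence;
 * everything else opts out of implicit sync entirely.
 */
static void mark_bo_for_implicit_sync(struct drm_i915_gem_exec_object2 *obj,
				      bool shared_and_flushing)
{
	if (shared_and_flushing)
		obj->flags |= EXEC_OBJECT_WRITE;  /* sets the exclusive fence */
	else
		obj->flags |= EXEC_OBJECT_ASYNC;  /* skip implicit sync waits */
}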
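
And for 2, Jason's series boils down to roughly this on the userspace
side (uapi names as in the RFC, so treat them as provisional until it
lands); the returned sync_file fd is what vulkan then wraps in a
semaphore/fence:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/*
 * Pull the implicit fences out of a dma-buf as a sync_file fd, as
 * proposed in Jason's EXPORT_SYNC_FILE series.  Upcoming write access
 * needs all fences, read access only the exclusive/write fence.
 */
static int export_implicit_fences(int dmabuf_fd, bool for_write)
{
	struct dma_buf_export_sync_file args = {
		.flags = for_write ? DMA_BUF_SYNC_WRITE : DMA_BUF_SYNC_READ,
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args))
		return -1;

	return args.fd;
}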

The kernel doesn't have the slightest idea when any of this happens,
so you have to have explicit uapi for it. Trying to fake it in the
kernel just doesn't work.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

