dri-devel.lists.freedesktop.org archive mirror
From: "Michel Dänzer" <michel@daenzer.net>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	"Christian König" <ckoenig.leichtzumerken@gmail.com>,
	"Jason Ekstrand" <jason@jlekstrand.net>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [RFC] Add DMA_RESV_USAGE flags
Date: Mon, 31 May 2021 14:49:19 +0200	[thread overview]
Message-ID: <ee6e6934-4c77-5377-58d1-a80208fc3eaa@daenzer.net> (raw)
In-Reply-To: <YKZvx0UXYnJrfVw4@phenom.ffwll.local>

On 2021-05-20 4:18 p.m., Daniel Vetter wrote:
> On Thu, May 20, 2021 at 10:13:38AM +0200, Michel Dänzer wrote:
>> On 2021-05-20 9:55 a.m., Daniel Vetter wrote:
>>> On Wed, May 19, 2021 at 5:48 PM Michel Dänzer <michel@daenzer.net> wrote:
>>>>
>>>> On 2021-05-19 5:21 p.m., Jason Ekstrand wrote:
>>>>> On Wed, May 19, 2021 at 5:52 AM Michel Dänzer <michel@daenzer.net> wrote:
>>>>>>
>>>>>> On 2021-05-19 12:06 a.m., Jason Ekstrand wrote:
>>>>>>> On Tue, May 18, 2021 at 4:17 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>>>>>
>>>>>>>> On Tue, May 18, 2021 at 7:40 PM Christian König
>>>>>>>> <ckoenig.leichtzumerken@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Am 18.05.21 um 18:48 schrieb Daniel Vetter:
>>>>>>>>>> On Tue, May 18, 2021 at 2:49 PM Christian König
>>>>>>>>>> <ckoenig.leichtzumerken@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> And as long as we are all inside amdgpu we also don't have any oversync,
>>>>>>>>>>> the issue only happens when we share dma-bufs with i915 (radeon and
>>>>>>>>>>> AFAIK nouveau does the right thing as well).
>>>>>>>>>> Yeah because then you can't use the amdgpu dma_resv model anymore and
>>>>>>>>>> have to use the one atomic helpers use. Which is also the one that
>>>>>>>>>> e.g. Jason is threatening to bake in as uapi with his dma_buf ioctl,
>>>>>>>>>> so as soon as that lands and someone starts using it, something has to
>>>>>>>>>> adapt _anytime_ you have a dma-buf hanging around. Not just when it's
>>>>>>>>>> shared with another device.
>>>>>>>>>
>>>>>>>>> Yeah, and that is exactly the reason why I will NAK this uAPI change.
>>>>>>>>>
>>>>>>>>> This doesn't work for amdgpu at all for the reasons outlined above.
>>>>>>>>
>>>>>>>> Uh that's really not how uapi works. "my driver is right, everyone
>>>>>>>> else is wrong" is not how cross driver contracts are defined. If that
>>>>>>>> means a perf impact until you've fixed your rules, that's on you.
>>>>>>>>
>>>>>>>> Also you're a few years too late with nacking this, it's already uapi
>>>>>>>> in the form of the dma-buf poll() support.
>>>>>>>
>>>>>>> ^^  My fancy new ioctl doesn't expose anything that isn't already
>>>>>>> there.  It just lets you take a snapshot of a wait instead of doing
>>>>>>> an active wait which might end up with more fences added depending on
>>>>>>> interrupts and retries.  The dma-buf poll waits on all fences for
>>>>>>> POLLOUT and only the exclusive fence for POLLIN.  It's already uAPI.
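For reference, the poll() contract described above can be exercised from userspace like this; a minimal sketch that works with any pollable fd, dma-buf or otherwise:

```c
#include <poll.h>
#include <stdbool.h>

/* Non-blocking check of a dma-buf fd. Per the kernel's dma_buf poll
 * implementation, POLLIN becomes ready once the exclusive fence has
 * signaled, and POLLOUT once all fences (shared and exclusive) have
 * signaled. A zero timeout turns poll() into a snapshot of the
 * current state rather than an active wait. */
static bool buffer_ready_for_reading(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	return poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLIN);
}
```

With a dma-buf fd this is exactly the "has the client finished drawing?" question; the helper itself makes no dma-buf-specific calls, so it behaves the same on any pollable fd.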
>>>>>>
>>>>>> Note that the dma-buf poll support could be useful to Wayland compositors for the same purpose as Jason's new ioctl (only using client buffers which have finished drawing for an output frame, to avoid missing a refresh cycle due to client drawing), *if* it didn't work differently with amdgpu.
>>>>>>
>>>>>> Am I understanding correctly that Jason's new ioctl would also work differently with amdgpu as things stand currently? If so, that would be a real bummer and might hinder adoption of the ioctl by Wayland compositors.
>>>>>
>>>>> My new ioctl has identical semantics to poll().  It just lets you take
>>>>> a snapshot in time to wait on later instead of waiting on whatever
>>>>> happens to be set right now.  IMO, having identical semantics to
>>>>> poll() isn't something we want to change.
>>>>
>>>> Agreed.
>>>>
>>>> I'd argue then that making amdgpu poll semantics match those of other drivers is a pre-requisite for the new ioctl, otherwise it seems unlikely that the ioctl will be widely adopted.
>>>
>>> This seems backwards, because that means useful improvements in all
>>> other drivers are stalled until amdgpu is fixed.
>>>
>>> I think we need agreement on what the rules are, reasonable plan to
>>> get there, and then that should be enough to unblock work in the wider
>>> community. Holding the community at large hostage because one driver
>>> is different is really not great.
>>
>> I think we're in violent agreement. :) The point I was trying to make is
>> that amdgpu really needs to be fixed to be consistent with other drivers
>> ASAP.
> 
> It's not that easy at all. I think best case we're looking at about a one
> year plan to get this into shape, taking into account usual release/distro
> update latencies.
> 
> Best case.
> 
> But also it's not a really big issue, since this shouldn't stop
> compositors from using poll on dma-buf fd or the sync_file stuff from
> Jason: The use-case for this in compositors is to avoid a single client
> stalling the entire desktop. If a driver lies by not setting the exclusive
> fence when expected, you simply don't get this stall-avoidance benefit
> with misbehaving clients.

That's a good point; I was coming to the same realization.


> But also this needs a gpu scheduler and higher priority for the
> compositor (or a lot of hw planes so you can composite
> with them alone), so it's all a fairly academic issue.

I went ahead and implemented this for mutter: https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1880

Works as intended on my work laptop with Intel GPU, so it's not just academic. :)

I hope this can serve as motivation for providing the same poll semantics (and a higher priority GFX queue exposed via EGL_IMG_context_priority) in amdgpu as well.
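The policy such a compositor check implements can be sketched roughly as follows. The names and data structures here are hypothetical illustrations, not mutter's actual code; with dma-buf fds, POLLIN readiness means the exclusive fence has signaled, i.e. the client's drawing has finished:

```c
#include <poll.h>

/* Hypothetical per-surface state; a real compositor's structures differ. */
struct surface {
	int pending_dmabuf_fd; /* newest buffer attached by the client   */
	int current_dmabuf_fd; /* buffer already shown in the last frame */
};

/* At the frame deadline, use the newest client buffer only if its
 * drawing has finished (POLLIN ready on the dma-buf fd); otherwise
 * fall back to the previously shown buffer, so that one slow client
 * cannot stall the whole output frame. */
static int pick_buffer_for_frame(const struct surface *s)
{
	struct pollfd pfd = { .fd = s->pending_dmabuf_fd, .events = POLLIN };

	if (poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLIN))
		return s->pending_dmabuf_fd;
	return s->current_dmabuf_fd;
}
```

Note how Daniel's caveat above degrades gracefully here: if a driver never sets the exclusive fence when expected, the fallback path simply never triggers for that client's buffers, and you lose only the stall-avoidance benefit.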


-- 
Earthling Michel Dänzer               |               https://redhat.com
Libre software enthusiast             |             Mesa and X developer

Thread overview: 50+ messages
2021-05-17 14:11 [RFC] Add DMA_RESV_USAGE flags Christian König
2021-05-17 14:11 ` [PATCH 01/11] dma-buf: fix invalid debug print Christian König
2021-05-17 14:11 ` [PATCH 02/11] dma-buf: add SPDX header and fix style in dma-resv.c Christian König
2021-05-17 14:11 ` [PATCH 03/11] dma-buf: cleanup dma-resv shared fence debugging a bit Christian König
2021-05-17 14:11 ` [PATCH 04/11] dma-buf: rename and cleanup dma_resv_get_excl Christian König
2021-05-17 14:11 ` [PATCH 05/11] dma-buf: rename and cleanup dma_resv_get_list Christian König
2021-05-17 14:11 ` [PATCH 06/11] dma-buf: add dma_resv_list_fence helper Christian König
2021-05-17 14:11 ` [PATCH 07/11] dma-buf: add dma_resv_replace_shared Christian König
2021-05-17 14:11 ` [PATCH 08/11] dma-buf: improve shared fence abstraction Christian König
2021-05-17 14:11 ` [PATCH 09/11] dma-buf: add shared fence usage flags Christian König
2021-05-17 20:36   ` Daniel Vetter
2021-05-18 12:54     ` Christian König
2021-05-17 14:11 ` [PATCH 10/11] drm/i915: also wait for shared dmabuf fences before flip Christian König
2021-05-17 14:11 ` [PATCH 11/11] drm/amdgpu: fix shared access to exported DMA-bufs Christian König
2021-05-17 15:04 ` [RFC] Add DMA_RESV_USAGE flags Daniel Vetter
2021-05-17 19:38   ` Christian König
2021-05-17 20:15     ` Jason Ekstrand
2021-05-17 20:15     ` Daniel Vetter
2021-05-17 22:49       ` Jason Ekstrand
2021-05-18  5:59         ` Daniel Vetter
2021-05-18 10:29           ` Daniel Vetter
2021-05-18 12:49           ` Christian König
2021-05-18 13:26             ` Daniel Stone
2021-05-18 13:51               ` Christian König
2021-05-18 16:48             ` Daniel Vetter
2021-05-18 17:40               ` Christian König
2021-05-18 21:17                 ` Daniel Vetter
2021-05-18 22:06                   ` Jason Ekstrand
2021-05-19 10:52                     ` Michel Dänzer
2021-05-19 15:21                       ` Jason Ekstrand
2021-05-19 15:48                         ` Michel Dänzer
2021-05-20  7:55                           ` Daniel Vetter
2021-05-20  8:13                             ` Michel Dänzer
2021-05-20 10:00                               ` Christian König
2021-05-20 14:18                               ` Daniel Vetter
2021-05-20 14:30                                 ` Michel Dänzer
2021-05-20 17:08                                 ` Jason Ekstrand
2021-05-31 12:49                                 ` Michel Dänzer [this message]
2021-05-20 10:50                             ` Christian König
2021-05-20 17:23                               ` Jason Ekstrand
2021-05-20 19:04                                 ` Jason Ekstrand
2021-05-20 19:14                                   ` Daniel Vetter
2021-05-21  7:27                                     ` Christian König
2021-05-21  9:36                                     ` Bas Nieuwenhuizen
2021-05-21  7:24                                 ` Christian König
2021-05-19 11:43                     ` Christian König
2021-05-19 15:35                       ` Jason Ekstrand
2021-05-19 11:24                   ` Christian König
2021-05-20  7:58                     ` Daniel Vetter
2021-05-18 21:31                 ` Dave Airlie
