From: Lucas Stach <l.stach@pengutronix.de>
To: "Marek Olšák" <maraeo@gmail.com>,
"Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: ML Mesa-dev <mesa-dev@lists.freedesktop.org>,
dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [Mesa-dev] [RFC] Linux Graphics Next: Explicit fences everywhere and no BO fences - initial proposal
Date: Tue, 27 Apr 2021 19:31:09 +0200
Message-ID: <23ea06c825279c7a9f7678b335c7f89437d387ed.camel@pengutronix.de>
In-Reply-To: <CAAxE2A4FwZ11_opL++TPUViTOD6ZpV5b3MR+rTDUPvzqYz-oeQ@mail.gmail.com>

Hi,

On Tuesday, 2021-04-27 at 09:26 -0400, Marek Olšák wrote:
> Ok. So that would only make the following use cases broken for now:
> - amd render -> external gpu
> - amd video encode -> network device
FWIW, "only" breaking amd render -> external gpu will make us pretty
unhappy, as we have some cases where we are combining an AMD APU with a
FPGA based graphics card. I can't go into the specifics of this use-
case too much but basically the AMD graphics is rendering content that
gets composited on top of a live video pipeline running through the
FPGA.
> What about the case when we get a buffer from an external device and
> we're supposed to make it "busy" when we are using it, and the
> external device wants to wait until we stop using it? Is it something
> that can happen, thus turning "external -> amd" into "external <->
> amd"?

Zero-copy texture sampling from a video input certainly appreciates
this very much. Trying to pass the render fence through the various
layers of userspace to be able to tell when the video input can reuse
a buffer is a great experience in yak shaving. Allowing the video
input to reuse the buffer as soon as the read dma_fence from the GPU
is signaled is much more straightforward.
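
A minimal userspace sketch of that model, assuming the video input
side holds the dma-buf fd (the helper name is invented): poll() for
POLLOUT on a dma-buf blocks until all attached fences, including the
GPU's read fences, have signaled, i.e. until writing is safe again.

    #include <poll.h>

    /* Block until all fences on the dma-buf (including the GPU's read
     * fences) have signaled, so the video input may write it again.
     * Returns >0 when ready, 0 on timeout, -1 on error (see poll(2)). */
    static int wait_until_writable(int dmabuf_fd, int timeout_ms)
    {
            struct pollfd pfd = {
                    .fd = dmabuf_fd,
                    .events = POLLOUT,
            };

            return poll(&pfd, 1, timeout_ms);
    }
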
Regards,
Lucas
> Marek
>
> On Tue., Apr. 27, 2021, 08:50 Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
> > Only amd -> external.
> >
> > We can easily install something in a user queue which waits for a
> > dma_fence in the kernel.
> >
> > But we can't easily wait for a user queue as a dependency of a
> > dma_fence.
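
The easy direction could look roughly like this (a hedged sketch; the
user_queue structure and its wait_addr/wait_value doorbell are
invented for illustration): the kernel attaches a dma_fence callback
that, on signaling, writes the value a user-queue wait packet is
spinning on. The reverse direction has no bounded completion
guarantee, which is exactly why it can't back a dma_fence.

    #include <linux/dma-fence.h>
    #include <linux/io.h>

    /* All of this is invented for illustration. */
    struct user_queue {
            struct dma_fence_cb cb;
            u64 __iomem *wait_addr;  /* memory a queue WAIT packet polls */
            u64 wait_value;          /* value that releases the queue */
    };

    static void uq_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
    {
            struct user_queue *uq = container_of(cb, struct user_queue, cb);

            /* The fence signaled: release the blocked wait packet. */
            writeq(uq->wait_value, uq->wait_addr);
    }

    /* Returns -ENOENT if the fence already signaled; the caller can
     * then release the wait packet directly. */
    static int user_queue_wait_dma_fence(struct user_queue *uq,
                                         struct dma_fence *f)
    {
            return dma_fence_add_callback(f, &uq->cb, uq_fence_cb);
    }
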
> >
> > The good thing is we have this wait-before-signal case on Vulkan
> > timeline semaphores, which have the same problem in the kernel.
> >
> > The good news is I think we can relatively easily convert i915 and
> > older amdgpu devices to something which is compatible with user
> > fences.
> >
> > So yes, getting that fixed case by case should work.
> >
> > Christian
> >
> > On 27.04.21 at 14:46, Marek Olšák wrote:
> >
> > > I'll defer to Christian and Alex to decide whether dropping sync
> > > with non-amd devices (GPUs, cameras, etc.) is acceptable.
> > >
> > > Rewriting those drivers to this new sync model could be done on a
> > > case-by-case basis.
> > >
> > > For now, would we only lose the "amd -> external" dependency? Or
> > > the "external -> amd" dependency too?
> > >
> > > Marek
> > >
> > > On Tue., Apr. 27, 2021, 08:15 Daniel Vetter <daniel@ffwll.ch>
> > > wrote:
> > >
> > > > On Tue, Apr 27, 2021 at 2:11 PM Marek Olšák <maraeo@gmail.com>
> > > > wrote:
> > > > > Ok. I'll interpret this as "yes, it will work, let's do it".
> > > >
> > > > It works if all you care about is drm/amdgpu. I'm not sure
> > > > that's a reasonable approach for upstream, but it definitely is
> > > > an approach :-)
> > > >
> > > > We've already gone somewhat through the pain of drm/amdgpu
> > > > redefining how implicit sync works without sufficiently talking
> > > > with other people; maybe we should avoid a repeat of this ...
> > > > -Daniel
> > > >
> > > > >
> > > > > Marek
> > > > >
> > > > > On Tue., Apr. 27, 2021, 08:06 Christian König
> > > > > <ckoenig.leichtzumerken@gmail.com> wrote:
> > > > >>
> > > > >> Correct, we wouldn't have synchronization between devices
> > > > >> with and without user queues any more.
> > > > >>
> > > > >> That could only be a problem for A+I laptops.
> > > > >>
> > > > >> Memory management will just work with preemption fences
> > > > >> which pause the user queues of a process before evicting
> > > > >> something. That will be a dma_fence, but also a well-known
> > > > >> approach.
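
To make that concrete, a rough sketch of such a preemption fence,
modeled loosely on the amdkfd eviction fences (every name below is
invented): eviction waits on the fence, enable_signaling kicks queue
preemption, and the fence only signals once the queues are off the
hardware.

    #include <linux/dma-fence.h>
    #include <linux/workqueue.h>

    struct preempt_fence {
            struct dma_fence base;
            struct work_struct preempt_work;  /* unmaps the user queues */
            struct workqueue_struct *wq;
    };

    static const char *pf_driver_name(struct dma_fence *f)
    {
            return "example-driver";
    }

    static const char *pf_timeline_name(struct dma_fence *f)
    {
            return "preempt";
    }

    static bool pf_enable_signaling(struct dma_fence *f)
    {
            struct preempt_fence *pf =
                    container_of(f, struct preempt_fence, base);

            /* Kick queue preemption; the completion interrupt handler
             * calls dma_fence_signal() once the queues are unmapped. */
            queue_work(pf->wq, &pf->preempt_work);
            return true;
    }

    static const struct dma_fence_ops preempt_fence_ops = {
            .get_driver_name   = pf_driver_name,
            .get_timeline_name = pf_timeline_name,
            .enable_signaling  = pf_enable_signaling,
    };
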
> > > > >>
> > > > >> Christian.
> > > > >>
> > > > >> On 27.04.21 at 13:49, Marek Olšák wrote:
> > > > >>
> > > > >> If we don't use future fences for DMA fences at all, e.g. we
> > > > >> don't use them for memory management, it can work, right?
> > > > >> Memory management can suspend user queues anytime. It doesn't
> > > > >> need to use DMA fences. There might be something that I'm
> > > > >> missing here.
> > > > >>
> > > > >> What would we lose without DMA fences? Just inter-device
> > > > >> synchronization? I think that might be acceptable.
> > > > >>
> > > > >> The only case when the kernel will wait on a future fence is
> > > > >> before a page flip. Everything today already depends on
> > > > >> userspace not hanging the GPU, which makes everything a
> > > > >> future fence.
> > > > >>
> > > > >> Marek
> > > > >>
> > > > >> On Tue., Apr. 27, 2021, 04:02 Daniel Vetter
> > > > >> <daniel@ffwll.ch> wrote:
> > > > >>>
> > > > >>> On Mon, Apr 26, 2021 at 04:59:28PM -0400, Marek Olšák wrote:
> > > > >>> > Thanks everybody. The initial proposal is dead. Here are
> > > > >>> > some thoughts on how to do it differently.
> > > > >>> >
> > > > >>> > I think we can have direct command submission from
> > > > >>> > userspace via memory-mapped queues ("user queues") without
> > > > >>> > changing window systems.
> > > > >>> >
> > > > >>> > The memory management doesn't have to use GPU page faults
> > > > >>> > like HMM. Instead, it can wait for the user queues of a
> > > > >>> > specific process to go idle and then unmap the queues, so
> > > > >>> > that userspace can't submit anything. Buffer evictions,
> > > > >>> > pinning, etc. can be executed when all queues are unmapped
> > > > >>> > (suspended). Thus, neither BO fences nor page faults are
> > > > >>> > needed.
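
The flow Marek describes, as a hedged pseudocode sketch (all function
and structure names here are invented):

    /* Evict a process's buffers without any BO fences: quiesce the
     * user queues first, then nothing can touch the memory. */
    static int evict_process_memory(struct process_ctx *p)
    {
            int r;

            /* Unmap doorbells/queues so userspace can't submit. */
            r = unmap_user_queues(p);
            if (r)
                    return r;

            /* Bounded by queue preemption, not by arbitrary work. */
            wait_for_queues_idle(p);

            r = do_evictions(p);      /* move buffers, fix page tables */

            remap_user_queues(p);     /* resume submission */
            return r;
    }
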
> > > > >>> >
> > > > >>> > Inter-process synchronization can use timeline
> > > > >>> > semaphores. Userspace will query the wait and signal value
> > > > >>> > for a shared buffer from the kernel. The kernel will keep
> > > > >>> > a history of those queries to know which process is
> > > > >>> > responsible for signalling which buffer. There is only the
> > > > >>> > wait-timeout issue and how to identify the culprit. One of
> > > > >>> > the solutions is to have the GPU send all GPU signal
> > > > >>> > commands and all timed-out wait commands via an interrupt
> > > > >>> > to the kernel driver, to monitor and validate userspace
> > > > >>> > behavior. With that, the kernel can identify whether the
> > > > >>> > culprit is the waiting process or the signalling process,
> > > > >>> > and which one it is. Invalid signal/wait parameters can
> > > > >>> > also be detected. The kernel can force-signal only the
> > > > >>> > semaphores that time out, and punish the processes which
> > > > >>> > caused the timeout or used invalid signal/wait parameters.
> > > > >>> >
> > > > >>> > The question is whether this synchronization solution is
> > > > >>> > robust enough for dma_fence and whatever the kernel and
> > > > >>> > window systems need.
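
For what it's worth, one illustrative shape for that bookkeeping
(nothing here is an existing API; the record layout is made up): the
kernel remembers, per shared buffer, who promised to signal which
timeline point, so that a timed-out wait reported by the GPU interrupt
can be pinned on someone.

    #include <linux/list.h>
    #include <linux/ktime.h>

    struct sync_point_record {
            u64              point;     /* timeline value */
            pid_t            signaler;  /* promised to signal the point */
            pid_t            waiter;    /* queried a wait on the point */
            ktime_t          deadline;  /* force-signal after this */
            struct list_head node;      /* per-buffer history */
    };

On a timed-out wait reported via the interrupt, the kernel can
force-signal the point and penalize record->signaler, or the waiter if
its wait parameters were invalid.
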
> > > > >>>
> > > > >>> The proper model here is the preempt-ctx dma_fence that
> > > > >>> amdkfd uses (without page faults). That means dma_fence for
> > > > >>> synchronization is DOA, at least as-is, and we're back to
> > > > >>> figuring out the winsys problem.
> > > > >>>
> > > > >>> "We'll solve it with timeouts" is very tempting, but doesn't
> > > > >>> work. It's akin to saying that we're solving deadlock issues
> > > > >>> in a locking design by doing a global
> > > > >>> s/mutex_lock/mutex_lock_timeout/ in the kernel. Sure it
> > > > >>> avoids having to reach the reset button, but that's about
> > > > >>> it.
> > > > >>>
> > > > >>> And the fundamental problem is that once you throw in
> > > > >>> userspace command submission (and syncing, at least within
> > > > >>> the userspace driver, otherwise there's kinda no point if
> > > > >>> you still need the kernel for cross-engine sync), you get
> > > > >>> deadlocks if you still use dma_fence for sync under
> > > > >>> perfectly legit use-cases. We've discussed that one ad
> > > > >>> nauseam last summer:
> > > > >>>
> > > > >>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
> > > > >>>
> > > > >>> See the silly diagram at the bottom.
> > > > >>>
> > > > >>> Now I think all isn't lost, because imo the first step to
> > > > >>> getting to this brave new world is rebuilding the driver on
> > > > >>> top of userspace fences, with the adjusted cmd submit model.
> > > > >>> You probably don't want to use amdkfd, but port that as a
> > > > >>> context flag or similar to render nodes for gl/vk. Of course
> > > > >>> that means you can only use this mode headless, without
> > > > >>> glx/wayland winsys support, but it's a start.
> > > > >>> -Daniel
> > > > >>>
> > > > >>> >
> > > > >>> > Marek
> > > > >>> >
> > > > >>> > On Tue, Apr 20, 2021 at 4:34 PM Daniel Stone
> > > > >>> > <daniel@fooishbar.org> wrote:
> > > > >>> >
> > > > >>> > > Hi,
> > > > >>> > >
> > > > >>> > > On Tue, 20 Apr 2021 at 20:30, Daniel Vetter
> > > > >>> > > <daniel@ffwll.ch> wrote:
> > > > >>> > >
> > > > >>> > >> The thing is, you can't do this in drm/scheduler. At
> > > > >>> > >> least not without splitting up the dma_fence in the
> > > > >>> > >> kernel into separate memory fences and sync fences
> > > > >>> > >
> > > > >>> > >
> > > > >>> > > I'm starting to think this thread needs its own
> > > > >>> > > glossary ...
> > > > >>> > >
> > > > >>> > > I propose we use 'residency fence' for execution fences
> > > > >>> > > which enact memory-residency operations, e.g. faulting
> > > > >>> > > in a page ultimately depending on GPU work retiring.
> > > > >>> > >
> > > > >>> > > And 'value fence' for the pure-userspace model suggested
> > > > >>> > > by timeline semaphores, i.e. fences being (*addr == val)
> > > > >>> > > rather than being able to look at a ctx seqno.
> > > > >>> > >
> > > > >>> > > Cheers,
> > > > >>> > > Daniel
> > > > >>>
> > > > >>> --
> > > > >>> Daniel Vetter
> > > > >>> Software Engineer, Intel Corporation
> > > > >>> http://blog.ffwll.ch
> > > > >>
> > > > >>
> > > >
> > > >
> > > >
> >