From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: dri-devel <dri-devel@lists.freedesktop.org>,
ML Mesa-dev <mesa-dev@lists.freedesktop.org>
Subject: Re: [Mesa-dev] [RFC] Linux Graphics Next: Explicit fences everywhere and no BO fences - initial proposal
Date: Wed, 28 Apr 2021 14:26:01 +0200 [thread overview]
Message-ID: <YIlUWdxyXGQgHFj+@phenom.ffwll.local> (raw)
In-Reply-To: <YIlTYjNv5RI5GuiN@phenom.ffwll.local>
On Wed, Apr 28, 2021 at 02:21:54PM +0200, Daniel Vetter wrote:
> On Wed, Apr 28, 2021 at 12:31:09PM +0200, Christian König wrote:
> > Am 28.04.21 um 12:05 schrieb Daniel Vetter:
> > > On Tue, Apr 27, 2021 at 02:01:20PM -0400, Alex Deucher wrote:
> > > > On Tue, Apr 27, 2021 at 1:35 PM Simon Ser <contact@emersion.fr> wrote:
> > > > > On Tuesday, April 27th, 2021 at 7:31 PM, Lucas Stach <l.stach@pengutronix.de> wrote:
> > > > >
> > > > > > > Ok. So that would only make the following use cases broken for now:
> > > > > > >
> > > > > > > - amd render -> external gpu
> > > > > > > - amd video encode -> network device
> > > > > > FWIW, "only" breaking amd render -> external gpu will make us pretty
> > > > > > unhappy
> > > > > I concur. I have quite a few users with a multi-GPU setup involving
> > > > > AMD hardware.
> > > > >
> > > > > Note, if this brokenness can't be avoided, I'd prefer to get a clear
> > > > > error, and not bad results on screen because nothing is synchronized
> > > > > anymore.
> > > > It's an upcoming requirement for Windows[1], so you are likely to
> > > > start seeing this across all GPU vendors that support Windows. I
> > > > think the timing depends on how long legacy hardware support
> > > > sticks around for each vendor.
> > > Yeah, but hw scheduling doesn't mean the hw has to be constructed so
> > > that the ringbuffer can't be isolated at all.
> > >
> > > E.g. even if the hw loses the bit to put the ringbuffer outside of the
> > > userspace gpu vm, if you have pagetables I'm seriously hoping you have r/o
> > > pte flags. Otherwise the entire "share address space with cpu side,
> > > seamlessly" thing is out of the window.
> > >
> > > And with that r/o bit on the ringbuffer you can once more force submit
> > > through kernel space, and all the legacy dma_fence based stuff keeps
> > > working. And we don't have to invent some horrendous userspace fence based
> > > implicit sync mechanism in the kernel, but can instead do this transition
> > > properly with drm_syncobj timeline explicit sync and protocol reving.
> > >
> > > At least I think you'd have to work extra hard to create a gpu which
> > > cannot possibly be intercepted by the kernel, even when it's designed to
> > > support userspace direct submit only.
> > >
> > > Or are your hw engineers more creative here and we're screwed?
> >
> > The upcoming hardware generation will have this hardware scheduler as a
> > must have, but there are certain ways we can still stick to the old
> > approach:
> >
> > 1. The new hardware scheduler currently still supports kernel queues, which
> > are essentially the same as the old hardware ring buffer.
> >
> > 2. Mapping the top level ring buffer into the VM at least partially solves
> > the problem. This way you can't manipulate the ring buffer content, but the
> > location for the fence must still be writeable.
>
> Yeah allowing userspace to lie about completion fences in this model is
> ok. Though I haven't thought through the full consequences of that, I
> think it's not any worse than userspace lying about which
> buffers/addresses it uses in the current model - we rely on hw vm ptes
> to catch that stuff.
>
> Also it might be good to switch to a non-recoverable ctx model for these.
> That's already what we do in i915 (opt-in, but all current umd use that
> mode). So any hang/watchdog just kills the entire ctx and you don't have
> to worry about userspace doing something funny with its ringbuffer.
> Simplifies everything.
>
> Also ofc userspace fencing still disallowed, but since userspace would
> queue up all writes to its ringbuffer through the drm/scheduler, we'd
> handle dependencies through that still. Not great, but workable.
>
> Thinking about this, not even mapping the ringbuffer r/o is required, it's
> just that we must queue things through the kernel to resolve dependencies
> and everything without breaking dma_fence. If userspace lies, tdr will
> shoot it and the kernel stops running that context entirely.
>
> So I think even if we have hw with 100% userspace submit model only we
> should be still fine. It's ofc silly, because instead of using userspace
> fences and gpu semaphores the hw scheduler understands we still take the
> detour through drm/scheduler, but at least it's not a break-the-world
> event.
Also no page fault support; userptr invalidates still stall until
end-of-batch instead of just preempting it, and all that too. But I mean
there needs to be some motivation to fix this and roll out explicit sync
:-)
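
For reference, the drm_syncobj timeline semantics all of this would plumb
through behave roughly like the toy model below. This is a pure userspace
sketch to illustrate the ordering contract; the struct and function names
are made up here and are not the actual uAPI:

```c
/*
 * Toy model of drm_syncobj timeline semantics: a timeline carries a
 * monotonically increasing 64-bit payload, and a wait on point N is
 * satisfied once the signaled payload has reached N. Names are
 * invented for this sketch; this is not the real kernel interface.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct timeline {
	uint64_t signaled;	/* highest payload signaled so far */
};

/* Signaling only ever moves the payload forward, never backwards. */
static inline void timeline_signal(struct timeline *t, uint64_t point)
{
	if (point > t->signaled)
		t->signaled = point;
}

/* A wait on point N is satisfied once the payload is >= N. */
static inline bool timeline_point_signaled(const struct timeline *t,
					   uint64_t point)
{
	return t->signaled >= point;
}
```

The real interface is the drm_syncobj ioctls (drmSyncobjTimelineWait()
and friends in libdrm); the point of the model is just that a timeline
point gives the winsys a primitive that can be backed by a dma_fence
today and, roughly speaking, by a userspace fence later.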
-Daniel
>
> Or do I miss something here?
>
> > For now and the next hardware we are safe to support the old submission
> > model, but the functionality of kernel queues will sooner or later go away
> > if it is only for Linux.
> >
> > So we need to work on something which works in the long term and gets us away
> > from this implicit sync.
>
> Yeah I think we have pretty clear consensus on that goal, just no one yet
> volunteered to get going with the winsys/wayland work to plumb drm_syncobj
> through, and the kernel/mesa work to make that optionally a userspace
> fence underneath. And it's for sure a lot of work.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch