dri-devel.lists.freedesktop.org archive mirror
 help / color / mirror / Atom feed
From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Subject: Re: [RFC] Implicit vs explicit user fence sync
Date: Tue, 4 May 2021 16:15:55 +0200	[thread overview]
Message-ID: <YJFXG/THrjXqQjyN@phenom.ffwll.local> (raw)
In-Reply-To: <20210504132729.2046-1-christian.koenig@amd.com>

Hi Christian,

On Tue, May 04, 2021 at 03:27:17PM +0200, Christian König wrote:
> Hi guys,
> 
> with this patch set I want to look into how much more additional work it
> would be to support implicit sync compared to only explicit sync.
> 
> Turned out that this is much simpler than expected since the only
> addition is that before a command submission or flip the kernel and
> classic drivers would need to wait for the user fence to signal before
> taking any locks.

It's a lot more I think
- sync_file/drm_syncobj still need to be supported somehow
- we need userspace to handle the stall in a submit thread at least
- there's nothing here that sets the sync object
- implicit sync isn't just execbuf, it's everything. E.g. the various
  wait_bo ioctl also need to keep working, including timeout and
  everything
- we can't stall in atomic kms where you're currently stalling, that's for
  sure. The uapi says "we're not stalling for fences in there", and you're
  breaking that.
- ... at this point I stopped pondering but there's definitely more

Imo the only way we'll even get the complete is if we do the following:
1. roll out implicit sync with userspace fences on a driver-by-driver basis
   1a. including all the winsys/modeset stuff
2. roll out support for userspace fences to drm_syncobj timeline for
   interop, both across process/userspace and across drivers
   2a. including all the winsys/modeset stuff, but hopefully that's
       largely solved with 1. already.
3. only then try to figure out how to retroshoehorn this into implicit
   sync, and whether that even makes sense.

Because doing 3 before we've done 1&2 for at least 2 drivers (2 because
interop fun across drivers) is just praying that this time around we're
not collectively idiots and can correctly predict the future. That never
worked :-)

> For this prototype this patch set doesn't implement any user fence
> synchronization at all, but just assumes that faulting user pages is
> sufficient to make sure that we can wait for user space to finish
> submitting the work. If necessary this can be made even more strict, the
> only use case I could find which blocks this is the radeon driver and
> that should be handle able.
> 
> This of course doesn't give you the same semantic as the classic
> implicit sync to guarantee that you have exclusive access to a buffers,
> but this is also not necessary.
> 
> So I think the conclusion should be that we don't need to concentrate on
> implicit vs. explicit sync, but rather how to get the synchronization
> and timeout signalling figured out in general.

I'm not sure what exactly you're proving here aside from "it's possible to
roll out a function with ill-defined semantics to all drivers". This
really is a lot harder than just this one function and just this one patch
set.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

  parent reply	other threads:[~2021-05-04 14:16 UTC|newest]

Thread overview: 26+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-05-04 13:27 [RFC] Implicit vs explicit user fence sync Christian König
2021-05-04 13:27 ` [PATCH 01/12] dma-buf: add interface for user fence synchronization Christian König
2021-05-04 13:27 ` [PATCH 02/12] RDMA/mlx5: add DMA-buf user fence support Christian König
2021-05-04 13:27 ` [PATCH 03/12] drm/amdgpu: " Christian König
2021-05-04 13:27 ` [PATCH 04/12] drm/gem: dd DMA-buf user fence support for the atomic helper Christian König
2021-05-04 13:27 ` [PATCH 05/12] drm/etnaviv: add DMA-buf user fence support Christian König
2021-05-04 13:27 ` [PATCH 06/12] drm/i915: " Christian König
2021-05-04 13:27 ` [PATCH 07/12] drm/lima: " Christian König
2021-05-04 13:27 ` [PATCH 08/12] drm/msm: " Christian König
2021-05-04 13:27 ` [PATCH 09/12] drm/nouveau: " Christian König
2021-05-04 13:27 ` [PATCH 10/12] drm/panfrost: " Christian König
2021-05-04 13:27 ` [PATCH 11/12] drm/radeon: " Christian König
2021-05-04 13:27 ` [PATCH 12/12] drm/v3d: " Christian König
2021-05-04 14:15 ` Daniel Vetter [this message]
2021-05-04 14:26   ` [RFC] Implicit vs explicit user fence sync Christian König
2021-05-04 15:11     ` Daniel Vetter
2021-05-10 18:12       ` Christian König
2021-05-11  7:31         ` Daniel Vetter
2021-05-11  7:47           ` Christian König
2021-05-11 14:23             ` Daniel Vetter
2021-05-11 15:32               ` Christian König
2021-05-11 16:48                 ` Daniel Vetter
2021-05-11 19:34                   ` Christian König
2021-05-12  8:13                     ` Daniel Vetter
2021-05-12  8:23                       ` Christian König
2021-05-12  8:50                         ` Daniel Vetter

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=YJFXG/THrjXqQjyN@phenom.ffwll.local \
    --to=daniel@ffwll.ch \
    --cc=ckoenig.leichtzumerken@gmail.com \
    --cc=dri-devel@lists.freedesktop.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).