From: Marek Olšák <maraeo@gmail.com>
Date: Tue, 27 Apr 2021 08:46:08 -0400
Subject: Re: [Mesa-dev] [RFC] Linux Graphics Next: Explicit fences everywhere and no BO fences - initial proposal
To: Daniel Vetter
Cc: Christian König, dri-devel, ML Mesa-dev
List-Id: Direct Rendering Infrastructure - Development <dri-devel@lists.freedesktop.org>
I'll defer to Christian and Alex to decide whether dropping sync with
non-amd devices (GPUs, cameras etc.) is acceptable.

Rewriting those drivers to this new sync model could be done on a case by
case basis.

For now, would we only lose the "amd -> external" dependency? Or the
"external -> amd" dependency too?

Marek

On Tue., Apr. 27, 2021, 08:15 Daniel Vetter, <daniel@ffwll.ch> wrote:

> On Tue, Apr 27, 2021 at 2:11 PM Marek Olšák <maraeo@gmail.com> wrote:
> > Ok. I'll interpret this as "yes, it will work, let's do it".
>
> It works if all you care about is drm/amdgpu. I'm not sure that's a
> reasonable approach for upstream, but it definitely is an approach :-)
>
> We've already gone somewhat through the pain of drm/amdgpu redefining
> how implicit sync works without sufficiently talking with other
> people, maybe we should avoid a repeat of this ...
> -Daniel
>
> > Marek
> >
> > On Tue., Apr. 27, 2021, 08:06 Christian König, <ckoenig.leichtzumerken@gmail.com> wrote:
> >>
> >> Correct, we wouldn't have synchronization between devices with and
> >> without user queues any more.
> >>
> >> That could only be a problem for A+I laptops.
> >>
> >> Memory management will just work with preemption fences which pause
> >> the user queues of a process before evicting something. That will
> >> be a dma_fence, but also a well-known approach.
> >>
> >> Christian.
> >>
> >> Am 27.04.21 um 13:49 schrieb Marek Olšák:
> >>
> >> If we don't use future fences for DMA fences at all, e.g. we don't
> >> use them for memory management, it can work, right? Memory
> >> management can suspend user queues anytime. It doesn't need to use
> >> DMA fences. There might be something that I'm missing here.
> >>
> >> What would we lose without DMA fences? Just inter-device
> >> synchronization? I think that might be acceptable.
> >>
> >> The only case when the kernel will wait on a future fence is before
> >> a page flip. Everything today already depends on userspace not
> >> hanging the gpu, which makes everything a future fence.
> >>
> >> Marek
> >>
> >> On Tue., Apr. 27, 2021, 04:02 Daniel Vetter, <daniel@ffwll.ch> wrote:
> >>>
> >>> On Mon, Apr 26, 2021 at 04:59:28PM -0400, Marek Olšák wrote:
> >>> > Thanks everybody. The initial proposal is dead. Here are some
> >>> > thoughts on how to do it differently.
> >>> >
> >>> > I think we can have direct command submission from userspace via
> >>> > memory-mapped queues ("user queues") without changing window
> >>> > systems.
> >>> >
> >>> > The memory management doesn't have to use GPU page faults like
> >>> > HMM. Instead, it can wait for user queues of a specific process
> >>> > to go idle and then unmap the queues, so that userspace can't
> >>> > submit anything. Buffer evictions, pinning, etc. can be executed
> >>> > when all queues are unmapped (suspended). Thus, no BO fences and
> >>> > page faults are needed.
> >>> >
> >>> > Inter-process synchronization can use timeline semaphores.
> >>> > Userspace will query the wait and signal value for a shared
> >>> > buffer from the kernel. The kernel will keep a history of those
> >>> > queries to know which process is responsible for signalling
> >>> > which buffer. There is only the wait-timeout issue and how to
> >>> > identify the culprit. One of the solutions is to have the GPU
> >>> > send all GPU signal commands and all timed-out wait commands via
> >>> > an interrupt to the kernel driver to monitor and validate
> >>> > userspace behavior. With that, it can be identified whether the
> >>> > culprit is the waiting process or the signalling process.
> >>> > Invalid signal/wait parameters can also be detected. The kernel
> >>> > can force-signal only the semaphores that time out, and punish
> >>> > the processes which caused the timeout or used invalid
> >>> > signal/wait parameters.
> >>> >
> >>> > The question is whether this synchronization solution is robust
> >>> > enough for dma_fence and whatever the kernel and window systems
> >>> > need.
> >>>
> >>> The proper model here is the preempt-ctx dma_fence that amdkfd
> >>> uses (without page faults). That means dma_fence for
> >>> synchronization is DOA, at least as-is, and we're back to figuring
> >>> out the winsys problem.
> >>>
> >>> "We'll solve it with timeouts" is very tempting, but doesn't work.
> >>> It's akin to saying that we're solving deadlock issues in a
> >>> locking design by doing a global s/mutex_lock/mutex_lock_timeout/
> >>> in the kernel. Sure, it avoids having to reach the reset button,
> >>> but that's about it.
> >>>
> >>> And the fundamental problem is that once you throw in userspace
> >>> command submission (and syncing, at least within the userspace
> >>> driver, otherwise there's kinda no point if you still need the
> >>> kernel for cross-engine sync), you get deadlocks if you still use
> >>> dma_fence for sync under perfectly legit use-cases. We've
> >>> discussed that one ad nauseam last summer:
> >>>
> >>> https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html?highlight=dma_fence#indefinite-dma-fences
> >>>
> >>> See the silly diagram at the bottom.
> >>>
> >>> Now I think all isn't lost, because imo the first step to getting
> >>> to this brave new world is rebuilding the driver on top of
> >>> userspace fences, and with the adjusted cmd submit model. You
> >>> probably don't want to use amdkfd, but port that as a context flag
> >>> or similar to render nodes for gl/vk. Of course that means you can
> >>> only use this mode in headless, without glx/wayland winsys
> >>> support, but it's a start.
> >>> -Daniel
> >>>
> >>> > Marek
> >>> >
> >>> > On Tue, Apr 20, 2021 at 4:34 PM Daniel Stone <daniel@fooishbar.org> wrote:
> >>> >
> >>> > > Hi,
> >>> > >
> >>> > > On Tue, 20 Apr 2021 at 20:30, Daniel Vetter <daniel@ffwll.ch> wrote:
> >>> > >
> >>> > >> The thing is, you can't do this in drm/scheduler. At least
> >>> > >> not without splitting up the dma_fence in the kernel into
> >>> > >> separate memory fences and sync fences
> >>> > >
> >>> > > I'm starting to think this thread needs its own glossary ...
> >>> > >
> >>> > > I propose we use 'residency fence' for execution fences which
> >>> > > enact memory-residency operations, e.g. faulting in a page
> >>> > > ultimately depending on GPU work retiring.
> >>> > >
> >>> > > And 'value fence' for the pure-userspace model suggested by
> >>> > > timeline semaphores, i.e. fences being (*addr == val) rather
> >>> > > than being able to look at ctx seqno.
> >>> > >
> >>> > > Cheers,
> >>> > > Daniel
> >>> > > _______________________________________________
> >>> > > mesa-dev mailing list
> >>> > > mesa-dev@lists.freedesktop.org
> >>> > > https://lists.freedesktop.org/mailman/listinfo/mesa-dev
> >>>
> >>> --
> >>> Daniel Vetter
> >>> Software Engineer, Intel Corporation
> >>> http://blog.ffwll.ch
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch