From: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
	DRI Development <dri-devel@lists.freedesktop.org>
Cc: "Felix Kuehling" <felix.kuehling@amd.com>,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	linaro-mm-sig@lists.linaro.org,
	"Jerome Glisse" <jglisse@redhat.com>,
	"Thomas Hellström" <thomas.hellstrom@intel.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	"Christian König" <christian.koenig@amd.com>,
	linux-media@vger.kernel.org
Subject: Re: [Linaro-mm-sig] [PATCH] dma-fence: Document recoverable page fault implications
Date: Wed, 24 Feb 2021 09:47:52 +0100	[thread overview]
Message-ID: <81df5b1c-2838-49d8-4ae4-bab4f55b411a@shipmail.org> (raw)
In-Reply-To: <20210203152921.2429937-1-daniel.vetter@ffwll.ch>


On 2/3/21 4:29 PM, Daniel Vetter wrote:
> Recently there was a fairly long thread about recoverable hardware page
> faults, how they can deadlock, and what to do about that.
>
> While the discussion is still fresh I figured it's a good time to try and
> document the conclusions a bit. This documentation section explains
> what's the potential problem, and the remedies we've discussed,
> roughly ordered from best to worst.
>
> v2: Linus -> Linux typo (Dave)
>
> v3:
> - Make it clear drivers only need to implement one option (Christian)
> - Make it clearer that implicit sync is out the window with exclusive
>    fences (Christian)
> - Add the fairly theoretical option of segmenting the memory (either
>    statically or through dynamic checks at runtime for which piece of
>    memory is managed how) and explain why it's not a great idea (Felix)
>
> References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
> Cc: Dave Airlie <airlied@gmail.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@intel.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Jerome Glisse <jglisse@redhat.com>
> Cc: Felix Kuehling <felix.kuehling@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> ---
>   Documentation/driver-api/dma-buf.rst | 76 ++++++++++++++++++++++++++++
>   1 file changed, 76 insertions(+)
>
> diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> index a2133d69872c..7f37ec30d9fd 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -257,3 +257,79 @@ fences in the kernel. This means:
>     userspace is allowed to use userspace fencing or long running compute
>     workloads. This also means no implicit fencing for shared buffers in these
>     cases.
> +
> +Recoverable Hardware Page Faults Implications
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Modern hardware supports recoverable page faults, which has a lot of
> +implications for DMA fences.
> +
> +First, a pending page fault obviously holds up the work that's running on the
> +accelerator and a memory allocation is usually required to resolve the fault.
> +But memory allocations are not allowed to gate completion of DMA fences, which
> +means any workload using recoverable page faults cannot use DMA fences for
> +synchronization. Synchronization fences controlled by userspace must be used
> +instead.
> +
> +On GPUs this poses a problem, because current desktop compositor protocols on
> +Linux rely on DMA fences, which means without an entirely new userspace stack
> +built on top of userspace fences, they cannot benefit from recoverable page
> +faults. Specifically this means implicit synchronization will not be possible.
> +The exception is when page faults are only used as migration hints and never to
> +fill a memory request on demand. For now this means recoverable page
> +faults on GPUs are limited to pure compute workloads.
> +
> +Furthermore GPUs usually have shared resources between the 3D rendering and
> +compute side, like compute units or command submission engines. If both a 3D
> +job with a DMA fence and a compute workload using recoverable page faults are
> +pending they could deadlock:
> +
> +- The 3D workload might need to wait for the compute job to finish and release
> +  hardware resources first.
> +
> +- The compute workload might be stuck in a page fault, because the memory
> +  allocation is waiting for the DMA fence of the 3D workload to complete.
> +
> +There are a few options to prevent this problem, one of which drivers need to
> +ensure:
> +
> +- Compute workloads can always be preempted, even when a page fault is pending
> +  and not yet repaired. Not all hardware supports this.
> +
> +- DMA fence workloads and workloads which need page fault handling have
> +  independent hardware resources to guarantee forward progress. This could be
> +  achieved e.g. through dedicated engines and minimal compute unit
> +  reservations for DMA fence workloads.
> +
> +- The reservation approach could be further refined by only reserving the
> +  hardware resources for DMA fence workloads when they are in-flight. This must
> +  cover the time from when the DMA fence is visible to other threads up to
> +  the moment when the fence is completed through dma_fence_signal().
> +
> +- As a last resort, if the hardware provides no useful reservation mechanics,
> +  all workloads must be flushed from the GPU when switching between jobs
> +  requiring DMA fences or jobs requiring page fault handling: This means all DMA
> +  fences must complete before a compute job with page fault handling can be
> +  inserted into the scheduler queue. And vice versa, before a DMA fence can be
> +  made visible anywhere in the system, all compute workloads must be preempted
> +  to guarantee all pending GPU page faults are flushed.
> +
> +- Only a fairly theoretical option would be to untangle these dependencies when
> +  allocating memory to repair hardware page faults, either through separate
> +  memory blocks or runtime tracking of the full dependency graph of all DMA
> +  fences. This results in a very wide impact on the kernel, since resolving the
> +  page fault on the CPU side can itself involve a page fault. It is much more feasible and
> +  robust to limit the impact of handling hardware page faults to the specific
> +  driver.
> +
> +Note that workloads that run on independent hardware like copy engines or other
> +GPUs do not pose this problem. This allows us to keep using DMA fences internally
> +in the kernel even for resolving hardware page faults, e.g. by using copy
> +engines to clear or copy memory needed to resolve the page fault.
> +
> +In some ways this page fault problem is a special case of the `Infinite DMA
> +Fences` discussions: Infinite fences from compute workloads are allowed to
> +depend on DMA fences, but not the other way around. And not even the page fault
> +problem is new, because some other CPU thread in userspace might
> +hit a page fault which holds up a userspace fence - supporting page faults on
> +GPUs doesn't add anything fundamentally new.

To me, in general this looks good. One thing, though, is that for a
first-time reader it might not be totally clear what's special about a
compute workload. Perhaps some clarification?

Also since the current cross-driver dma_fence locking order is

1) dma_resv ->
2) memory_allocation / reclaim ->
3) dma_fence_wait/critical

And the locking order required for recoverable pagefaults is

a) dma_resv ->
b) fence_wait/critical ->
c) memory_allocation / reclaim

(Possibly with a) and b) interchanged above. Is it possible to service a
recoverable pagefault without taking the dma_resv lock?)

It's clear that the fence critical section in b) is not compatible with
the dma_fence wait in 3), and thus the memory restrictions are needed.
But given the memory allocation restrictions for recoverable pagefaults,
I think at some point we must ask ourselves why they are necessary and
what price we would pay to get rid of them, and document that as well.
*If* it all boils down to the 2) -> 3) locking order above, and that
order is mandated *only* by the dma_fence wait in the userptr mmu
notifiers, then these restrictions are a pretty high price to pay.
Wouldn't it be possible now to replace that fence wait with either page
pinning (which is coherent since 5.9) or preempt-ctx fences + unpinned
pages if available, and thus invert the 2) -> 3) locking order?
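
For reference, a rough sketch of the pinning variant I mean.
pin_user_pages_fast() with FOLL_LONGTERM and unpin_user_pages() are the
real APIs (that's the 5.9 coherency mentioned above); the userptr_bo
structure and the two helpers are made up for illustration only:

#include <linux/mm.h>
#include <linux/slab.h>

struct userptr_bo {
	unsigned long start;	/* page-aligned userspace address */
	int npages;
	struct page **pages;
};

static int userptr_pin(struct userptr_bo *bo)
{
	long pinned;

	bo->pages = kvmalloc_array(bo->npages, sizeof(*bo->pages), GFP_KERNEL);
	if (!bo->pages)
		return -ENOMEM;

	/*
	 * Long-term pins mean the pages cannot be migrated or reclaimed
	 * from under the GPU, so no mmu notifier has to dma_fence_wait()
	 * in its invalidate callback to keep the mapping coherent.
	 */
	pinned = pin_user_pages_fast(bo->start, bo->npages,
				     FOLL_WRITE | FOLL_LONGTERM, bo->pages);
	if (pinned != bo->npages) {
		if (pinned > 0)
			unpin_user_pages(bo->pages, pinned);
		kvfree(bo->pages);
		return pinned < 0 ? pinned : -EFAULT;
	}

	return 0;
}

static void userptr_unpin(struct userptr_bo *bo)
{
	unpin_user_pages(bo->pages, bo->npages);
	kvfree(bo->pages);
}

The obvious trade-off is that long-term pinned memory can't be evicted
at all, which is why the preempt-ctx fence + unpinned pages variant
would still be wanted where hardware supports it.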

Thanks,
Thomas



