From: "Christian König" <christian.koenig@amd.com>
To: Felix Kuehling <felix.kuehling@amd.com>,
	Jerome Glisse <jglisse@redhat.com>
Cc: Alex Sierra <alex.sierra@amd.com>,
	"Yang, Philip" <philip.yang@amd.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>
Subject: Re: HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)
Date: Fri, 15 Jan 2021 08:47:49 +0100	[thread overview]
Message-ID: <223d8a97-44b4-fe5c-4736-c5c3580edd10@amd.com> (raw)
In-Reply-To: <db1f8cef-a60a-9860-c43b-f1ef6b9b6463@amd.com>

On 14.01.21 at 22:13, Felix Kuehling wrote:
> On 2021-01-14 at 11:51 a.m., Jerome Glisse wrote:
>> On Thu, Jan 14, 2021 at 02:37:36PM +0100, Christian König wrote:
>>>> On 14.01.21 at 12:52, Daniel Vetter wrote:
>>>> [SNIP]
>>>>>> I had a new idea; I wanted to think more about it but have not yet,
>>>>>> anyway here it is. Adding a new callback to dma_fence which asks the
>>>>>> question: can it deadlock? Any time a GPU driver has a pending page
>>>>>> fault (i.e. something calling into the mm) it answers yes, otherwise
>>>>>> no. The GPU shrinker would ask the question before waiting on any
>>>>>> dma-fence and back off if it gets a yes. The shrinker can still try
>>>>>> many dma-buf objects for which it does not get a yes on the
>>>>>> associated fence.
>>>>>>
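
Just to make sure I understand the shrinker side of that idea, here is a
minimal sketch of how I read it. Everything below is invented for
illustration (the flag bit and the helpers do not exist in the dma_fence
API), and where you describe a callback I am modelling it as a per-fence
flag only to keep the sketch short:

#include <linux/dma-fence.h>

/* Hypothetical: a driver would set this on fences which can only signal
 * after a pending page fault was served, i.e. after calling back into
 * the mm. */
#define FENCE_FLAG_MAY_DEADLOCK_ON_MM_BIT	(DMA_FENCE_FLAG_USER_BITS + 0)

static bool fence_may_deadlock(struct dma_fence *fence)
{
	return test_bit(FENCE_FLAG_MAY_DEADLOCK_ON_MM_BIT, &fence->flags);
}

/* Shrinker side: ask the question before blocking and back off on "yes";
 * objects whose fences answer "no" can still be evicted as usual. */
static bool shrinker_try_wait(struct dma_fence *fence, long timeout)
{
	if (fence_may_deadlock(fence))
		return false;	/* skip this object, try the next one */

	return dma_fence_wait_timeout(fence, false, timeout) > 0;
}
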
>>>>>> This does not solve the MMU notifier case; for that you would just
>>>>>> invalidate the GEM userptr object (with a flag, but without releasing
>>>>>> the page refcount), and you would not wait for the GPU (i.e. no
>>>>>> dma-fence wait in that code path anymore). The userptr API never
>>>>>> really made the contract that it will always be in sync with the mm's
>>>>>> view of the world, so if a different page gets remapped to the same
>>>>>> virtual address while the GPU is still working with the old pages it
>>>>>> should not be an issue (it would not be in our usage of userptr for
>>>>>> compositors and whatnot).
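
As a side note, the notifier path described here would then boil down to
something like the following sketch; the struct and field names are made
up, the point is only that there is no dma_fence_wait() left anywhere on
that path:

#include <linux/mmu_notifier.h>

/* Illustration only; "gem_userptr" and its "stale" flag are invented. */
struct gem_userptr {
	struct mmu_interval_notifier notifier;
	bool stale;	/* pages no longer match the mm's view of the world */
};

static bool gem_userptr_invalidate(struct mmu_interval_notifier *mni,
				   const struct mmu_notifier_range *range,
				   unsigned long cur_seq)
{
	struct gem_userptr *ptr = container_of(mni, struct gem_userptr,
					       notifier);

	/* Only flag the object; keep the page references and let the GPU
	 * keep using the old pages. Crucially there is no fence wait here,
	 * so this path can never block on GPU forward progress. */
	mmu_interval_set_seq(mni, cur_seq);
	WRITE_ONCE(ptr->stale, true);

	return true;
}

static const struct mmu_interval_notifier_ops gem_userptr_notifier_ops = {
	.invalidate = gem_userptr_invalidate,
};
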
>>>>> The current working idea in my mind goes into a similar direction.
>>>>>
>>>>> But instead of a callback I'm adding a completely new class of HMM fences.
>>>>>
>>>>> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
>>>>> dma_fences; HMM fences are ignored in container objects.
>>>>>
>>>>> When you handle an implicit or explicit synchronization request from
>>>>> userspace you need to wait for HMM fences to complete before taking any
>>>>> resource locks.
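
In pseudo-code the submission path would then look roughly like this;
dma_fence_is_hmm() is purely hypothetical and stands for whatever
mechanism ends up distinguishing the new fence class:

#include <linux/dma-fence.h>

/* Hypothetical classifier for the proposed new fence class. */
bool dma_fence_is_hmm(struct dma_fence *fence);

/* Sketch only: resolve hmm-class dependencies of a submission *before*
 * any dma_resv lock is taken or resources are reserved. */
static long cs_resolve_hmm_deps(struct dma_fence **deps, unsigned int count)
{
	unsigned int i;
	long r;

	for (i = 0; i < count; i++) {
		if (!dma_fence_is_hmm(deps[i]))
			continue;	/* plain dma_fences are handled later */

		/* Blocking here is fine, no locks are held yet. */
		r = dma_fence_wait(deps[i], true);
		if (r)
			return r;
	}

	/* Only now take reservation locks, reserve memory, push to the ring. */
	return 0;
}
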
>>>> Isn't that what I call gang scheduling? I.e. you either run in HMM
>>>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
>>>> really matter I think). By forcing that split we avoid the problem,
>>>> but it means occasional full stalls on mixed workloads.
>>>>
>>>> But that's not what Jerome wants (afaiui at least), I think his idea
>>>> is to track the reverse dependencies of all the fences floating
>>>> around, and then skip evicting an object if you have to wait for any
>>>> fence that is problematic for the current calling context. And I don't
>>>> think that's very feasible in practice.
>>>>
>>>> So what kind of hmm fences do you have in mind here?
>>> It's a bit more relaxed than your gang schedule.
>>>
>>> See, the requirements are as follows:
>>>
>>> 1. dma_fences never depend on hmm_fences.
>>> 2. hmm_fences can never preempt dma_fences.
>>> 3. dma_fences must be able to preempt hmm_fences, or we must always reserve
>>> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>>>
>>> Critical sections are MMU notifiers, page faults, GPU schedulers and
>>> dma_reservation object locks.
>>>
>>> 4. It is valid to wait for a dma_fence in critical sections.
>>> 5. It is not valid to wait for hmm_fences in critical sections.
>>>
>>> Fence creation either happens during command submission or by adding
>>> something like a barrier or signal command to your userspace queue.
>>>
>>> 6. If we have an hmm_fence as implicit or explicit dependency for creating a
>>> dma_fence we must wait for that before taking any locks or reserving
>>> resources.
>>> 7. If we have a dma_fence as implicit or explicit dependency for creating an
>>> hmm_fence we can wait later on. So busy waiting or special WAIT hardware
>>> commands are valid.
>>>
>>> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences on the
>>> hardware at the same time.
>>>
>>> In other words we can have a high priority gfx queue running jobs based on
>>> dma_fences and a low priority compute queue running jobs based on
>>> hmm_fences.
>>>
>>> Only when we switch from hmm_fence to dma_fence do we need to block the
>>> submission until all the necessary resources (both memory as well as CUs)
>>> are available.
>>>
>>> This is somewhat an extension to your gang submit idea.
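
To make rules 4 and 5 a bit more tangible: every wait inside one of the
critical sections would have to go through something like the helper
below. Again, dma_fence_is_hmm() is only a placeholder for however the
new fence class ends up being identified:

#include <linux/dma-fence.h>

/* Hypothetical classifier for the proposed fence class. */
bool dma_fence_is_hmm(struct dma_fence *fence);

/* Sketch: the only wait allowed inside MMU notifiers, page fault handling,
 * the GPU scheduler or under a dma_resv lock. hmm_fences must never be
 * waited on here (rule 5); containers simply skip them. */
static long critical_section_fence_wait(struct dma_fence *fence, long timeout)
{
	if (WARN_ON_ONCE(dma_fence_is_hmm(fence)))
		return -EINVAL;

	return dma_fence_wait_timeout(fence, false, timeout);
}
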
>> What is an hmm_fence? You should not have fences with HMM at all.
>> So I am kind of scared now.
> I kind of had the same question trying to follow Christian and Daniel's
> discussion. I think an HMM fence would be any fence resulting from the
> completion of a user mode operation in a context with HMM-based memory
> management that may stall indefinitely due to page faults.

It was more of a placeholder for something which can be used for
inter-process synchronization.

> But on a hardware engine that cannot preempt page-faulting work and has
> not reserved resources to guarantee forward progress for kernel jobs, I
> think all fences will need to be HMM fences, because any work submitted
> to such an engine can stall by getting stuck behind a stalled user mode
> operation.
>
> So for example, you have a DMA engine that can preempt during page
> faults, but a graphics engine that cannot. Then work submitted to the
> DMA engine can use dma_fence. But work submitted to the graphics engine
> must use hmm_fence. To avoid deadlocks, dma_fences must never depend on
> hmm_fences and resolution of page faults must never depend on hmm_fences.

Yeah, it's a bit more complicated but in general that fits.
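
As a rough illustration of that split (all names here are invented): which
fence class a queue produces would simply follow the capabilities of the
engine it runs on, not the individual job:

/* Sketch, names hypothetical: a queue may only produce ordinary dma_fences
 * if its engine can preempt page-faulting work or has resources reserved
 * to guarantee forward progress. Otherwise every fence it produces has to
 * be treated as an hmm_fence. */
struct engine_caps {
	bool can_preempt_page_faults;	/* e.g. the DMA/SDMA engine */
	bool has_reserved_resources;	/* e.g. CUs set aside for kernel jobs */
};

static bool engine_can_use_dma_fence(const struct engine_caps *caps)
{
	return caps->can_preempt_page_faults || caps->has_reserved_resources;
}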

Regards,
Christian.

>
> Regards,
>    Felix
>
>
>> Cheers,
>> Jérôme
>>


Thread overview: 84+ messages
2021-01-07  3:00 [PATCH 00/35] Add HMM-based SVM memory manager to KFD Felix Kuehling
2021-01-07  3:00 ` [PATCH 01/35] drm/amdkfd: select kernel DEVICE_PRIVATE option Felix Kuehling
2021-01-07  3:00 ` [PATCH 02/35] drm/amdgpu: replace per_device_list by array Felix Kuehling
2021-01-07  3:00 ` [PATCH 03/35] drm/amdkfd: helper to convert gpu id and idx Felix Kuehling
2021-01-07  3:00 ` [PATCH 04/35] drm/amdkfd: add svm ioctl API Felix Kuehling
2021-01-07  3:00 ` [PATCH 05/35] drm/amdkfd: Add SVM API support capability bits Felix Kuehling
2021-01-07  3:00 ` [PATCH 06/35] drm/amdkfd: register svm range Felix Kuehling
2021-01-07  3:00 ` [PATCH 07/35] drm/amdkfd: add svm ioctl GET_ATTR op Felix Kuehling
2021-01-07  3:01 ` [PATCH 08/35] drm/amdgpu: add common HMM get pages function Felix Kuehling
2021-01-07 10:53   ` Christian König
2021-01-07  3:01 ` [PATCH 09/35] drm/amdkfd: validate svm range system memory Felix Kuehling
2021-01-07  3:01 ` [PATCH 10/35] drm/amdkfd: register overlap system memory range Felix Kuehling
2021-01-07  3:01 ` [PATCH 11/35] drm/amdkfd: deregister svm range Felix Kuehling
2021-01-07  3:01 ` [PATCH 12/35] drm/amdgpu: export vm update mapping interface Felix Kuehling
2021-01-07 10:54   ` Christian König
2021-01-07  3:01 ` [PATCH 13/35] drm/amdkfd: map svm range to GPUs Felix Kuehling
2021-01-07  3:01 ` [PATCH 14/35] drm/amdkfd: svm range eviction and restore Felix Kuehling
2021-01-07  3:01 ` [PATCH 15/35] drm/amdkfd: add xnack enabled flag to kfd_process Felix Kuehling
2021-01-07  3:01 ` [PATCH 16/35] drm/amdkfd: add ioctl to configure and query xnack retries Felix Kuehling
2021-01-07  3:01 ` [PATCH 17/35] drm/amdkfd: register HMM device private zone Felix Kuehling
2021-03-01  8:32   ` Daniel Vetter
2021-03-01  8:46     ` Thomas Hellström (Intel)
2021-03-01  8:58       ` Daniel Vetter
2021-03-01  9:30         ` Thomas Hellström (Intel)
2021-03-04 17:58       ` Felix Kuehling
2021-03-11 12:24         ` Thomas Hellström (Intel)
2021-01-07  3:01 ` [PATCH 18/35] drm/amdkfd: validate vram svm range from TTM Felix Kuehling
2021-01-07  3:01 ` [PATCH 19/35] drm/amdkfd: support xgmi same hive mapping Felix Kuehling
2021-01-07  3:01 ` [PATCH 20/35] drm/amdkfd: copy memory through gart table Felix Kuehling
2021-01-07  3:01 ` [PATCH 21/35] drm/amdkfd: HMM migrate ram to vram Felix Kuehling
2021-01-07  3:01 ` [PATCH 22/35] drm/amdkfd: HMM migrate vram to ram Felix Kuehling
2021-01-07  3:01 ` [PATCH 23/35] drm/amdkfd: invalidate tables on page retry fault Felix Kuehling
2021-01-07  3:01 ` [PATCH 24/35] drm/amdkfd: page table restore through svm API Felix Kuehling
2021-01-07  3:01 ` [PATCH 25/35] drm/amdkfd: SVM API call to restore page tables Felix Kuehling
2021-01-07  3:01 ` [PATCH 26/35] drm/amdkfd: add svm_bo reference for eviction fence Felix Kuehling
2021-01-07  3:01 ` [PATCH 27/35] drm/amdgpu: add param bit flag to create SVM BOs Felix Kuehling
2021-01-07  3:01 ` [PATCH 28/35] drm/amdkfd: add svm_bo eviction mechanism support Felix Kuehling
2021-01-07  3:01 ` [PATCH 29/35] drm/amdgpu: svm bo enable_signal call condition Felix Kuehling
2021-01-07 10:56   ` Christian König
2021-01-07 16:16     ` Felix Kuehling
2021-01-07 16:28       ` Christian König
2021-01-07 16:53         ` Felix Kuehling
2021-01-07  3:01 ` [PATCH 30/35] drm/amdgpu: add svm_bo eviction to enable_signal cb Felix Kuehling
2021-01-07  3:01 ` [PATCH 31/35] drm/amdgpu: reserve fence slot to update page table Felix Kuehling
2021-01-07 10:57   ` Christian König
2021-01-07  3:01 ` [PATCH 32/35] drm/amdgpu: enable retry fault wptr overflow Felix Kuehling
2021-01-07 11:01   ` Christian König
2021-01-07  3:01 ` [PATCH 33/35] drm/amdkfd: refine migration policy with xnack on Felix Kuehling
2021-01-07  3:01 ` [PATCH 34/35] drm/amdkfd: add svm range validate timestamp Felix Kuehling
2021-01-07  3:01 ` [PATCH 35/35] drm/amdkfd: multiple gpu migrate vram to vram Felix Kuehling
2021-01-07  9:23 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Daniel Vetter
2021-01-07 16:25   ` Felix Kuehling
2021-01-08 14:40     ` Daniel Vetter
2021-01-08 14:45       ` Christian König
2021-01-08 15:58       ` Felix Kuehling
2021-01-08 16:06         ` Daniel Vetter
2021-01-08 16:36           ` Felix Kuehling
2021-01-08 16:53             ` Daniel Vetter
2021-01-08 17:56               ` Felix Kuehling
2021-01-11 16:29                 ` Daniel Vetter
2021-01-14  5:34                   ` Felix Kuehling
2021-01-14 12:19                     ` Christian König
2021-01-13 16:56       ` Jerome Glisse
2021-01-13 20:31         ` Daniel Vetter
2021-01-14  3:27           ` Jerome Glisse
2021-01-14  9:26             ` Daniel Vetter
2021-01-14 10:39               ` Daniel Vetter
2021-01-14 10:49         ` Christian König
2021-01-14 11:52           ` Daniel Vetter
2021-01-14 13:37             ` HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD) Christian König
2021-01-14 13:57               ` Daniel Vetter
2021-01-14 14:13                 ` Christian König
2021-01-14 14:23                   ` Daniel Vetter
2021-01-14 15:08                     ` Christian König
2021-01-14 15:40                       ` Daniel Vetter
2021-01-14 16:01                         ` Christian König
2021-01-14 16:36                           ` Daniel Vetter
2021-01-14 19:08                             ` Christian König
2021-01-14 20:09                               ` Daniel Vetter
2021-01-14 16:51               ` Jerome Glisse
2021-01-14 21:13                 ` Felix Kuehling
2021-01-15  7:47                   ` Christian König [this message]
2021-01-13 16:47 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Jerome Glisse
2021-01-14  0:06   ` Felix Kuehling
