From: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
To: "Christian König" <christian.koenig@amd.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	"Daniel Vetter" <daniel@ffwll.ch>
Cc: linaro-mm-sig@lists.linaro.org, matthew.auld@intel.com
Subject: Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
Date: Fri, 3 Dec 2021 16:13:02 +0100	[thread overview]
Message-ID: <962a5319-62b5-00f5-a987-80d8abd75ece@shipmail.org> (raw)
In-Reply-To: <4d3c9eb5-f093-84c9-47da-ee27630ee646@amd.com>


On 12/3/21 16:00, Christian König wrote:
> Am 03.12.21 um 15:50 schrieb Thomas Hellström:
>>
>> On 12/3/21 15:26, Christian König wrote:
>>> [Adding Daniel here as well]
>>>
>>> Am 03.12.21 um 15:18 schrieb Thomas Hellström:
>>>> [SNIP]
>>>>> Well that's ok as well. My question is why does this single dma_fence
>>>>> then show up in the dma_fence_chain representing the whole
>>>>> migration?
>>>> What we'd like to happen during eviction is that we
>>>>
>>>> 1) await any exclusive- or moving fences, then schedule the migration
>>>> blit. The blit manages its own GPU ptes. Results in a single fence.
>>>> 2) Schedule unbind of any gpu vmas, possibly resulting in multiple
>>>> fences.
>>>> 3) Most but not all of the remaining resv shared fences will have been
>>>> finished in 2). We can't easily tell which, so we have a couple of
>>>> shared fences left.
>>>
>>> Stop, wait a second here. We are going a bit in circles.
>>>
>>> Before you migrate a buffer, you *MUST* wait for all shared fences 
>>> to complete. This is documented mandatory DMA-buf behavior.
>>>
>>> Daniel and I have discussed that quite extensively in the last few 
>>> months.
>>>
>>> So how is it that you do the blit before all shared fences 
>>> are completed?
>>
>> Well, we don't currently, but we wanted to... (I haven't consulted 
>> Daniel on the matter, tbh).
>>
>> I was under the impression that all writes would add an exclusive 
>> fence to the dma_resv.
>
> Yes, that's correct. I'm working on support for more than one write 
> fence, but that is currently under review.
>
>> If that's not the case, or this is otherwise against the mandatory 
>> DMA-buf behavior, we can certainly keep that part as is, and that 
>> would eliminate 3).
>
> Ah, now that somewhat starts to make sense.
>
> So your blit only waits for the writes to finish before starting. 
> Yes, that's legal as long as you don't change the original content 
> with the blit.
>
> But don't you then need to wait for both reads and writes before you 
> unmap the VMAs?

Yes, but that's planned to be done entirely async: the unbind jobs are 
scheduled simultaneously with the blit, and the blit itself manages its 
own page-table entries, so there is no need to unbind any blit vmas.

>
> Anyway, the good news is your problem totally goes away with the 
> DMA-resv rework I've already sent out. Basically it is now possible to 
> have more than one fence in the DMA-resv object for migrations, and all 
> existing fences are kept around until they are finished.

Sounds good.

Thanks,

Thomas


