* Re: [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate
[not found] ` <517032d9-1a37-ed7b-1443-9f5148e2f457@amd.com>
@ 2021-04-14 15:15 ` Felix Kuehling
2021-04-15 7:41 ` Christian König
0 siblings, 1 reply; 4+ messages in thread
From: Felix Kuehling @ 2021-04-14 15:15 UTC (permalink / raw)
To: Christian König, amd-gfx, dri-devel
On 2021-04-14 at 3:33 a.m., Christian König wrote:
> On 14.04.21 at 08:46, Felix Kuehling wrote:
>> amdgpu_ttm_tt_unpopulate can be called during bo_destroy. The
>> dmabuf->resv
>> must not be held by the caller or dma_buf_detach will deadlock. This is
>> probably not the right fix. I get a recursive lock warning with the
>> reservation held in ttm_bo_release. Should unmap_attachment move to
>> backend_unbind instead?
>
> Yes probably, but I'm really wondering if we should call unpopulate
> without holding the reservation lock.
There is an error handling code path in ttm_tt_populate that calls
unpopulate. I believe that has to be holding the reservation lock. The
other cases (destroy and swapout) do not hold the lock, AIUI.
Regards,
Felix
>
> Christian.
>
>>
>> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 +++++++++++++
>> 1 file changed, 13 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index 936b3cfdde55..257750921eed 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -1216,9 +1216,22 @@ static void amdgpu_ttm_tt_unpopulate(struct
>> ttm_device *bdev,
>> if (ttm->sg && gtt->gobj->import_attach) {
>> struct dma_buf_attachment *attach;
>> + bool locked;
>> attach = gtt->gobj->import_attach;
>> + /* FIXME: unpopulate can be called during bo_destroy.
>> + * The dmabuf->resv must not be held by the caller or
>> + * dma_buf_detach will deadlock. This is probably not
>> + * the right fix. I get a recursive lock warning with the
>> + * reservation held in ttm_bo_release. Should
>> + * unmap_attachment move to backend_unbind instead?
>> + */
>> + locked = dma_resv_is_locked(attach->dmabuf->resv);
>> + if (!locked)
>> + dma_resv_lock(attach->dmabuf->resv, NULL);
>> dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
>> + if (!locked)
>> + dma_resv_unlock(attach->dmabuf->resv);
>> ttm->sg = NULL;
>> return;
>> }
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate
2021-04-14 15:15 ` [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate Felix Kuehling
@ 2021-04-15 7:41 ` Christian König
2021-04-15 14:29 ` Felix Kuehling
0 siblings, 1 reply; 4+ messages in thread
From: Christian König @ 2021-04-15 7:41 UTC (permalink / raw)
To: Felix Kuehling, amd-gfx, dri-devel
On 14.04.21 at 17:15, Felix Kuehling wrote:
> On 2021-04-14 at 3:33 a.m., Christian König wrote:
>> On 14.04.21 at 08:46, Felix Kuehling wrote:
>>> amdgpu_ttm_tt_unpopulate can be called during bo_destroy. The
>>> dmabuf->resv
>>> must not be held by the caller or dma_buf_detach will deadlock. This is
>>> probably not the right fix. I get a recursive lock warning with the
>>> reservation held in ttm_bo_release. Should unmap_attachment move to
>>> backend_unbind instead?
>> Yes probably, but I'm really wondering if we should call unpopulate
>> without holding the reservation lock.
> There is an error handling code path in ttm_tt_populate that calls
> unpopulate.
That should be harmless. For populating the page array we need the same
lock as for unpopulating it.
> I believe that has to be holding the reservation lock.
Correct, yes.
> The other cases (destroy and swapout) do not hold the lock, AIUI.
That's not correct. See ttm_bo_release() for example:
...
	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
	    !dma_resv_trylock(bo->base.resv)) {
...
We intentionally lock the reservation object here or put it on the
delayed delete list because dropping the tt object without holding the
lock is illegal for multiple reasons.
If you run into an unpopulate which doesn't hold the lock then I really
need that backtrace because we are running into a much larger bug here.
Thanks,
Christian.
>
> Regards,
> Felix
>
>
>> Christian.
>>
>>> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 +++++++++++++
>>> 1 file changed, 13 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index 936b3cfdde55..257750921eed 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -1216,9 +1216,22 @@ static void amdgpu_ttm_tt_unpopulate(struct
>>> ttm_device *bdev,
>>> if (ttm->sg && gtt->gobj->import_attach) {
>>> struct dma_buf_attachment *attach;
>>> + bool locked;
>>> attach = gtt->gobj->import_attach;
>>> + /* FIXME: unpopulate can be called during bo_destroy.
>>> + * The dmabuf->resv must not be held by the caller or
>>> + * dma_buf_detach will deadlock. This is probably not
>>> + * the right fix. I get a recursive lock warning with the
>>> + * reservation held in ttm_bo_release. Should
>>> + * unmap_attachment move to backend_unbind instead?
>>> + */
>>> + locked = dma_resv_is_locked(attach->dmabuf->resv);
>>> + if (!locked)
>>> + dma_resv_lock(attach->dmabuf->resv, NULL);
>>> dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
>>> + if (!locked)
>>> + dma_resv_unlock(attach->dmabuf->resv);
>>> ttm->sg = NULL;
>>> return;
>>> }
* Re: [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate
2021-04-15 7:41 ` Christian König
@ 2021-04-15 14:29 ` Felix Kuehling
0 siblings, 0 replies; 4+ messages in thread
From: Felix Kuehling @ 2021-04-15 14:29 UTC (permalink / raw)
To: Christian König, amd-gfx, dri-devel
On 2021-04-15 at 3:41 a.m., Christian König wrote:
>
>
> On 14.04.21 at 17:15, Felix Kuehling wrote:
>> On 2021-04-14 at 3:33 a.m., Christian König wrote:
>>> On 14.04.21 at 08:46, Felix Kuehling wrote:
>>>> amdgpu_ttm_tt_unpopulate can be called during bo_destroy. The
>>>> dmabuf->resv
>>>> must not be held by the caller or dma_buf_detach will deadlock.
>>>> This is
>>>> probably not the right fix. I get a recursive lock warning with the
>>>> reservation held in ttm_bo_release. Should unmap_attachment move to
>>>> backend_unbind instead?
>>> Yes probably, but I'm really wondering if we should call unpopulate
>>> without holding the reservation lock.
>> There is an error handling code path in ttm_tt_populate that calls
>> unpopulate.
>
> That should be harmless. For populating the page array we need the
> same lock as for unpopulating it.
>
>> I believe that has to be holding the reservation lock.
>
> Correct, yes.
>
>> The other cases (destroy and swapout) do not hold the lock, AIUI.
>
> That's not correct. See ttm_bo_release() for example:
>
> ...
> 	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
> 	    !dma_resv_trylock(bo->base.resv)) {
> ...
>
> We intentionally lock the reservation object here or put it on the
> delayed delete list because dropping the tt object without holding the
> lock is illegal for multiple reasons.
I think this is because I manually individualized the reservation in
patch 4. Without that I was running into different problems (probably
need to dig a bit more to understand what's happening there). So the
lock held by release is not the same as the lock of the original dmabuf.
Regards,
Felix
>
> If you run into an unpopulate which doesn't hold the lock then I
> really need that backtrace because we are running into a much larger
> bug here.
>
> Thanks,
> Christian.
>
>
>>
>> Regards,
>> Felix
>>
>>
>>> Christian.
>>>
>>>> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 +++++++++++++
>>>> 1 file changed, 13 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> index 936b3cfdde55..257750921eed 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> @@ -1216,9 +1216,22 @@ static void amdgpu_ttm_tt_unpopulate(struct
>>>> ttm_device *bdev,
>>>> if (ttm->sg && gtt->gobj->import_attach) {
>>>> struct dma_buf_attachment *attach;
>>>> + bool locked;
>>>> attach = gtt->gobj->import_attach;
>>>> + /* FIXME: unpopulate can be called during bo_destroy.
>>>> + * The dmabuf->resv must not be held by the caller or
>>>> + * dma_buf_detach will deadlock. This is probably not
>>>> + * the right fix. I get a recursive lock warning with the
>>>> + * reservation held in ttm_bo_release. Should
>>>> + * unmap_attachment move to backend_unbind instead?
>>>> + */
>>>> + locked = dma_resv_is_locked(attach->dmabuf->resv);
>>>> + if (!locked)
>>>> + dma_resv_lock(attach->dmabuf->resv, NULL);
>>>> dma_buf_unmap_attachment(attach, ttm->sg,
>>>> DMA_BIDIRECTIONAL);
>>>> + if (!locked)
>>>> + dma_resv_unlock(attach->dmabuf->resv);
>>>> ttm->sg = NULL;
>>>> return;
>>>> }
>
* [PATCH 0/9] Implement multi-GPU DMA mappings for KFD
@ 2021-04-14 6:47 Felix Kuehling
2021-04-14 6:48 ` [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate Felix Kuehling
0 siblings, 1 reply; 4+ messages in thread
From: Felix Kuehling @ 2021-04-14 6:47 UTC (permalink / raw)
To: amd-gfx, dri-devel; +Cc: christian.koenig
This patch series fixes DMA-mappings of system memory (GTT and userptr)
for KFD running on multi-GPU systems with IOMMU enabled. One SG-BO per
GPU is needed to maintain the DMA mappings of each BO.
I ran into some reservation issues when unmapping or freeing DMA-buf
imports. There are a few FIXME comments in this patch series where I'm
hoping for some expert advice. Patches 8 and 9 are some related fixes
in TTM and amdgpu_ttm. I'm pretty sure patch 9 is not the right way to
do this.
Felix Kuehling (9):
drm/amdgpu: Rename kfd_bo_va_list to kfd_mem_attachment
drm/amdgpu: Keep a bo-reference per-attachment
drm/amdgpu: Simplify AQL queue mapping
drm/amdgpu: Add multi-GPU DMA mapping helpers
drm/amdgpu: DMA map/unmap when updating GPU mappings
drm/amdgpu: Move kfd_mem_attach outside reservation
drm/amdgpu: Add DMA mapping of GTT BOs
drm/ttm: Don't count pages in SG BOs against pages_limit
drm/amdgpu: Lock the attached dmabuf in unpopulate
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 18 +-
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 535 ++++++++++++------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 +
drivers/gpu/drm/ttm/ttm_tt.c | 27 +-
4 files changed, 420 insertions(+), 173 deletions(-)
--
2.31.1
* [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate
2021-04-14 6:47 [PATCH 0/9] Implement multi-GPU DMA mappings for KFD Felix Kuehling
@ 2021-04-14 6:48 ` Felix Kuehling
0 siblings, 0 replies; 4+ messages in thread
From: Felix Kuehling @ 2021-04-14 6:48 UTC (permalink / raw)
To: amd-gfx, dri-devel; +Cc: christian.koenig
amdgpu_ttm_tt_unpopulate can be called during bo_destroy. The dmabuf->resv
must not be held by the caller or dma_buf_detach will deadlock. This is
probably not the right fix. I get a recursive lock warning with the
reservation held in ttm_bo_release. Should unmap_attachment move to
backend_unbind instead?
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 936b3cfdde55..257750921eed 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1216,9 +1216,22 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_device *bdev,
 	if (ttm->sg && gtt->gobj->import_attach) {
 		struct dma_buf_attachment *attach;
+		bool locked;
 
 		attach = gtt->gobj->import_attach;
+		/* FIXME: unpopulate can be called during bo_destroy.
+		 * The dmabuf->resv must not be held by the caller or
+		 * dma_buf_detach will deadlock. This is probably not
+		 * the right fix. I get a recursive lock warning with the
+		 * reservation held in ttm_bo_release. Should
+		 * unmap_attachment move to backend_unbind instead?
+		 */
+		locked = dma_resv_is_locked(attach->dmabuf->resv);
+		if (!locked)
+			dma_resv_lock(attach->dmabuf->resv, NULL);
 		dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
+		if (!locked)
+			dma_resv_unlock(attach->dmabuf->resv);
 		ttm->sg = NULL;
 		return;
 	}
--
2.31.1
end of thread, other threads:[~2021-04-15 14:29 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20210414064621.29273-1-Felix.Kuehling@amd.com>
[not found] ` <20210414064621.29273-10-Felix.Kuehling@amd.com>
[not found] ` <517032d9-1a37-ed7b-1443-9f5148e2f457@amd.com>
2021-04-14 15:15 ` [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate Felix Kuehling
2021-04-15 7:41 ` Christian König
2021-04-15 14:29 ` Felix Kuehling
2021-04-14 6:47 [PATCH 0/9] Implement multi-GPU DMA mappings for KFD Felix Kuehling
2021-04-14 6:48 ` [PATCH 9/9] drm/amdgpu: Lock the attached dmabuf in unpopulate Felix Kuehling