dri-devel.lists.freedesktop.org archive mirror
* [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Felix Kuehling @ 2021-04-28  5:33 UTC
  To: dri-devel, amd-gfx

SG BOs do not occupy space that is managed by TTM. So do not evict them.

This fixes unexpected evictions of KFD's userptr BOs. KFD only expects
userptr "evictions" in the form of MMU notifiers.

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index de1ec838cf8b..0b953654fdbf 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -655,6 +655,10 @@ int ttm_mem_evict_first(struct ttm_device *bdev,
 		list_for_each_entry(bo, &man->lru[i], lru) {
 			bool busy;
 
+			/* Don't evict SG BOs */
+			if (bo->ttm && bo->ttm->sg)
+				continue;
+
 			if (!ttm_bo_evict_swapout_allowable(bo, ctx, &locked,
 							    &busy)) {
 				if (busy && !busy_bo && ticket !=
-- 
2.31.1


* [PATCH 2/2] drm/ttm: Fix swapout in ttm_tt_populate
From: Felix Kuehling @ 2021-04-28  5:33 UTC
  To: dri-devel, amd-gfx

ttm_bo_swapout returns a non-zero value on success. Don't treat that as an
error in ttm_tt_populate.

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
---
 drivers/gpu/drm/ttm/ttm_tt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 5d8820725b75..1858a7fb9169 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -326,7 +326,7 @@ int ttm_tt_populate(struct ttm_device *bdev,
 	       ttm_dma32_pages_limit) {
 
 		ret = ttm_bo_swapout(ctx, GFP_KERNEL);
-		if (ret)
+		if (ret < 0)
 			goto error;
 	}
 
-- 
2.31.1


* Re: [PATCH 2/2] drm/ttm: Fix swapout in ttm_tt_populate
From: Christian König @ 2021-04-28  7:03 UTC
  To: Felix Kuehling, dri-devel, amd-gfx

That is already fixed upstream.


* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Christian König @ 2021-04-28  7:04 UTC
  To: Felix Kuehling, dri-devel, amd-gfx

On 28.04.21 at 07:33, Felix Kuehling wrote:
> SG BOs do not occupy space that is managed by TTM. So do not evict them.
>
> This fixes unexpected evictions of KFD's userptr BOs. KFD only expects
> userptr "evictions" in the form of MMU notifiers.

NAK, SG BOs also account for the memory the GPU can currently access.

We can ignore them for the allocated memory, but not for the GTT domain.

Christian.


* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Felix Kuehling @ 2021-04-28  7:49 UTC
  To: Christian König, dri-devel, amd-gfx


On 2021-04-28 at 3:04 a.m., Christian König wrote:
> Am 28.04.21 um 07:33 schrieb Felix Kuehling:
>> SG BOs do not occupy space that is managed by TTM. So do not evict them.
>>
>> This fixes unexpected evictions of KFD's userptr BOs. KFD only expects
>> userptr "evictions" in the form of MMU notifiers.
>
> NAK, SG BOs also account for the memory the GPU can currently access. 
>
> We can ignore them for the allocated memory, but not for the GTT domain.
Hmm, the only reason I found this problem is that I am now testing with
IOMMU enabled. Evicting the userptr BO destroys the DMA mapping. Without
IOMMU-enforced device isolation I was blissfully unaware that the
userptr BOs were being evicted. The GPUVM mappings were unaffected and
just worked without problems. Having to evict these BOs is crippling
KFD's ability to map system memory for GPU access, once again.

I think this affects not only userptr BOs but also DMABuf imports for
BOs shared between multiple GPUs.

The GTT size limitation is entirely artificial. And the only reason I
know of for keeping it limited to the VRAM size is to work around some
OOM issues with GTT BOs. Applying this to userptrs and DMABuf imports
makes no sense. But I understand that, the way TTM manages the GTT domain,
there is no easy fix for this. Maybe we'd have to create a new domain
for validating SG BOs that's separate from GTT, so that TTM would not
try to allocate GTT space for them.

Failing that, I'd probably have to abandon userptr BOs altogether and
switch system memory mappings over to using the new SVM API on systems
where it is available.

Regards,
  Felix



* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Christian König @ 2021-04-28  9:05 UTC
  To: Felix Kuehling, dri-devel, amd-gfx

On 28.04.21 at 09:49, Felix Kuehling wrote:
> On 2021-04-28 at 3:04 a.m., Christian König wrote:
>> On 28.04.21 at 07:33, Felix Kuehling wrote:
>>> SG BOs do not occupy space that is managed by TTM. So do not evict them.
>>>
>>> This fixes unexpected evictions of KFD's userptr BOs. KFD only expects
>>> userptr "evictions" in the form of MMU notifiers.
>> NAK, SG BOs also account for the memory the GPU can currently access.
>>
>> We can ignore them for the allocated memory, but not for the GTT domain.
> Hmm, the only reason I found this problem is that I am now testing with
> IOMMU enabled. Evicting the userptr BO destroys the DMA mapping. Without
> IOMMU-enforced device isolation I was blissfully unaware that the
> userptr BOs were being evicted. The GPUVM mappings were unaffected and
> just worked without problems. Having to evict these BOs is crippling
> KFD's ability to map system memory for GPU access, once again.
>
> I think this affects not only userptr BOs but also DMABuf imports for
> BOs shared between multiple GPUs.

Correct, yes.

> The GTT size limitation is entirely artificial. And the only reason I
> know of for keeping it limited to the VRAM size is to work around some
> OOM issues with GTT BOs. Applying this to userptrs and DMABuf imports
> makes no sense. But I understand that the way TTM manages the GTT domain
> there is no easy fix for this. Maybe we'd have to create a new domain
> for validating SG BOs that's separate from GTT, so that TTM would not
> try to allocate GTT space for them.

Well, that contradicts what the GTT domain is all about.

It should limit the amount of system memory the GPU can access at the 
same time. This includes imported DMA-bufs as well as userptrs.

That the GPUVM mappings are still there is certainly a bug we should 
look into, but in general if we don't want that limitation we need to 
increase the GTT size and not work around it.

But increasing the GTT size in turn has a huge negative impact on OOM
situations up to the point that the OOM killer can't work any more.
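
(For what it's worth, the limit can already be raised by hand: amdgpu has
a gttsize module parameter, sized in MiB, so e.g. amdgpu.gttsize=8192
gives an 8 GiB GTT. But that only widens the window, the OOM concern
above still applies.)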

> Failing that, I'd probably have to abandon userptr BOs altogether and
> switch system memory mappings over to using the new SVM API on systems
> where it is available.

Well as long as that provides the necessary functionality through HMM it 
would be an option.

Regards,
Christian.


* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Felix Kuehling @ 2021-04-28 15:19 UTC
  To: Christian König, dri-devel, amd-gfx

On 2021-04-28 at 5:05 a.m., Christian König wrote:
> On 28.04.21 at 09:49, Felix Kuehling wrote:
>> On 2021-04-28 at 3:04 a.m., Christian König wrote:
>>> On 28.04.21 at 07:33, Felix Kuehling wrote:
>>>> SG BOs do not occupy space that is managed by TTM. So do not evict
>>>> them.
>>>>
>>>> This fixes unexpected evictions of KFD's userptr BOs. KFD only expects
>>>> userptr "evictions" in the form of MMU notifiers.
>>> NAK, SG BOs also account for the memory the GPU can currently access.
>>>
>>> We can ignore them for the allocated memory, but not for the GTT
>>> domain.
>> Hmm, the only reason I found this problem is that I am now testing with
>> IOMMU enabled. Evicting the userptr BO destroys the DMA mapping. Without
>> IOMMU-enforced device isolation I was blissfully unaware that the
>> userptr BOs were being evicted. The GPUVM mappings were unaffected and
>> just worked without problems. Having to evict these BOs is crippling
>> KFD's ability to map system memory for GPU access, once again.
>>
>> I think this affects not only userptr BOs but also DMABuf imports for
>> BOs shared between multiple GPUs.
>
> Correct, yes.
>
>> The GTT size limitation is entirely artificial. And the only reason I
>> know of for keeping it limited to the VRAM size is to work around some
>> OOM issues with GTT BOs. Applying this to userptrs and DMABuf imports
>> makes no sense. But I understand that, the way TTM manages the GTT domain,
>> there is no easy fix for this. Maybe we'd have to create a new domain
>> for validating SG BOs that's separate from GTT, so that TTM would not
>> try to allocate GTT space for them.
>
> Well, that contradicts what the GTT domain is all about.
>
> It should limit the amount of system memory the GPU can access at the
> same time. This includes imported DMA-bufs as well as userptrs.

Hmm, I was missing something. The amdgpu_gtt_mgr doesn't actually
allocate space for many BOs:

        if (!place->lpfn) {
                mem->mm_node = NULL;
                mem->start = AMDGPU_BO_INVALID_OFFSET;
                return 0;
        }

I think our userptr BOs don't have mm_nodes and don't use GTT space. So
I could add a check for that to amdgpu_ttm_bo_eviction_valuable.
Evicting a BO that doesn't have an mm_node is not valuable because it
cannot free up any space.
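
Roughly something like this untested sketch (the field names follow the
5.12-era structs, and the real eviction_valuable has more checks than I
am showing here):

        static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
                                                    const struct ttm_place *place)
        {
                /* No mm_node means the BO occupies no managed GTT space,
                 * so evicting it cannot free anything up.
                 */
                if (bo->mem.mem_type == TTM_PL_TT && !bo->mem.mm_node)
                        return false;

                return ttm_bo_eviction_valuable(bo, place);
        }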


>
> That the GPUVM mappings are still there is certainly a bug we should
> look into, but in general if we don't want that limitation we need to
> increase the GTT size and not work around it.

I can fix that by adding the KFD eviction fence to userptr BOs. But
given the above suggestion, I think this would never be triggered by
ttm_mem_evict_first. Also not by ttm_bo_swapout, because SG BOs are
never added to the swap_lru (for good reason).


>
> But increasing the GTT size in turn has a huge negative impact on
> OOM situations up to the point that the OOM killer can't work any more.
>
>> Failing that, I'd probably have to abandon userptr BOs altogether and
>> switch system memory mappings over to using the new SVM API on systems
>> where it is available.
>
> Well as long as that provides the necessary functionality through HMM
> it would be an option.
Just another way of circumventing "It should limit the amount of system
memory the GPU can access at the same time," a premise I disagree with
in case of userptrs and HMM. Both use pageable, unpinned memory. Both
can cause the GPU to be preempted in case of MMU interval notifiers.
Statically limiting the amount of pageable memory accessible to GTT is
redundant and overly limiting.

Regards,
  Felix



* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Christian König @ 2021-04-28 16:33 UTC
  To: Felix Kuehling, dri-devel, amd-gfx

On 28.04.21 at 17:19, Felix Kuehling wrote:
> On 2021-04-28 at 5:05 a.m., Christian König wrote:
> [SNIP]
> Hmm, I was missing something. The amdgpu_gtt_mgr doesn't actually
> allocate space for many BOs:
>
>          if (!place->lpfn) {
>                  mem->mm_node = NULL;
>                  mem->start = AMDGPU_BO_INVALID_OFFSET;
>                  return 0;
>          }
>
> I think our userptr BOs don't have mm_nodes and don't use GTT space. So
> I could add a check for that to amdgpu_ttm_bo_eviction_valuable.

That's for allocating GART space and completely unrelated here.

[SNIP]
>>> Failing that, I'd probably have to abandon userptr BOs altogether and
>>> switch system memory mappings over to using the new SVM API on systems
>>> where it is available.
>> Well as long as that provides the necessary functionality through HMM
>> it would be an option.
> Just another way of circumventing "It should limit the amount of system
> memory the GPU can access at the same time," a premise I disagree with
> in case of userptrs and HMM. Both use pageable, unpinned memory.

> Both can cause the GPU to be preempted in case of MMU interval notifiers.

Well that's the key point. GFX userptrs and DMA-buf imports can't be 
preempted.

So they basically lock the backing memory until the last submission is 
completed and that is causing problems if it happens for too much memory
at the same time.

What we could do is figure out in the eviction_valuable callback whether
the BO is preemptible or not.

Regards,
Christian.


* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Felix Kuehling @ 2021-04-28 16:49 UTC
  To: Christian König, dri-devel, amd-gfx

On 2021-04-28 at 12:33 p.m., Christian König wrote:
> On 28.04.21 at 17:19, Felix Kuehling wrote:
>> On 2021-04-28 at 5:05 a.m., Christian König wrote:
>> [SNIP]
>> Hmm, I was missing something. The amdgpu_gtt_mgr doesn't actually
>> allocate space for many BOs:
>>
>>          if (!place->lpfn) {
>>                  mem->mm_node = NULL;
>>                  mem->start = AMDGPU_BO_INVALID_OFFSET;
>>                  return 0;
>>          }
>>
>> I think our userptr BOs don't have mm_nodes and don't use GTT space. So
>> I could add a check for that to amdgpu_ttm_bo_eviction_valuable.
>
> That's for allocating GART space and completely unrelated here.

Ah, I see, the GTT space allocation doesn't use an mm_node, but just the
mgr->available atomic counter.
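
If I am reading amdgpu_gtt_mgr_new correctly, the accounting amounts to
just this (paraphrased from memory, not a verbatim quote of the code):

        if (atomic64_sub_return(mem->num_pages, &mgr->available) < 0) {
                atomic64_add(mem->num_pages, &mgr->available);
                return -ENOSPC;
        }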


>
> [SNIP]
>>>> Failing that, I'd probably have to abandon userptr BOs altogether and
>>>> switch system memory mappings over to using the new SVM API on systems
>>>> where it is available.
>>> Well as long as that provides the necessary functionality through HMM
>>> it would be an option.
>> Just another way of circumventing "It should limit the amount of system
>> memory the GPU can access at the same time," a premise I disagree with
>> in case of userptrs and HMM. Both use pageable, unpinned memory.
>
>> Both can cause the GPU to be preempted in case of MMU interval
>> notifiers.
>
> Well that's the key point. GFX userptrs and DMA-buf imports can't be
> preempted.

But they don't need to be. They don't use any resources on the importing
GPU or system memory, so why do we limit them?

With dynamic attachment, the exported BOs can be evicted and that
affects the imports as well. I don't see why the import needs to be
evicted as if there was some resource limitation on the importing GPU.


>
> So they basically lock the backing memory until the last submission is
> completed and that is causing problems if it happens for too much
> memory at the same time.
>
> What we could do is figure out in the eviction_valuable callback whether
> the BO is preemptible or not.

Then we should also not count them in mgr->available. Otherwise not
evicting these BOs can block other GTT allocations. Again, maybe it's
easier to use a different domain for preemptible BOs.
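
For illustration only, such a domain could be registered next to GTT
along these lines (hypothetical sketch: the AMDGPU_PL_PREEMPT name and
the zero-sized, non-accounting manager are my invention here):

        #define AMDGPU_PL_PREEMPT       (TTM_PL_PRIV + 3)

        struct ttm_resource_manager *man = &mgr->manager;

        man->use_tt = true;
        /* p_size == 0: this manager does no space accounting at all */
        ttm_resource_manager_init(man, 0);
        ttm_set_driver_manager(&adev->mman.bdev, AMDGPU_PL_PREEMPT, man);
        ttm_resource_manager_set_used(man, true);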

Regards,
  Felix



* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Christian König @ 2021-04-28 16:58 UTC
  To: Felix Kuehling, dri-devel, amd-gfx

On 28.04.21 at 18:49, Felix Kuehling wrote:
> On 2021-04-28 at 12:33 p.m., Christian König wrote:
>> On 28.04.21 at 17:19, Felix Kuehling wrote:
>> [SNIP]
>>>>> Failing that, I'd probably have to abandon userptr BOs altogether and
>>>>> switch system memory mappings over to using the new SVM API on systems
>>>>> where it is available.
>>>> Well as long as that provides the necessary functionality through HMM
>>>> it would be an option.
>>> Just another way of circumventing "It should limit the amount of system
>>> memory the GPU can access at the same time," a premise I disagree with
>>> in case of userptrs and HMM. Both use pageable, unpinned memory.
>>> Both can cause the GPU to be preempted in case of MMU interval
>>> notifiers.
>> Well that's the key point. GFX userptrs and DMA-buf imports can't be
>> preempted.
> But they don't need to be. They don't use any resources on the importing
> GPU or system memory, so why do we limit them?

Yeah, but at least user pointers effectively pin their backing store as
long as the GPU operation is running.

> With dynamic attachment, the exported BOs can be evicted and that
> affects the imports as well. I don't see why the import needs to be
> evicted as if there was some resource limitation on the importing GPU.

It prevents multiple DMA-buf imports from being active at the same time.

See the following example: GTT space is 1GiB and we have two DMA-buf 
imports of 600MiB each.

When userspace wants to submit work using both at the same time we 
return -ENOSPC (or -ENOMEM, not 100% sure).

When one is in use and a submission is made with the other, we block
until that submission is completed.

This way there is never more than 1 GiB of memory in use or "pinned" by 
the GPU using it.
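
(I.e. 600 MiB + 600 MiB = 1200 MiB > 1024 MiB, so the second import can
never be resident while the first one is still in flight.)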

>> So they basically lock the backing memory until the last submission is
>> completed and that is causing problems if it happens for too much
>> memory at the same time.
>>
>> What we could do is figure out in the eviction_valuable callback whether
>> the BO is preemptible or not.
> Then we should also not count them in mgr->available. Otherwise not
> evicting these BOs can block other GTT allocations. Again, maybe it's
> easier to use a different domain for preemptible BOs.

Good point. That would also be valuable when we get user queues at some 
point.

Regards,
Christian.


* Re: [PATCH 1/2] drm/ttm: Don't evict SG BOs
From: Felix Kuehling @ 2021-04-28 17:02 UTC
  To: Christian König, dri-devel, amd-gfx

On 2021-04-28 at 12:58 p.m., Christian König wrote:
> On 28.04.21 at 18:49, Felix Kuehling wrote:
>> On 2021-04-28 at 12:33 p.m., Christian König wrote:
>>> On 28.04.21 at 17:19, Felix Kuehling wrote:
>>> [SNIP]
>>>>>> Failing that, I'd probably have to abandon userptr BOs altogether
>>>>>> and
>>>>>> switch system memory mappings over to using the new SVM API on
>>>>>> systems
>>>>>> where it is available.
>>>>> Well as long as that provides the necessary functionality through HMM
>>>>> it would be an option.
>>>> Just another way of circumventing "It should limit the amount of
>>>> system
>>>> memory the GPU can access at the same time," a premise I disagree with
>>>> in case of userptrs and HMM. Both use pageable, unpinned memory.
>>>> Both can cause the GPU to be preempted in case of MMU interval
>>>> notifiers.
>>> Well that's the key point. GFX userptrs and DMA-buf imports can't be
>>> preempted.
>> But they don't need to be. They don't use any resources on the importing
>> GPU or system memory, so why do we limit them?
>
> Yeah, but at least user pointers effectively pin their backing store as
> long as the GPU operation is running.
>
>> With dynamic attachment, the exported BOs can be evicted and that
>> affects the imports as well. I don't see why the import needs to be
>> evicted as if there was some resource limitation on the importing GPU.
>
> It prevents multiple DMA-buf imports from being active at the same time.
>
> See the following example: GTT space is 1GiB and we have two DMA-buf
> imports of 600MiB each.
>
> When userspace wants to submit work using both at the same time we
> return -ENOSPC (or -ENOMEM, not 100% sure).
>
> When one is in use and a submission is made with the other, we block
> until that submission is completed.
>
> This way there is never more than 1 GiB of memory in use or "pinned"
> by the GPU using it.

Is this reasonable for imports of VRAM in a multi-GPU system? E.g. you
allocate 600 MB on GPU A and 600 MB on GPU B. You export both and import
them on the other GPU because you want both GPUs to access each other's
memory. This is a common use case for KFD, and something we want to
implement for upstreamable PCIe P2P support.

With your limitation, I will never be able to validate both BOs and run
KFD user mode queues in the above scenario.

Regards,
  Felix


