amd-gfx.lists.freedesktop.org archive mirror
* [PATCH] drm/amdgpu: flush the fence on the bo after we individualize
@ 2020-01-15  6:26 Pan, Xinhui
  2020-01-15 13:19 ` Christian König
  0 siblings, 1 reply; 2+ messages in thread
From: Pan, Xinhui @ 2020-01-15  6:26 UTC (permalink / raw)
  To: amd-gfx; +Cc: Koenig, Christian

As we move ttm_bo_individualize_resv() upwards, we need to flush the
copied fences too. Otherwise the driver keeps waiting for a fence that
never signals.

Run & kill kfdtest, then check perf top:

  25.53%  [ttm]                     [k] ttm_bo_delayed_delete
  24.29%  [kernel]                  [k] dma_resv_test_signaled_rcu
  19.72%  [kernel]                  [k] ww_mutex_lock

Fixes: 378e2d5b ("drm/ttm: fix ttm_bo_cleanup_refs_or_queue once more")
Signed-off-by: xinhui pan <xinhui.pan@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 8d91b0428af1..1494aebb8128 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -499,8 +499,10 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 
 		dma_resv_unlock(bo->base.resv);
 	}
-	if (bo->base.resv != &bo->base._resv)
+	if (bo->base.resv != &bo->base._resv) {
+		ttm_bo_flush_all_fences(bo);
 		dma_resv_unlock(&bo->base._resv);
+	}
 
 error:
 	kref_get(&bo->list_kref);
-- 
2.17.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH] drm/amdgpu: flush the fence on the bo after we individualize
  2020-01-15  6:26 [PATCH] drm/amdgpu: flush the fence on the bo after we individualize Pan, Xinhui
@ 2020-01-15 13:19 ` Christian König
  0 siblings, 0 replies; 2+ messages in thread
From: Christian König @ 2020-01-15 13:19 UTC (permalink / raw)
  To: Pan, Xinhui, amd-gfx; +Cc: Koenig, Christian, dri-devel

Am 15.01.20 um 07:26 schrieb Pan, Xinhui:
> As we move ttm_bo_individualize_resv() upwards, we need to flush the
> copied fences too. Otherwise the driver keeps waiting for a fence that
> never signals.
>
> Run & kill kfdtest, then check perf top:
>
>    25.53%  [ttm]                     [k] ttm_bo_delayed_delete
>    24.29%  [kernel]                  [k] dma_resv_test_signaled_rcu
>    19.72%  [kernel]                  [k] ww_mutex_lock
>
> Fixes: 378e2d5b ("drm/ttm: fix ttm_bo_cleanup_refs_or_queue once more")
> Signed-off-by: xinhui pan <xinhui.pan@amd.com>

That's indeed a rather nice idea.

Reviewed-by: Christian König <christian.koenig@amd.com>

I'm going to pick that up for inclusion in drm-misc-next. Please send
TTM patches to the dri-devel mailing list as well in the future.

Christian.

> ---
>   drivers/gpu/drm/ttm/ttm_bo.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 8d91b0428af1..1494aebb8128 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -499,8 +499,10 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
>   
>   		dma_resv_unlock(bo->base.resv);
>   	}
> -	if (bo->base.resv != &bo->base._resv)
> +	if (bo->base.resv != &bo->base._resv) {
> +		ttm_bo_flush_all_fences(bo);
>   		dma_resv_unlock(&bo->base._resv);
> +	}
>   
>   error:
>   	kref_get(&bo->list_kref);


