* [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity realted ops for KIQ ring.
@ 2018-05-15 19:31 Andrey Grodzovsky
       [not found] ` <1526412674-15913-1-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-15 19:31 UTC (permalink / raw)
  To: amd-gfx, dri-devel; +Cc: Christian.Koenig

Follwoing change 75fbed2 we need to skip KIQ ring when iterating
amdgpu_ctx's scheduler entites.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 6741a62..744519b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -173,9 +173,14 @@ static void amdgpu_ctx_do_release(struct kref *ref)
 
 	ctx = container_of(ref, struct amdgpu_ctx, refcount);
 
-	for (i = 0; i < ctx->adev->num_rings; i++)
+	for (i = 0; i < ctx->adev->num_rings; i++) {
+
+		if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+			continue;
+
 		drm_sched_entity_fini(&ctx->adev->rings[i]->sched,
 			&ctx->rings[i].entity);
+	}
 
 	amdgpu_ctx_fini(ref);
 }
@@ -452,12 +457,17 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
 		if (!ctx->adev)
 			return;
 
-		for (i = 0; i < ctx->adev->num_rings; i++)
+		for (i = 0; i < ctx->adev->num_rings; i++) {
+
+			if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+				continue;
+
 			if (kref_read(&ctx->refcount) == 1)
 				drm_sched_entity_do_release(&ctx->adev->rings[i]->sched,
 						  &ctx->rings[i].entity);
 			else
 				DRM_ERROR("ctx %p is still alive\n", ctx);
+		}
 	}
 }
 
@@ -474,12 +484,17 @@ void amdgpu_ctx_mgr_entity_cleanup(struct amdgpu_ctx_mgr *mgr)
 		if (!ctx->adev)
 			return;
 
-		for (i = 0; i < ctx->adev->num_rings; i++)
+		for (i = 0; i < ctx->adev->num_rings; i++) {
+
+			if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+				continue;
+
 			if (kref_read(&ctx->refcount) == 1)
 				drm_sched_entity_cleanup(&ctx->adev->rings[i]->sched,
 					&ctx->rings[i].entity);
 			else
 				DRM_ERROR("ctx %p is still alive\n", ctx);
+		}
 	}
 }
 
-- 
2.7.4

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found] ` <1526412674-15913-1-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-15 19:31   ` Andrey Grodzovsky
       [not found]     ` <1526412674-15913-2-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
  2018-05-15 19:42   ` [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity realted ops for KIQ ring Alex Deucher
  1 sibling, 1 reply; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-15 19:31 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Andrey Grodzovsky, Christian.Koenig-5C7GfCeVMHo

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
 include/drm/gpu_scheduler.h               | 1 -
 2 files changed, 5 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 1f1dd70..2569a63 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
 	entity->last_scheduled = NULL;
 
 	spin_lock_init(&entity->rq_lock);
-	spin_lock_init(&entity->queue_lock);
 	spsc_queue_init(&entity->job_queue);
 
 	atomic_set(&entity->fence_seq, 0);
@@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 
 	trace_drm_sched_job(sched_job, entity);
 
-	spin_lock(&entity->queue_lock);
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
 
-	spin_unlock(&entity->queue_lock);
-
 	/* first job wakes up scheduler */
 	if (first) {
 		/* Add the entity to the run queue */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 350a62c..683eb65 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -56,7 +56,6 @@ struct drm_sched_entity {
 	spinlock_t			rq_lock;
 	struct drm_gpu_scheduler	*sched;
 
-	spinlock_t			queue_lock;
 	struct spsc_queue		job_queue;
 
 	atomic_t			fence_seq;
-- 
2.7.4

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]     ` <1526412674-15913-2-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-15 19:38       ` Alex Deucher
       [not found]         ` <CADnq5_Pp=itdSOwq75petxvte1kUcLxmLymp_5zy7LVrgowyVw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2018-05-16  3:04       ` zhoucm1
  1 sibling, 1 reply; 19+ messages in thread
From: Alex Deucher @ 2018-05-15 19:38 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: Mailing list - DRI developers, amd-gfx list, Christian Koenig

On Tue, May 15, 2018 at 3:31 PM, Andrey Grodzovsky
<andrey.grodzovsky@amd.com> wrote:
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Please provide a better patch description.

Alex

> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>  include/drm/gpu_scheduler.h               | 1 -
>  2 files changed, 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 1f1dd70..2569a63 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>         entity->last_scheduled = NULL;
>
>         spin_lock_init(&entity->rq_lock);
> -       spin_lock_init(&entity->queue_lock);
>         spsc_queue_init(&entity->job_queue);
>
>         atomic_set(&entity->fence_seq, 0);
> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>
>         trace_drm_sched_job(sched_job, entity);
>
> -       spin_lock(&entity->queue_lock);
>         first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>
> -       spin_unlock(&entity->queue_lock);
> -
>         /* first job wakes up scheduler */
>         if (first) {
>                 /* Add the entity to the run queue */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 350a62c..683eb65 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>         spinlock_t                      rq_lock;
>         struct drm_gpu_scheduler        *sched;
>
> -       spinlock_t                      queue_lock;
>         struct spsc_queue               job_queue;
>
>         atomic_t                        fence_seq;
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity realted ops for KIQ ring.
       [not found] ` <1526412674-15913-1-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
  2018-05-15 19:31   ` [PATCH 2/2] drm/scheduler: Remove obsolete spinlock Andrey Grodzovsky
@ 2018-05-15 19:42   ` Alex Deucher
       [not found]     ` <CADnq5_OtJDxZSrTe_peJtFTCm6T8QnBW-iri2UJv1yvOmjraNQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 19+ messages in thread
From: Alex Deucher @ 2018-05-15 19:42 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: Mailing list - DRI developers, amd-gfx list, Christian Koenig

On Tue, May 15, 2018 at 3:31 PM, Andrey Grodzovsky
<andrey.grodzovsky@amd.com> wrote:
> Follwoing change 75fbed2 we need to skip KIQ ring when iterating
> amdgpu_ctx's scheduler entites.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Typo in the title: realted -> related
Typo in the description: Follwoing -> Following
Also expand on the reasoning a bit in the patch description.  E.g., we
never initialize or use the GPU scheduler for KIQ.
With those things fixed:
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

Alex
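
For reference, the init-side counterpart looks roughly like the sketch below. This is a from-memory paraphrase of the amdgpu_ctx_init() loop after change 75fbed2, not a verbatim copy, and the exact drm_sched_entity_init() argument list may differ in the tree:

	for (i = 0; i < adev->num_rings; i++) {
		struct amdgpu_ring *ring = adev->rings[i];
		struct drm_sched_rq *rq = &ring->sched.sched_rq[priority];

		/* KIQ is driver-internal and never uses the GPU scheduler,
		 * so no context entity is created for it. */
		if (ring == &adev->gfx.kiq.ring)
			continue;

		r = drm_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
					  rq, amdgpu_sched_jobs, &ctx->guilty);
		if (r)
			goto failed;
	}

Since the teardown paths iterate the same rings array, they have to apply the same KIQ check, which is exactly what this patch adds.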

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index 6741a62..744519b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -173,9 +173,14 @@ static void amdgpu_ctx_do_release(struct kref *ref)
>
>         ctx = container_of(ref, struct amdgpu_ctx, refcount);
>
> -       for (i = 0; i < ctx->adev->num_rings; i++)
> +       for (i = 0; i < ctx->adev->num_rings; i++) {
> +
> +               if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
> +                       continue;
> +
>                 drm_sched_entity_fini(&ctx->adev->rings[i]->sched,
>                         &ctx->rings[i].entity);
> +       }
>
>         amdgpu_ctx_fini(ref);
>  }
> @@ -452,12 +457,17 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
>                 if (!ctx->adev)
>                         return;
>
> -               for (i = 0; i < ctx->adev->num_rings; i++)
> +               for (i = 0; i < ctx->adev->num_rings; i++) {
> +
> +                       if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
> +                               continue;
> +
>                         if (kref_read(&ctx->refcount) == 1)
>                                 drm_sched_entity_do_release(&ctx->adev->rings[i]->sched,
>                                                   &ctx->rings[i].entity);
>                         else
>                                 DRM_ERROR("ctx %p is still alive\n", ctx);
> +               }
>         }
>  }
>
> @@ -474,12 +484,17 @@ void amdgpu_ctx_mgr_entity_cleanup(struct amdgpu_ctx_mgr *mgr)
>                 if (!ctx->adev)
>                         return;
>
> -               for (i = 0; i < ctx->adev->num_rings; i++)
> +               for (i = 0; i < ctx->adev->num_rings; i++) {
> +
> +                       if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
> +                               continue;
> +
>                         if (kref_read(&ctx->refcount) == 1)
>                                 drm_sched_entity_cleanup(&ctx->adev->rings[i]->sched,
>                                         &ctx->rings[i].entity);
>                         else
>                                 DRM_ERROR("ctx %p is still alive\n", ctx);
> +               }
>         }
>  }
>
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]         ` <CADnq5_Pp=itdSOwq75petxvte1kUcLxmLymp_5zy7LVrgowyVw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2018-05-15 20:24           ` Andrey Grodzovsky
  0 siblings, 0 replies; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-15 20:24 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Mailing list - DRI developers, amd-gfx list, Christian Koenig

Yea, I might need to give another thought to whether this spinlock can
actually be removed.

Andrey

On 05/15/2018 03:38 PM, Alex Deucher wrote:
> On Tue, May 15, 2018 at 3:31 PM, Andrey Grodzovsky
> <andrey.grodzovsky@amd.com> wrote:
>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> Please provide a better patch description.
>
> Alex
>
>> ---
>>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>   include/drm/gpu_scheduler.h               | 1 -
>>   2 files changed, 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> index 1f1dd70..2569a63 100644
>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>>          entity->last_scheduled = NULL;
>>
>>          spin_lock_init(&entity->rq_lock);
>> -       spin_lock_init(&entity->queue_lock);
>>          spsc_queue_init(&entity->job_queue);
>>
>>          atomic_set(&entity->fence_seq, 0);
>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>
>>          trace_drm_sched_job(sched_job, entity);
>>
>> -       spin_lock(&entity->queue_lock);
>>          first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>>
>> -       spin_unlock(&entity->queue_lock);
>> -
>>          /* first job wakes up scheduler */
>>          if (first) {
>>                  /* Add the entity to the run queue */
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 350a62c..683eb65 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>          spinlock_t                      rq_lock;
>>          struct drm_gpu_scheduler        *sched;
>>
>> -       spinlock_t                      queue_lock;
>>          struct spsc_queue               job_queue;
>>
>>          atomic_t                        fence_seq;
>> --
>> 2.7.4
>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]     ` <1526412674-15913-2-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
  2018-05-15 19:38       ` Alex Deucher
@ 2018-05-16  3:04       ` zhoucm1
       [not found]         ` <dda4fb88-e228-e10b-46a5-27ba9888b78d-5C7GfCeVMHo@public.gmane.org>
  1 sibling, 1 reply; 19+ messages in thread
From: zhoucm1 @ 2018-05-16  3:04 UTC (permalink / raw)
  To: Andrey Grodzovsky, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Christian.Koenig-5C7GfCeVMHo



On 2018-05-16 03:31, Andrey Grodzovsky wrote:
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>   include/drm/gpu_scheduler.h               | 1 -
>   2 files changed, 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 1f1dd70..2569a63 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>   	entity->last_scheduled = NULL;
>   
>   	spin_lock_init(&entity->rq_lock);
> -	spin_lock_init(&entity->queue_lock);
>   	spsc_queue_init(&entity->job_queue);
>   
>   	atomic_set(&entity->fence_seq, 0);
> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>   
>   	trace_drm_sched_job(sched_job, entity);
>   
> -	spin_lock(&entity->queue_lock);
>   	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>   
> -	spin_unlock(&entity->queue_lock);
Is your spsc safely to be added simultaneously?

Regards,
David Zhou
> -
>   	/* first job wakes up scheduler */
>   	if (first) {
>   		/* Add the entity to the run queue */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 350a62c..683eb65 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>   	spinlock_t			rq_lock;
>   	struct drm_gpu_scheduler	*sched;
>   
> -	spinlock_t			queue_lock;
>   	struct spsc_queue		job_queue;
>   
>   	atomic_t			fence_seq;

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]         ` <dda4fb88-e228-e10b-46a5-27ba9888b78d-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-16  3:33           ` Grodzovsky, Andrey
       [not found]             ` <MWHPR12MB14530840CD7DB545DD286827EA920-Gy0DoCVfaSWZBIDmKHdw+wdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Grodzovsky, Andrey @ 2018-05-16  3:33 UTC (permalink / raw)
  To: Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Koenig, Christian

Yeah, that's what I am not sure about... It's lockless in the sense of single producer single consumer but not for multiple concurrent producers... So now I think this spinlock should stay there... It just looked useless to me at first sight...

Andrey
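
To make the single-producer assumption concrete, the push side of the spsc queue looks roughly like this - a from-memory paraphrase of include/drm/spsc_queue.h, not the exact code:

	static inline bool spsc_queue_push(struct spsc_queue *queue,
					   struct spsc_node *node)
	{
		struct spsc_node **tail;

		node->next = NULL;

		preempt_disable();

		/* Atomically claim the old tail slot and publish our own
		 * next pointer as the new tail. */
		tail = (struct spsc_node **)atomic_long_xchg(&queue->tail,
							     (long)&node->next);
		/* Link the new node into the slot we just claimed. */
		WRITE_ONCE(*tail, node);
		atomic_inc(&queue->job_count);

		preempt_enable();

		/* True if the queue was empty, i.e. this was the first job. */
		return tail == &queue->head;
	}

Because the tail is swapped atomically, two concurrent pushers should not be able to corrupt the list itself; the open question is only the order in which their jobs end up in the queue, which is where the rest of this thread goes.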

________________________________________
From: Zhou, David(ChunMing)
Sent: 15 May 2018 23:04:44
To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Cc: Koenig, Christian
Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.



On 2018-05-16 03:31, Andrey Grodzovsky wrote:
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>   include/drm/gpu_scheduler.h               | 1 -
>   2 files changed, 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 1f1dd70..2569a63 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>       entity->last_scheduled = NULL;
>
>       spin_lock_init(&entity->rq_lock);
> -     spin_lock_init(&entity->queue_lock);
>       spsc_queue_init(&entity->job_queue);
>
>       atomic_set(&entity->fence_seq, 0);
> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>
>       trace_drm_sched_job(sched_job, entity);
>
> -     spin_lock(&entity->queue_lock);
>       first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>
> -     spin_unlock(&entity->queue_lock);
Is your spsc safely to be added simultaneously?

Regards,
David Zhou
> -
>       /* first job wakes up scheduler */
>       if (first) {
>               /* Add the entity to the run queue */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 350a62c..683eb65 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>       spinlock_t                      rq_lock;
>       struct drm_gpu_scheduler        *sched;
>
> -     spinlock_t                      queue_lock;
>       struct spsc_queue               job_queue;
>
>       atomic_t                        fence_seq;

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]             ` <MWHPR12MB14530840CD7DB545DD286827EA920-Gy0DoCVfaSWZBIDmKHdw+wdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2018-05-16  6:50               ` Christian König
       [not found]                 ` <3a54ce0c-6821-9c94-a67f-2b1a3c74959d-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-05-16  6:50 UTC (permalink / raw)
  To: Grodzovsky, Andrey, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

No, that spinlock is indeed incorrect.

See, even when we protect the spsc queue with a spinlock, that doesn't
make it correct. It can happen that the jobs pushed to the queue are
reversed in their sequence order, and that can cause severe problems in
the memory management.

Christian.
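
Spelled out with a concrete interleaving (assuming two tasks submitting to the same entity and no lock outside the scheduler): task A calls drm_sched_job_init() and its fence gets sequence number N; task B then calls drm_sched_job_init() and gets N+1; but B reaches drm_sched_entity_push_job() first. The queue itself stays intact, yet N+1 now sits in front of N, so anything that treats the sequence number as completion order - in particular the memory management - can be misled.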

On 16.05.2018 at 05:33, Grodzovsky, Andrey wrote:
> Yeah, that what I am not sure about... It's lockless in a sense of single producer single consumer but not for multiple concurrent producers... So now I think this spinlock should stay there... It just looked useless to me at first sight...
>
> Andrey
>
> ________________________________________
> From: Zhou, David(ChunMing)
> Sent: 15 May 2018 23:04:44
> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> Cc: Koenig, Christian
> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>
>
>
> On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>> ---
>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>    include/drm/gpu_scheduler.h               | 1 -
>>    2 files changed, 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> index 1f1dd70..2569a63 100644
>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>>        entity->last_scheduled = NULL;
>>
>>        spin_lock_init(&entity->rq_lock);
>> -     spin_lock_init(&entity->queue_lock);
>>        spsc_queue_init(&entity->job_queue);
>>
>>        atomic_set(&entity->fence_seq, 0);
>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>
>>        trace_drm_sched_job(sched_job, entity);
>>
>> -     spin_lock(&entity->queue_lock);
>>        first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>>
>> -     spin_unlock(&entity->queue_lock);
> Is your spsc safely to be added simultaneously?
>
> Regards,
> David Zhou
>> -
>>        /* first job wakes up scheduler */
>>        if (first) {
>>                /* Add the entity to the run queue */
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 350a62c..683eb65 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>        spinlock_t                      rq_lock;
>>        struct drm_gpu_scheduler        *sched;
>>
>> -     spinlock_t                      queue_lock;
>>        struct spsc_queue               job_queue;
>>
>>        atomic_t                        fence_seq;

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity realted ops for KIQ ring.
       [not found]     ` <CADnq5_OtJDxZSrTe_peJtFTCm6T8QnBW-iri2UJv1yvOmjraNQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2018-05-16  7:05       ` Christian König
  0 siblings, 0 replies; 19+ messages in thread
From: Christian König @ 2018-05-16  7:05 UTC (permalink / raw)
  To: Alex Deucher, Andrey Grodzovsky
  Cc: amd-gfx list, Mailing list - DRI developers, Christian Koenig

On 15.05.2018 at 21:42, Alex Deucher wrote:
> On Tue, May 15, 2018 at 3:31 PM, Andrey Grodzovsky
> <andrey.grodzovsky@amd.com> wrote:
>> Follwoing change 75fbed2 we need to skip KIQ ring when iterating
>> amdgpu_ctx's scheduler entites.
>>
>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> Typo in the title: realted -> related
> Typo in the description: Follwoing -> Following
> Also expand on the reasoning a bit in the patch description.  E.g., we
> never initialize or use the GPU scheduler for KIQ.
> With those things fixes:
> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

With the typos fixed, Reviewed-by: Christian König
<christian.koenig@amd.com>.

Christian.

>
> Alex
>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 21 ++++++++++++++++++---
>>   1 file changed, 18 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> index 6741a62..744519b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> @@ -173,9 +173,14 @@ static void amdgpu_ctx_do_release(struct kref *ref)
>>
>>          ctx = container_of(ref, struct amdgpu_ctx, refcount);
>>
>> -       for (i = 0; i < ctx->adev->num_rings; i++)
>> +       for (i = 0; i < ctx->adev->num_rings; i++) {
>> +
>> +               if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
>> +                       continue;
>> +
>>                  drm_sched_entity_fini(&ctx->adev->rings[i]->sched,
>>                          &ctx->rings[i].entity);
>> +       }
>>
>>          amdgpu_ctx_fini(ref);
>>   }
>> @@ -452,12 +457,17 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
>>                  if (!ctx->adev)
>>                          return;
>>
>> -               for (i = 0; i < ctx->adev->num_rings; i++)
>> +               for (i = 0; i < ctx->adev->num_rings; i++) {
>> +
>> +                       if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
>> +                               continue;
>> +
>>                          if (kref_read(&ctx->refcount) == 1)
>>                                  drm_sched_entity_do_release(&ctx->adev->rings[i]->sched,
>>                                                    &ctx->rings[i].entity);
>>                          else
>>                                  DRM_ERROR("ctx %p is still alive\n", ctx);
>> +               }
>>          }
>>   }
>>
>> @@ -474,12 +484,17 @@ void amdgpu_ctx_mgr_entity_cleanup(struct amdgpu_ctx_mgr *mgr)
>>                  if (!ctx->adev)
>>                          return;
>>
>> -               for (i = 0; i < ctx->adev->num_rings; i++)
>> +               for (i = 0; i < ctx->adev->num_rings; i++) {
>> +
>> +                       if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
>> +                               continue;
>> +
>>                          if (kref_read(&ctx->refcount) == 1)
>>                                  drm_sched_entity_cleanup(&ctx->adev->rings[i]->sched,
>>                                          &ctx->rings[i].entity);
>>                          else
>>                                  DRM_ERROR("ctx %p is still alive\n", ctx);
>> +               }
>>          }
>>   }
>>
>> --
>> 2.7.4
>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                 ` <3a54ce0c-6821-9c94-a67f-2b1a3c74959d-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-16 11:15                   ` Andrey Grodzovsky
  2018-05-16 11:23                     ` Christian König
  0 siblings, 1 reply; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-16 11:15 UTC (permalink / raw)
  To: Christian König, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Can you please elaborate more, maybe give an example - I don't 
understand yet the problematic scenario.

Andrey


On 05/16/2018 02:50 AM, Christian König wrote:
> No, that spinlock is indeed incorrect. I
>
> See even when we protect the spsc queue with a spinlock that doesn't 
> make it correct. It can happen that the jobs pushed to the queue are 
> reversed in their sequence order and that can cause severe problems in 
> the memory management.
>
> Christian.
>
> Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
>> Yeah, that what I am not sure about... It's lockless in a sense of 
>> single producer single consumer but not for multiple concurrent 
>> producers... So now I think this spinlock should stay there... It 
>> just looked useless to me at first sight...
>>
>> Andrey
>>
>> ________________________________________
>> From: Zhou, David(ChunMing)
>> Sent: 15 May 2018 23:04:44
>> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; 
>> dri-devel@lists.freedesktop.org
>> Cc: Koenig, Christian
>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>>
>>
>>
>> On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> ---
>>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>    include/drm/gpu_scheduler.h               | 1 -
>>>    2 files changed, 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> index 1f1dd70..2569a63 100644
>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
>>> drm_gpu_scheduler *sched,
>>>        entity->last_scheduled = NULL;
>>>
>>>        spin_lock_init(&entity->rq_lock);
>>> -     spin_lock_init(&entity->queue_lock);
>>>        spsc_queue_init(&entity->job_queue);
>>>
>>>        atomic_set(&entity->fence_seq, 0);
>>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct 
>>> drm_sched_job *sched_job,
>>>
>>>        trace_drm_sched_job(sched_job, entity);
>>>
>>> -     spin_lock(&entity->queue_lock);
>>>        first = spsc_queue_push(&entity->job_queue, 
>>> &sched_job->queue_node);
>>>
>>> -     spin_unlock(&entity->queue_lock);
>> Is your spsc safely to be added simultaneously?
>>
>> Regards,
>> David Zhou
>>> -
>>>        /* first job wakes up scheduler */
>>>        if (first) {
>>>                /* Add the entity to the run queue */
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index 350a62c..683eb65 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>        spinlock_t                      rq_lock;
>>>        struct drm_gpu_scheduler        *sched;
>>>
>>> -     spinlock_t                      queue_lock;
>>>        struct spsc_queue               job_queue;
>>>
>>>        atomic_t                        fence_seq;
>

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
  2018-05-16 11:15                   ` Andrey Grodzovsky
@ 2018-05-16 11:23                     ` Christian König
       [not found]                       ` <0d936838-8a88-9bd7-d4ff-053cd73f41cc-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-05-16 11:23 UTC (permalink / raw)
  To: Andrey Grodzovsky, Christian König, Zhou, David(ChunMing),
	amd-gfx, dri-devel

drm_sched_fence_create() assigns a sequence number to the fence it creates.

Now drm_sched_fence_create() is called by drm_sched_job_init() to 
initialize the jobs we want to push on the scheduler queue.

When you now call drm_sched_entity_push_job() without a protecting lock
it can happen that you push two (or even more) jobs with reversed
sequence numbers.

Since the sequence numbers are used to determine job completion order,
reversing them can seriously mess things up.

So the spin lock should be superfluous; if it isn't, we have a much
larger bug we need to fix.

Christian.
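
The relevant bit of the fence creation is roughly the following - abridged and reconstructed from memory of drm_sched_fence_create() at that time, so the details may not match the tree exactly:

	struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
						       void *owner)
	{
		struct drm_sched_fence *fence;
		unsigned seq;

		/* ... allocation and basic setup elided ... */

		/* The sequence number is taken here, i.e. already during
		 * drm_sched_job_init(), long before the job is pushed to
		 * the entity's queue. */
		seq = atomic_inc_return(&entity->fence_seq);
		dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
			       &fence->lock, entity->fence_context, seq);
		dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
			       &fence->lock, entity->fence_context + 1, seq);

		return fence;
	}

So if two submitters race between drm_sched_job_init() and drm_sched_entity_push_job(), the job with the higher sequence number can land in the queue first, and a lock around the push alone cannot repair that.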

On 16.05.2018 at 13:15, Andrey Grodzovsky wrote:
> Can you please elaborate more, maybe give an example - I don't 
> understand yet the problematic scenario.
>
> Andrey
>
>
> On 05/16/2018 02:50 AM, Christian König wrote:
>> No, that spinlock is indeed incorrect. I
>>
>> See even when we protect the spsc queue with a spinlock that doesn't 
>> make it correct. It can happen that the jobs pushed to the queue are 
>> reversed in their sequence order and that can cause severe problems 
>> in the memory management.
>>
>> Christian.
>>
>> Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
>>> Yeah, that what I am not sure about... It's lockless in a sense of 
>>> single producer single consumer but not for multiple concurrent 
>>> producers... So now I think this spinlock should stay there... It 
>>> just looked useless to me at first sight...
>>>
>>> Andrey
>>>
>>> ________________________________________
>>> From: Zhou, David(ChunMing)
>>> Sent: 15 May 2018 23:04:44
>>> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; 
>>> dri-devel@lists.freedesktop.org
>>> Cc: Koenig, Christian
>>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>>>
>>>
>>>
>>> On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>> ---
>>>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>>    include/drm/gpu_scheduler.h               | 1 -
>>>>    2 files changed, 5 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
>>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>> index 1f1dd70..2569a63 100644
>>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
>>>> drm_gpu_scheduler *sched,
>>>>        entity->last_scheduled = NULL;
>>>>
>>>>        spin_lock_init(&entity->rq_lock);
>>>> -     spin_lock_init(&entity->queue_lock);
>>>>        spsc_queue_init(&entity->job_queue);
>>>>
>>>>        atomic_set(&entity->fence_seq, 0);
>>>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct 
>>>> drm_sched_job *sched_job,
>>>>
>>>>        trace_drm_sched_job(sched_job, entity);
>>>>
>>>> -     spin_lock(&entity->queue_lock);
>>>>        first = spsc_queue_push(&entity->job_queue, 
>>>> &sched_job->queue_node);
>>>>
>>>> -     spin_unlock(&entity->queue_lock);
>>> Is your spsc safely to be added simultaneously?
>>>
>>> Regards,
>>> David Zhou
>>>> -
>>>>        /* first job wakes up scheduler */
>>>>        if (first) {
>>>>                /* Add the entity to the run queue */
>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>> index 350a62c..683eb65 100644
>>>> --- a/include/drm/gpu_scheduler.h
>>>> +++ b/include/drm/gpu_scheduler.h
>>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>>        spinlock_t                      rq_lock;
>>>>        struct drm_gpu_scheduler        *sched;
>>>>
>>>> -     spinlock_t                      queue_lock;
>>>>        struct spsc_queue               job_queue;
>>>>
>>>>        atomic_t                        fence_seq;
>>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                       ` <0d936838-8a88-9bd7-d4ff-053cd73f41cc-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2018-05-16 11:47                         ` Andrey Grodzovsky
  2018-05-16 12:08                           ` Christian König
  0 siblings, 1 reply; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-16 11:47 UTC (permalink / raw)
  To: christian.koenig-5C7GfCeVMHo, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

So are you saying that you expect this to already be in the code for any
usage of drm_sched_fence_create and drm_sched_entity_push_job?

lock()

drm_sched_fence_create()

... (some code)

drm_sched_entity_push_job()

unlock()


Andrey

On 05/16/2018 07:23 AM, Christian König wrote:
> drm_sched_fence_create() assigns a sequence number to the fence it 
> creates.
>
> Now drm_sched_fence_create() is called by drm_sched_job_init() to 
> initialize the jobs we want to push on the scheduler queue.
>
> When you now call drm_sched_entity_push_job() without a protecting 
> lock it can happen that you push two (or even more) job with reversed 
> sequence numbers.
>
> Since the sequence numbers are used to determine job completion order 
> reversing them can seriously mess things up.
>
> So the spin lock should be superfluous, if it isn't we have a much 
> larger bug we need to fix.
>
> Christian.
>
> Am 16.05.2018 um 13:15 schrieb Andrey Grodzovsky:
>> Can you please elaborate more, maybe give an example - I don't 
>> understand yet the problematic scenario.
>>
>> Andrey
>>
>>
>> On 05/16/2018 02:50 AM, Christian König wrote:
>>> No, that spinlock is indeed incorrect. I
>>>
>>> See even when we protect the spsc queue with a spinlock that doesn't 
>>> make it correct. It can happen that the jobs pushed to the queue are 
>>> reversed in their sequence order and that can cause severe problems 
>>> in the memory management.
>>>
>>> Christian.
>>>
>>> Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
>>>> Yeah, that what I am not sure about... It's lockless in a sense of 
>>>> single producer single consumer but not for multiple concurrent 
>>>> producers... So now I think this spinlock should stay there... It 
>>>> just looked useless to me at first sight...
>>>>
>>>> Andrey
>>>>
>>>> ________________________________________
>>>> From: Zhou, David(ChunMing)
>>>> Sent: 15 May 2018 23:04:44
>>>> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; 
>>>> dri-devel@lists.freedesktop.org
>>>> Cc: Koenig, Christian
>>>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>>>>
>>>>
>>>>
>>>> On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>> ---
>>>>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>>>    include/drm/gpu_scheduler.h               | 1 -
>>>>>    2 files changed, 5 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
>>>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> index 1f1dd70..2569a63 100644
>>>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
>>>>> drm_gpu_scheduler *sched,
>>>>>        entity->last_scheduled = NULL;
>>>>>
>>>>>        spin_lock_init(&entity->rq_lock);
>>>>> -     spin_lock_init(&entity->queue_lock);
>>>>>        spsc_queue_init(&entity->job_queue);
>>>>>
>>>>>        atomic_set(&entity->fence_seq, 0);
>>>>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct 
>>>>> drm_sched_job *sched_job,
>>>>>
>>>>>        trace_drm_sched_job(sched_job, entity);
>>>>>
>>>>> -     spin_lock(&entity->queue_lock);
>>>>>        first = spsc_queue_push(&entity->job_queue, 
>>>>> &sched_job->queue_node);
>>>>>
>>>>> -     spin_unlock(&entity->queue_lock);
>>>> Is your spsc safely to be added simultaneously?
>>>>
>>>> Regards,
>>>> David Zhou
>>>>> -
>>>>>        /* first job wakes up scheduler */
>>>>>        if (first) {
>>>>>                /* Add the entity to the run queue */
>>>>> diff --git a/include/drm/gpu_scheduler.h 
>>>>> b/include/drm/gpu_scheduler.h
>>>>> index 350a62c..683eb65 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>>>        spinlock_t                      rq_lock;
>>>>>        struct drm_gpu_scheduler        *sched;
>>>>>
>>>>> -     spinlock_t                      queue_lock;
>>>>>        struct spsc_queue               job_queue;
>>>>>
>>>>>        atomic_t                        fence_seq;
>>>
>>
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
  2018-05-16 11:47                         ` Andrey Grodzovsky
@ 2018-05-16 12:08                           ` Christian König
       [not found]                             ` <0be3bb9a-004a-7390-469c-3f974ed44055-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-05-16 12:08 UTC (permalink / raw)
  To: Andrey Grodzovsky, Zhou, David(ChunMing), amd-gfx, dri-devel

Yes, exactly.

For normal user space command submission we should have tons of locks 
guaranteeing that (e.g. just the VM lock should do).

For kernel moves we have the mutex for the GTT windows which protects it.

There could be problems with the UVD/VCE queues cleaning up the handles
when an application crashes.

Christian.
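
Put differently, every submission path is expected to already look something like the sketch below. The names are made up for illustration - "submit_lock" stands in for whatever already serializes submissions to the entity (the VM lock for user space CS, the GTT window mutex for kernel moves, ...), and the argument lists follow the API of that time as remembered, so they may not be exact:

	mutex_lock(&submit_lock);

	/* The fence sequence number is allocated in here ... */
	r = drm_sched_job_init(&job->base, &ring->sched, &entity, owner);
	if (r) {
		mutex_unlock(&submit_lock);
		return r;
	}

	/* ... so the push must happen under the same lock, otherwise the
	 * sequence numbers and the queue order can end up reversed. */
	drm_sched_entity_push_job(&job->base, &entity);

	mutex_unlock(&submit_lock);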

On 16.05.2018 at 13:47, Andrey Grodzovsky wrote:
> So are you saying that you expect this to  be already in code for any 
> usage of drm_sched_fence_create and drm_sched_entity_push_job ?
>
> lock()
>
> drm_sched_fence_create()
>
> ... (some code)
>
> drm_sched_entity_push_job()
>
> unlock()
>
>
> Andrey
>
> On 05/16/2018 07:23 AM, Christian König wrote:
>> drm_sched_fence_create() assigns a sequence number to the fence it 
>> creates.
>>
>> Now drm_sched_fence_create() is called by drm_sched_job_init() to 
>> initialize the jobs we want to push on the scheduler queue.
>>
>> When you now call drm_sched_entity_push_job() without a protecting 
>> lock it can happen that you push two (or even more) job with reversed 
>> sequence numbers.
>>
>> Since the sequence numbers are used to determine job completion order 
>> reversing them can seriously mess things up.
>>
>> So the spin lock should be superfluous, if it isn't we have a much 
>> larger bug we need to fix.
>>
>> Christian.
>>
>> Am 16.05.2018 um 13:15 schrieb Andrey Grodzovsky:
>>> Can you please elaborate more, maybe give an example - I don't 
>>> understand yet the problematic scenario.
>>>
>>> Andrey
>>>
>>>
>>> On 05/16/2018 02:50 AM, Christian König wrote:
>>>> No, that spinlock is indeed incorrect. I
>>>>
>>>> See even when we protect the spsc queue with a spinlock that 
>>>> doesn't make it correct. It can happen that the jobs pushed to the 
>>>> queue are reversed in their sequence order and that can cause 
>>>> severe problems in the memory management.
>>>>
>>>> Christian.
>>>>
>>>> Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
>>>>> Yeah, that what I am not sure about... It's lockless in a sense of 
>>>>> single producer single consumer but not for multiple concurrent 
>>>>> producers... So now I think this spinlock should stay there... It 
>>>>> just looked useless to me at first sight...
>>>>>
>>>>> Andrey
>>>>>
>>>>> ________________________________________
>>>>> From: Zhou, David(ChunMing)
>>>>> Sent: 15 May 2018 23:04:44
>>>>> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; 
>>>>> dri-devel@lists.freedesktop.org
>>>>> Cc: Koenig, Christian
>>>>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>>>>>
>>>>>
>>>>>
>>>>> On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>> ---
>>>>>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>>>>    include/drm/gpu_scheduler.h               | 1 -
>>>>>>    2 files changed, 5 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
>>>>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>> index 1f1dd70..2569a63 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
>>>>>> drm_gpu_scheduler *sched,
>>>>>>        entity->last_scheduled = NULL;
>>>>>>
>>>>>>        spin_lock_init(&entity->rq_lock);
>>>>>> -     spin_lock_init(&entity->queue_lock);
>>>>>>        spsc_queue_init(&entity->job_queue);
>>>>>>
>>>>>>        atomic_set(&entity->fence_seq, 0);
>>>>>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct 
>>>>>> drm_sched_job *sched_job,
>>>>>>
>>>>>>        trace_drm_sched_job(sched_job, entity);
>>>>>>
>>>>>> -     spin_lock(&entity->queue_lock);
>>>>>>        first = spsc_queue_push(&entity->job_queue, 
>>>>>> &sched_job->queue_node);
>>>>>>
>>>>>> -     spin_unlock(&entity->queue_lock);
>>>>> Is your spsc safely to be added simultaneously?
>>>>>
>>>>> Regards,
>>>>> David Zhou
>>>>>> -
>>>>>>        /* first job wakes up scheduler */
>>>>>>        if (first) {
>>>>>>                /* Add the entity to the run queue */
>>>>>> diff --git a/include/drm/gpu_scheduler.h 
>>>>>> b/include/drm/gpu_scheduler.h
>>>>>> index 350a62c..683eb65 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>>>>        spinlock_t                      rq_lock;
>>>>>>        struct drm_gpu_scheduler        *sched;
>>>>>>
>>>>>> -     spinlock_t                      queue_lock;
>>>>>>        struct spsc_queue               job_queue;
>>>>>>
>>>>>>        atomic_t                        fence_seq;
>>>>
>>>
>>> _______________________________________________
>>> amd-gfx mailing list
>>> amd-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                             ` <0be3bb9a-004a-7390-469c-3f974ed44055-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-16 12:28                               ` Lucas Stach
       [not found]                                 ` <1526473713.10230.1.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Lucas Stach @ 2018-05-16 12:28 UTC (permalink / raw)
  To: Christian König, Andrey Grodzovsky, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Wednesday, 16.05.2018, 14:08 +0200, Christian König wrote:
> Yes, exactly.
> 
> For normal user space command submission we should have tons of
> locks 
> guaranteeing that (e.g. just the VM lock should do).
> 
> For kernel moves we have the mutex for the GTT windows which protects
> it.
> 
> There could be problems with the UVD/VCE queues cleaning up the handles
> when an application crashes.

FWIW, etnaviv is currently completely unlocked in this path, but I
believe that this isn't an issue as the sched fence seq numbers are per
entity. So to actually end up with reversed seqnos one context has to
preempt itself to do another submit, while the current one hasn't
returned from kernel space, which I believe is a fairly theoretical
issue. Is my understanding correct?

Regards,
Lucas
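
For reference, the counters in question are indeed per entity - abridged from struct drm_sched_entity in include/drm/gpu_scheduler.h, with unrelated members left out:

	struct drm_sched_entity {
		/* ... */
		struct spsc_queue		job_queue;

		atomic_t			fence_seq;	/* per-entity sequence counter */
		uint64_t			fence_context;	/* per-entity dma_fence context */
		/* ... */
	};

So reversed seqnos can only occur between two submissions to the same entity, which is why the "context would have to preempt itself" argument above applies.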

> 
> Am 16.05.2018 um 13:47 schrieb Andrey Grodzovsky:
> > So are you saying that you expect this to  be already in code for
> > any 
> > usage of drm_sched_fence_create and drm_sched_entity_push_job ?
> > 
> > lock()
> > 
> > drm_sched_fence_create()
> > 
> > ... (some code)
> > 
> > drm_sched_entity_push_job()
> > 
> > unlock()
> > 
> > 
> > Andrey
> > 
> > On 05/16/2018 07:23 AM, Christian König wrote:
> > > drm_sched_fence_create() assigns a sequence number to the fence
> > > it 
> > > creates.
> > > 
> > > Now drm_sched_fence_create() is called by drm_sched_job_init()
> > > to 
> > > initialize the jobs we want to push on the scheduler queue.
> > > 
> > > When you now call drm_sched_entity_push_job() without a
> > > protecting 
> > > lock it can happen that you push two (or even more) job with
> > > reversed 
> > > sequence numbers.
> > > 
> > > Since the sequence numbers are used to determine job completion
> > > order 
> > > reversing them can seriously mess things up.
> > > 
> > > So the spin lock should be superfluous, if it isn't we have a
> > > much 
> > > larger bug we need to fix.
> > > 
> > > Christian.
> > > 
> > > Am 16.05.2018 um 13:15 schrieb Andrey Grodzovsky:
> > > > Can you please elaborate more, maybe give an example - I don't 
> > > > understand yet the problematic scenario.
> > > > 
> > > > Andrey
> > > > 
> > > > 
> > > > On 05/16/2018 02:50 AM, Christian König wrote:
> > > > > No, that spinlock is indeed incorrect. I
> > > > > 
> > > > > See even when we protect the spsc queue with a spinlock that 
> > > > > doesn't make it correct. It can happen that the jobs pushed
> > > > > to the 
> > > > > queue are reversed in their sequence order and that can
> > > > > cause 
> > > > > severe problems in the memory management.
> > > > > 
> > > > > Christian.
> > > > > 
> > > > > Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
> > > > > > Yeah, that what I am not sure about... It's lockless in a
> > > > > > sense of 
> > > > > > single producer single consumer but not for multiple
> > > > > > concurrent 
> > > > > > producers... So now I think this spinlock should stay
> > > > > > there... It 
> > > > > > just looked useless to me at first sight...
> > > > > > 
> > > > > > Andrey
> > > > > > 
> > > > > > ________________________________________
> > > > > > From: Zhou, David(ChunMing)
> > > > > > Sent: 15 May 2018 23:04:44
> > > > > > To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org; 
> > > > > > dri-devel@lists.freedesktop.org
> > > > > > Cc: Koenig, Christian
> > > > > > Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete
> > > > > > spinlock.
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On 2018年05月16日 03:31, Andrey Grodzovsky wrote:
> > > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.c
> > > > > > > om>
> > > > > > > ---
> > > > > > >    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
> > > > > > >    include/drm/gpu_scheduler.h               | 1 -
> > > > > > >    2 files changed, 5 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
> > > > > > > b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > > > > > > index 1f1dd70..2569a63 100644
> > > > > > > --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > > > > > > +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > > > > > > @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
> > > > > > > drm_gpu_scheduler *sched,
> > > > > > >        entity->last_scheduled = NULL;
> > > > > > > 
> > > > > > >        spin_lock_init(&entity->rq_lock);
> > > > > > > -     spin_lock_init(&entity->queue_lock);
> > > > > > >        spsc_queue_init(&entity->job_queue);
> > > > > > > 
> > > > > > >        atomic_set(&entity->fence_seq, 0);
> > > > > > > @@ -424,11 +423,8 @@ void
> > > > > > > drm_sched_entity_push_job(struct 
> > > > > > > drm_sched_job *sched_job,
> > > > > > > 
> > > > > > >        trace_drm_sched_job(sched_job, entity);
> > > > > > > 
> > > > > > > -     spin_lock(&entity->queue_lock);
> > > > > > >        first = spsc_queue_push(&entity->job_queue, 
> > > > > > > &sched_job->queue_node);
> > > > > > > 
> > > > > > > -     spin_unlock(&entity->queue_lock);
> > > > > > 
> > > > > > Is your spsc safely to be added simultaneously?
> > > > > > 
> > > > > > Regards,
> > > > > > David Zhou
> > > > > > > -
> > > > > > >        /* first job wakes up scheduler */
> > > > > > >        if (first) {
> > > > > > >                /* Add the entity to the run queue */
> > > > > > > diff --git a/include/drm/gpu_scheduler.h 
> > > > > > > b/include/drm/gpu_scheduler.h
> > > > > > > index 350a62c..683eb65 100644
> > > > > > > --- a/include/drm/gpu_scheduler.h
> > > > > > > +++ b/include/drm/gpu_scheduler.h
> > > > > > > @@ -56,7 +56,6 @@ struct drm_sched_entity {
> > > > > > >        spinlock_t                      rq_lock;
> > > > > > >        struct drm_gpu_scheduler        *sched;
> > > > > > > 
> > > > > > > -     spinlock_t                      queue_lock;
> > > > > > >        struct spsc_queue               job_queue;
> > > > > > > 
> > > > > > >        atomic_t                        fence_seq;
> > > > 
> > > > _______________________________________________
> > > > amd-gfx mailing list
> > > > amd-gfx@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                                 ` <1526473713.10230.1.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
@ 2018-05-16 12:32                                   ` Christian König
  2018-05-16 13:00                                     ` Lucas Stach
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-05-16 12:32 UTC (permalink / raw)
  To: Lucas Stach, Christian König, Andrey Grodzovsky, Zhou,
	David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 16.05.2018 at 14:28, Lucas Stach wrote:
> Am Mittwoch, den 16.05.2018, 14:08 +0200 schrieb Christian König:
>> Yes, exactly.
>>
>> For normal user space command submission we should have tons of
>> locks
>> guaranteeing that (e.g. just the VM lock should do).
>>
>> For kernel moves we have the mutex for the GTT windows which protects
>> it.
>>
>> There could be problems with the UVD/VCE queues cleaning up the handles
>> when an application crashes.
> FWIW, etnaviv is currently completely unlocked in this path, but I
> believe that this isn't an issue as the sched fence seq numbers are per
> entity. So to actually end up with reversed seqnos one context has to
> preempt itself to do another submit, while the current one hasn't
> returned from kernel space, which I believe is a fairly theoretical
> issue. Is my understanding correct?

Yes. The problem is that, with the right timing, this can be used to access
freed up memory.

If you then manage to place a page table in that freed up memory, taking
over the system is just a typing exercise.

Regards,
Christian.

>
> Regards,
> Lucas
>
>> Am 16.05.2018 um 13:47 schrieb Andrey Grodzovsky:
>>> So are you saying that you expect this to  be already in code for
>>> any
>>> usage of drm_sched_fence_create and drm_sched_entity_push_job ?
>>>
>>> lock()
>>>
>>> drm_sched_fence_create()
>>>
>>> ... (some code)
>>>
>>> drm_sched_entity_push_job()
>>>
>>> unlock()
>>>
>>>
>>> Andrey
>>>
>>> On 05/16/2018 07:23 AM, Christian König wrote:
>>>> drm_sched_fence_create() assigns a sequence number to the fence
>>>> it
>>>> creates.
>>>>
>>>> Now drm_sched_fence_create() is called by drm_sched_job_init()
>>>> to
>>>> initialize the jobs we want to push on the scheduler queue.
>>>>
>>>> When you now call drm_sched_entity_push_job() without a
>>>> protecting
>>>> lock it can happen that you push two (or even more) job with
>>>> reversed
>>>> sequence numbers.
>>>>
>>>> Since the sequence numbers are used to determine job completion
>>>> order
>>>> reversing them can seriously mess things up.
>>>>
>>>> So the spin lock should be superfluous, if it isn't we have a
>>>> much
>>>> larger bug we need to fix.
>>>>
>>>> Christian.
>>>>
>>>> Am 16.05.2018 um 13:15 schrieb Andrey Grodzovsky:
>>>>> Can you please elaborate more, maybe give an example - I don't
>>>>> understand yet the problematic scenario.
>>>>>
>>>>> Andrey
>>>>>
>>>>>
>>>>> On 05/16/2018 02:50 AM, Christian König wrote:
>>>>>> No, that spinlock is indeed incorrect.
>>>>>>
>>>>>> See, even when we protect the spsc queue with a spinlock, that
>>>>>> doesn't make it correct. It can happen that the jobs pushed to the
>>>>>> queue are reversed in their sequence order, and that can cause
>>>>>> severe problems in the memory management.
>>>>>>
>>>>>> Christian.
>>>>>>
>>>>>> On 16.05.2018 at 05:33, Grodzovsky, Andrey wrote:
>>>>>>> Yeah, that's what I am not sure about... It's lockless in the
>>>>>>> sense of single producer, single consumer, but not for multiple
>>>>>>> concurrent producers... So now I think this spinlock should stay
>>>>>>> there... It just looked useless to me at first sight...
>>>>>>>
>>>>>>> Andrey
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: Zhou, David(ChunMing)
>>>>>>> Sent: 15 May 2018 23:04:44
>>>>>>> To: Grodzovsky, Andrey; amd-gfx@lists.freedesktop.org;
>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>> Cc: Koenig, Christian
>>>>>>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete
>>>>>>> spinlock.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 2018-05-16 03:31, Andrey Grodzovsky wrote:
>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.c
>>>>>>>> om>
>>>>>>>> ---
>>>>>>>>     drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>>>>>>     include/drm/gpu_scheduler.h               | 1 -
>>>>>>>>     2 files changed, 5 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>>>> index 1f1dd70..2569a63 100644
>>>>>>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>>>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct
>>>>>>>> drm_gpu_scheduler *sched,
>>>>>>>>         entity->last_scheduled = NULL;
>>>>>>>>
>>>>>>>>         spin_lock_init(&entity->rq_lock);
>>>>>>>> -     spin_lock_init(&entity->queue_lock);
>>>>>>>>         spsc_queue_init(&entity->job_queue);
>>>>>>>>
>>>>>>>>         atomic_set(&entity->fence_seq, 0);
>>>>>>>> @@ -424,11 +423,8 @@ void
>>>>>>>> drm_sched_entity_push_job(struct
>>>>>>>> drm_sched_job *sched_job,
>>>>>>>>
>>>>>>>>         trace_drm_sched_job(sched_job, entity);
>>>>>>>>
>>>>>>>> -     spin_lock(&entity->queue_lock);
>>>>>>>>         first = spsc_queue_push(&entity->job_queue,
>>>>>>>> &sched_job->queue_node);
>>>>>>>>
>>>>>>>> -     spin_unlock(&entity->queue_lock);
>>>>>>> Is it safe to push to your spsc queue from several producers simultaneously?
>>>>>>>
>>>>>>> Regards,
>>>>>>> David Zhou
>>>>>>>> -
>>>>>>>>         /* first job wakes up scheduler */
>>>>>>>>         if (first) {
>>>>>>>>                 /* Add the entity to the run queue */
>>>>>>>> diff --git a/include/drm/gpu_scheduler.h
>>>>>>>> b/include/drm/gpu_scheduler.h
>>>>>>>> index 350a62c..683eb65 100644
>>>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>>>>>>         spinlock_t                      rq_lock;
>>>>>>>>         struct drm_gpu_scheduler        *sched;
>>>>>>>>
>>>>>>>> -     spinlock_t                      queue_lock;
>>>>>>>>         struct spsc_queue               job_queue;
>>>>>>>>
>>>>>>>>         atomic_t                        fence_seq;

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
  2018-05-16 12:32                                   ` Christian König
@ 2018-05-16 13:00                                     ` Lucas Stach
       [not found]                                       ` <1526475600.10230.3.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Lucas Stach @ 2018-05-16 13:00 UTC (permalink / raw)
  To: christian.koenig, Andrey Grodzovsky, Zhou, David(ChunMing),
	amd-gfx, dri-devel

On Wednesday, 16.05.2018, 14:32 +0200, Christian König wrote:
> On 16.05.2018 at 14:28, Lucas Stach wrote:
> > On Wednesday, 16.05.2018, 14:08 +0200, Christian König wrote:
> > > Yes, exactly.
> > > 
> > > For normal user space command submission we should have tons of
> > > locks
> > > guaranteeing that (e.g. just the VM lock should do).
> > > 
> > > For kernel moves we have the mutex for the GTT windows which
> > > protects
> > > it.
> > > 
> > > The could be problems with the UVD/VCE queues to cleanup the
> > > handles
> > > when an application crashes.
> > 
> > FWIW, etnaviv is currently completely unlocked in this path, but I
> > believe that this isn't an issue as the sched fence seq numbers are
> > per
> > entity. So to actually end up with reversed seqnos one context has
> > to
> > preempt itself to do another submit, while the current one hasn't
> > returned from kernel space, which I believe is a fairly theoretical
> > issue. Is my understanding correct?
> 
> Yes. The problem is with the right timing this can be used to access 
> freed up memory.
> 
> If you then manage to place a page table in that freed up memory
> taking 
> over the system is just a typing exercise.

Thanks. I believe we don't have this problem in etnaviv, as the memory
references are tied to the job and are only dropped in free_job, but I'll
re-check this when I've got some time.

Regards,
Lucas
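
[Illustrative sketch, not etnaviv code: the pattern described above, with hypothetical my_job/my_sched_ops names. Buffer references taken at submit time are only dropped from the scheduler's free_job callback, so fence completion order alone cannot free memory a still-queued job needs.]

#include <linux/slab.h>
#include <drm/drm_gem.h>
#include <drm/gpu_scheduler.h>

struct my_job {				/* hypothetical job wrapper */
	struct drm_sched_job base;
	struct drm_gem_object **bos;	/* references held for the job's lifetime */
	unsigned int nr_bos;
};

static void my_free_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);
	unsigned int i;

	/* Only now, when the scheduler is done with the job, drop the BO refs. */
	for (i = 0; i < job->nr_bos; i++)
		drm_gem_object_put_unlocked(job->bos[i]);

	kfree(job->bos);
	kfree(job);
}

static const struct drm_sched_backend_ops my_sched_ops = {
	/* .dependency, .run_job and .timedout_job omitted for brevity */
	.free_job = my_free_job,
};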
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                                       ` <1526475600.10230.3.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
@ 2018-05-16 13:10                                         ` Christian König
       [not found]                                           ` <96b4e712-5e80-0053-301e-3f89940be8f0-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 19+ messages in thread
From: Christian König @ 2018-05-16 13:10 UTC (permalink / raw)
  To: Lucas Stach, Andrey Grodzovsky, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On 16.05.2018 at 15:00, Lucas Stach wrote:
> On Wednesday, 16.05.2018, 14:32 +0200, Christian König wrote:
>> On 16.05.2018 at 14:28, Lucas Stach wrote:
>>> On Wednesday, 16.05.2018, 14:08 +0200, Christian König wrote:
>>>> Yes, exactly.
>>>>
>>>> For normal user space command submission we should have tons of
>>>> locks
>>>> guaranteeing that (e.g. just the VM lock should do).
>>>>
>>>> For kernel moves we have the mutex for the GTT windows which
>>>> protects
>>>> it.
>>>>
>>>> The could be problems with the UVD/VCE queues to cleanup the
>>>> handles
>>>> when an application crashes.
>>> FWIW, etnaviv is currently completely unlocked in this path, but I
>>> believe that this isn't an issue as the sched fence seq numbers are
>>> per
>>> entity. So to actually end up with reversed seqnos one context has
>>> to
>>> preempt itself to do another submit, while the current one hasn't
>>> returned from kernel space, which I believe is a fairly theoretical
>>> issue. Is my understanding correct?
>> Yes. The problem is with the right timing this can be used to access
>> freed up memory.
>>
>> If you then manage to place a page table in that freed up memory
>> taking
>> over the system is just a typing exercise.
> Thanks. I believe we don't have this problem in etnaviv, as memory
> referencing is tied to the job and will only be unreferenced on
> free_job, but I'll re-check this when I've got some time.

Well, that depends on how you use the sequence numbers.

If you use them for a job completion check somewhere, then you certainly
have a problem if job A gets seqno 1 and B gets seqno 2, but B completes
before A.

At a bare minimum that's still a bug we need to fix, because it confuses
functions like dma_fence_is_later() and dma_fence_later().

Christian.
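
[Illustrative sketch, runnable as plain user-space C: why reversed seqnos confuse the helpers named above. is_later() mirrors the wrap-around-safe comparison the kernel's __dma_fence_is_later() used for 32-bit seqnos at the time; the job scenario is hypothetical.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same idea as __dma_fence_is_later(): signed difference of the seqnos. */
static bool is_later(uint32_t f1, uint32_t f2)
{
	return (int32_t)(f1 - f2) > 0;
}

int main(void)
{
	uint32_t seq_a = 1;	/* job A initialised first ...      */
	uint32_t seq_b = 2;	/* ... job B initialised second ... */

	/*
	 * Without a common lock around init and push, B can be queued, run
	 * and signalled before A.  Code that sees B's fence signalled and
	 * assumes "a later seqno means the earlier ones are done" now treats
	 * A as finished although it has not even been queued yet.
	 */
	printf("is_later(B, A) = %d\n", is_later(seq_b, seq_a)); /* prints 1 */
	return 0;
}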

>
> Regards,
> Lucas

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
       [not found]                                           ` <96b4e712-5e80-0053-301e-3f89940be8f0-5C7GfCeVMHo@public.gmane.org>
@ 2018-05-16 13:16                                             ` Lucas Stach
  2018-05-16 14:25                                               ` Andrey Grodzovsky
  0 siblings, 1 reply; 19+ messages in thread
From: Lucas Stach @ 2018-05-16 13:16 UTC (permalink / raw)
  To: Christian König, Andrey Grodzovsky, Zhou, David(ChunMing),
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

On Wednesday, 16.05.2018, 15:10 +0200, Christian König wrote:
> On 16.05.2018 at 15:00, Lucas Stach wrote:
> > On Wednesday, 16.05.2018, 14:32 +0200, Christian König wrote:
> > > On 16.05.2018 at 14:28, Lucas Stach wrote:
> > > > On Wednesday, 16.05.2018, 14:08 +0200, Christian König wrote:
> > > > > Yes, exactly.
> > > > > 
> > > > > For normal user space command submission we should have tons of
> > > > > locks
> > > > > guaranteeing that (e.g. just the VM lock should do).
> > > > > 
> > > > > For kernel moves we have the mutex for the GTT windows which
> > > > > protects
> > > > > it.
> > > > > 
> > > > > The could be problems with the UVD/VCE queues to cleanup the
> > > > > handles
> > > > > when an application crashes.
> > > > 
> > > > FWIW, etnaviv is currently completely unlocked in this path, but I
> > > > believe that this isn't an issue as the sched fence seq numbers are
> > > > per
> > > > entity. So to actually end up with reversed seqnos one context has
> > > > to
> > > > preempt itself to do another submit, while the current one hasn't
> > > > returned from kernel space, which I believe is a fairly theoretical
> > > > issue. Is my understanding correct?
> > > 
> > > Yes. The problem is with the right timing this can be used to access
> > > freed up memory.
> > > 
> > > If you then manage to place a page table in that freed up memory
> > > taking
> > > over the system is just a typing exercise.
> > 
> > Thanks. I believe we don't have this problem in etnaviv, as memory
> > referencing is tied to the job and will only be unreferenced on
> > free_job, but I'll re-check this when I've got some time.
> 
> Well that depends on how you use the sequence numbers.
> 
> If you use them for job completion check somewhere then you certainly 
> have a problem if job A gets the 1 and B the 2, but B completes before A.

We don't do this anymore. All the etnaviv internals are driven by the
fence signal callbacks.
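
[Illustrative sketch, not etnaviv code: driving driver state from fence signal callbacks instead of seqno comparisons, with hypothetical my_* names.]

#include <linux/dma-fence.h>

struct my_completion {			/* hypothetical per-job bookkeeping */
	struct dma_fence_cb cb;
	/* ... state the driver advances once the job's fence signals ... */
};

static void my_fence_signalled(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct my_completion *c = container_of(cb, struct my_completion, cb);

	/* Advance driver state for this job; no seqno ordering is assumed. */
	(void)c;
}

static void my_track_completion(struct dma_fence *fence, struct my_completion *c)
{
	/* -ENOENT means the fence already signalled, so run the callback now. */
	if (dma_fence_add_callback(fence, &c->cb, my_fence_signalled) == -ENOENT)
		my_fence_signalled(fence, &c->cb);
}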

> At bare minimum that's still a bug we need to fix because it confuses 
> functions like dma_fence_is_later() and dma_fence_later().

Agreed. Also amending the documentation to state that
drm_sched_job_init() and drm_sched_entity_push_job() need to be called
under a common lock seems like a good idea.

Regards,
Lucas
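
[Illustrative sketch: one possible wording for the documentation amendment suggested above, against the 2018-era prototype; the comment that was eventually merged may read differently.]

/**
 * drm_sched_entity_push_job - Submit a job to the entity's job queue
 * @sched_job: job to submit
 * @entity: scheduler entity
 *
 * Note: To guarantee that the order of insertion into the queue matches the
 * order of the fence sequence numbers handed out by drm_sched_job_init(),
 * callers must serialise drm_sched_job_init() and
 * drm_sched_entity_push_job() under a common per-entity lock.
 */
void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
			       struct drm_sched_entity *entity);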
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
  2018-05-16 13:16                                             ` Lucas Stach
@ 2018-05-16 14:25                                               ` Andrey Grodzovsky
  0 siblings, 0 replies; 19+ messages in thread
From: Andrey Grodzovsky @ 2018-05-16 14:25 UTC (permalink / raw)
  To: Lucas Stach, Christian König, Zhou, David(ChunMing),
	amd-gfx, dri-devel



On 05/16/2018 09:16 AM, Lucas Stach wrote:
> On Wednesday, 16.05.2018, 15:10 +0200, Christian König wrote:
>> On 16.05.2018 at 15:00, Lucas Stach wrote:
>>> On Wednesday, 16.05.2018, 14:32 +0200, Christian König wrote:
>>>> On 16.05.2018 at 14:28, Lucas Stach wrote:
>>>>> On Wednesday, 16.05.2018, 14:08 +0200, Christian König wrote:
>>>>>> Yes, exactly.
>>>>>>
>>>>>> For normal user space command submission we should have tons of
>>>>>> locks
>>>>>> guaranteeing that (e.g. just the VM lock should do).
>>>>>>
>>>>>> For kernel moves we have the mutex for the GTT windows which
>>>>>> protects
>>>>>> it.
>>>>>>
>>>>>> The could be problems with the UVD/VCE queues to cleanup the
>>>>>> handles
>>>>>> when an application crashes.
>>>>> FWIW, etnaviv is currently completely unlocked in this path, but I
>>>>> believe that this isn't an issue as the sched fence seq numbers are
>>>>> per
>>>>> entity. So to actually end up with reversed seqnos one context has
>>>>> to
>>>>> preempt itself to do another submit, while the current one hasn't
>>>>> returned from kernel space, which I believe is a fairly theoretical
>>>>> issue. Is my understanding correct?
>>>> Yes. The problem is with the right timing this can be used to access
>>>> freed up memory.
>>>>
>>>> If you then manage to place a page table in that freed up memory
>>>> taking
>>>> over the system is just a typing exercise.
>>> Thanks. I believe we don't have this problem in etnaviv, as memory
>>> referencing is tied to the job and will only be unreferenced on
>>> free_job, but I'll re-check this when I've got some time.
>> Well that depends on how you use the sequence numbers.
>>
>> If you use them for job completion check somewhere then you certainly
>> have a problem if job A gets the 1 and B the 2, but B completes before A.
> We don't do this anymore. All the etnaviv internals are driven by the
> fence signal callbacks.
>
>> At bare minimum that's still a bug we need to fix because it confuses
>> functions like dma_fence_is_later() and dma_fence_later().
> Agreed. Also amending the documentation to state that
> drm_sched_job_init() and drm_sched_entity_push_job() need to be called
> under a common lock seems like a good idea.

I will respin the patch with added documentation.

Andrey

> Regards,
> Lucas

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2018-05-16 14:25 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-15 19:31 [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity related ops for KIQ ring Andrey Grodzovsky
     [not found] ` <1526412674-15913-1-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
2018-05-15 19:31   ` [PATCH 2/2] drm/scheduler: Remove obsolete spinlock Andrey Grodzovsky
     [not found]     ` <1526412674-15913-2-git-send-email-andrey.grodzovsky-5C7GfCeVMHo@public.gmane.org>
2018-05-15 19:38       ` Alex Deucher
     [not found]         ` <CADnq5_Pp=itdSOwq75petxvte1kUcLxmLymp_5zy7LVrgowyVw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-05-15 20:24           ` Andrey Grodzovsky
2018-05-16  3:04       ` zhoucm1
     [not found]         ` <dda4fb88-e228-e10b-46a5-27ba9888b78d-5C7GfCeVMHo@public.gmane.org>
2018-05-16  3:33           ` Grodzovsky, Andrey
     [not found]             ` <MWHPR12MB14530840CD7DB545DD286827EA920-Gy0DoCVfaSWZBIDmKHdw+wdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2018-05-16  6:50               ` Christian König
     [not found]                 ` <3a54ce0c-6821-9c94-a67f-2b1a3c74959d-5C7GfCeVMHo@public.gmane.org>
2018-05-16 11:15                   ` Andrey Grodzovsky
2018-05-16 11:23                     ` Christian König
     [not found]                       ` <0d936838-8a88-9bd7-d4ff-053cd73f41cc-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2018-05-16 11:47                         ` Andrey Grodzovsky
2018-05-16 12:08                           ` Christian König
     [not found]                             ` <0be3bb9a-004a-7390-469c-3f974ed44055-5C7GfCeVMHo@public.gmane.org>
2018-05-16 12:28                               ` Lucas Stach
     [not found]                                 ` <1526473713.10230.1.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
2018-05-16 12:32                                   ` Christian König
2018-05-16 13:00                                     ` Lucas Stach
     [not found]                                       ` <1526475600.10230.3.camel-bIcnvbaLZ9MEGnE8C9+IrQ@public.gmane.org>
2018-05-16 13:10                                         ` Christian König
     [not found]                                           ` <96b4e712-5e80-0053-301e-3f89940be8f0-5C7GfCeVMHo@public.gmane.org>
2018-05-16 13:16                                             ` Lucas Stach
2018-05-16 14:25                                               ` Andrey Grodzovsky
2018-05-15 19:42   ` [PATCH 1/2] drm/amdgpu: Skip drm_sched_entity related ops for KIQ ring Alex Deucher
     [not found]     ` <CADnq5_OtJDxZSrTe_peJtFTCm6T8QnBW-iri2UJv1yvOmjraNQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-05-16  7:05       ` Christian König
