* [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-16 17:07 Andrey Grodzovsky
From: Andrey Grodzovsky @ 2021-02-16 17:07 UTC
  To: dri-devel, amd-gfx; +Cc: ckoenig.leichtzumerken

Problem: If the scheduler is already stopped by the time the sched_entity
is released and the entity's job_queue is not empty, I encountered
a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
never becomes true.
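
For reference, the wait that hangs looks roughly like this (a paraphrase
of drm_sched_entity_flush and drm_sched_entity_is_idle around the time of
this patch; the exact code may differ between kernel versions):

    /* drm_sched_entity_flush() blocks until the entity drains: */
    ret = wait_event_timeout(sched->job_scheduled,
                             drm_sched_entity_is_idle(entity),
                             timeout);

    /* drm_sched_entity_is_idle() returns true only once the entity is
     * detached from its rq or its job_queue is empty: */
    static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
    {
            rmb(); /* for list_empty to work without a lock */

            if (list_empty(&entity->list) ||
                spsc_queue_count(&entity->job_queue) == 0)
                    return true;

            return false;
    }

With the scheduler thread already stopped, nothing drains the job_queue
and nothing signals job_scheduled, so the wait never completes.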

Fix: In drm_sched_fini, detach all sched_entities from the
scheduler's run queues. This makes drm_sched_entity_is_idle return true.
Also wake up all processes stuck flushing a sched_entity, since the
scheduler's main thread, which normally wakes them, is stopped by now.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 908b0b5..11abf5d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
  */
 void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
+	int i;
+	struct drm_sched_entity *s_entity;
 	if (sched->thread)
 		kthread_stop(sched->thread);
 
+	/* Detach all sched_entities from this scheduler once it's stopped */
+	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+		struct drm_sched_rq *rq = &sched->sched_rq[i];
+
+		if (!rq)
+			continue;
+
+		/* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
+		spin_lock(&rq->lock);
+		while ((s_entity = list_first_entry_or_null(&rq->entities,
+							    struct drm_sched_entity,
+							    list))) {
+			spin_unlock(&rq->lock);
+			drm_sched_rq_remove_entity(rq, s_entity);
+
+			/* Mark as stopped to reject adding to any new rq */
+			spin_lock(&s_entity->rq_lock);
+			s_entity->stopped = true;
+			spin_unlock(&s_entity->rq_lock);
+
+			spin_lock(&rq->lock);
+		}
+		spin_unlock(&rq->lock);
+
+	}
+
+	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
+	wake_up_all(&sched->job_scheduled);
+
 	/* Confirm no work left behind accessing device structures */
 	cancel_delayed_work_sync(&sched->work_tdr);
 
-- 
2.7.4


* Re: [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-17 15:53 ` Andrey Grodzovsky
From: Andrey Grodzovsky @ 2021-02-17 15:53 UTC
  To: dri-devel, amd-gfx; +Cc: ckoenig.leichtzumerken

Ping

Andrey

On 2/16/21 12:07 PM, Andrey Grodzovsky wrote:
> Problem: If the scheduler is already stopped by the time the sched_entity
> is released and the entity's job_queue is not empty, I encountered
> a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
> never becomes true.
>
> Fix: In drm_sched_fini, detach all sched_entities from the
> scheduler's run queues. This makes drm_sched_entity_is_idle return true.
> Also wake up all processes stuck flushing a sched_entity, since the
> scheduler's main thread, which normally wakes them, is stopped by now.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
>   1 file changed, 31 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 908b0b5..11abf5d 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
>    */
>   void drm_sched_fini(struct drm_gpu_scheduler *sched)
>   {
> +	int i;
> +	struct drm_sched_entity *s_entity;
>   	if (sched->thread)
>   		kthread_stop(sched->thread);
>   
> +	/* Detach all sched_entities from this scheduler once it's stopped */
> +	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> +		struct drm_sched_rq *rq = &sched->sched_rq[i];
> +
> +		if (!rq)
> +			continue;
> +
> +		/* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
> +		spin_lock(&rq->lock);
> +		while ((s_entity = list_first_entry_or_null(&rq->entities,
> +							    struct drm_sched_entity,
> +							    list))) {
> +			spin_unlock(&rq->lock);
> +			drm_sched_rq_remove_entity(rq, s_entity);
> +
> +			/* Mark as stopped to reject adding to any new rq */
> +			spin_lock(&s_entity->rq_lock);
> +			s_entity->stopped = true;
> +			spin_unlock(&s_entity->rq_lock);
> +
> +			spin_lock(&rq->lock);
> +		}
> +		spin_unlock(&rq->lock);
> +
> +	}
> +
> +	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
> +	wake_up_all(&sched->job_scheduled);
> +
>   	/* Confirm no work left behind accessing device structures */
>   	cancel_delayed_work_sync(&sched->work_tdr);
>   

* Re: [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-17 21:32 ` Christian König
From: Christian König @ 2021-02-17 21:32 UTC
  To: Andrey Grodzovsky, dri-devel, amd-gfx

On 16.02.21 at 18:07, Andrey Grodzovsky wrote:
> Problem: If the scheduler is already stopped by the time the sched_entity
> is released and the entity's job_queue is not empty, I encountered
> a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
> never becomes true.
>
> Fix: In drm_sched_fini, detach all sched_entities from the
> scheduler's run queues. This makes drm_sched_entity_is_idle return true.
> Also wake up all processes stuck flushing a sched_entity, since the
> scheduler's main thread, which normally wakes them, is stopped by now.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
>   1 file changed, 31 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 908b0b5..11abf5d 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
>    */
>   void drm_sched_fini(struct drm_gpu_scheduler *sched)
>   {
> +	int i;
> +	struct drm_sched_entity *s_entity;
>   	if (sched->thread)
>   		kthread_stop(sched->thread);
>   
> +	/* Detach all sched_entities from this scheduler once it's stopped */
> +	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> +		struct drm_sched_rq *rq = &sched->sched_rq[i];
> +
> +		if (!rq)
> +			continue;
> +
> +		/* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
> +		spin_lock(&rq->lock);
> +		while ((s_entity = list_first_entry_or_null(&rq->entities,
> +							    struct drm_sched_entity,
> +							    list))) {
> +			spin_unlock(&rq->lock);
> +			drm_sched_rq_remove_entity(rq, s_entity);
> +
> +			/* Mark as stopped to reject adding to any new rq */
> +			spin_lock(&s_entity->rq_lock);
> +			s_entity->stopped = true;

Why not mark it as stopped and then remove it?

Regards,
Christian.

> +			spin_unlock(&s_entity->rq_lock);
> +
> +			spin_lock(&rq->lock);
> +		}
> +		spin_unlock(&rq->lock);
> +
> +	}
> +
> +	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
> +	wake_up_all(&sched->job_scheduled);
> +
>   	/* Confirm no work left behind accessing device structures */
>   	cancel_delayed_work_sync(&sched->work_tdr);
>   


* Re: [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-17 21:36   ` Andrey Grodzovsky
From: Andrey Grodzovsky @ 2021-02-17 21:36 UTC
  To: Christian König, dri-devel, amd-gfx


On 2/17/21 4:32 PM, Christian König wrote:
> On 16.02.21 at 18:07, Andrey Grodzovsky wrote:
>> Problem: If the scheduler is already stopped by the time the sched_entity
>> is released and the entity's job_queue is not empty, I encountered
>> a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
>> never becomes true.
>>
>> Fix: In drm_sched_fini, detach all sched_entities from the
>> scheduler's run queues. This makes drm_sched_entity_is_idle return true.
>> Also wake up all processes stuck flushing a sched_entity, since the
>> scheduler's main thread, which normally wakes them, is stopped by now.
>>
>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>> ---
>>   drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
>>   1 file changed, 31 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 908b0b5..11abf5d 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
>>    */
>>   void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>   {
>> +    int i;
>> +    struct drm_sched_entity *s_entity;
>>       if (sched->thread)
>>           kthread_stop(sched->thread);
>>   +    /* Detach all sched_entities from this scheduler once it's stopped */
>> +    for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>> +        struct drm_sched_rq *rq = &sched->sched_rq[i];
>> +
>> +        if (!rq)
>> +            continue;
>> +
>> +        /* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
>> +        spin_lock(&rq->lock);
>> +        while ((s_entity = list_first_entry_or_null(&rq->entities,
>> +                                struct drm_sched_entity,
>> +                                list))) {
>> +            spin_unlock(&rq->lock);
>> +            drm_sched_rq_remove_entity(rq, s_entity);
>> +
>> +            /* Mark as stopped to reject adding to any new rq */
>> +            spin_lock(&s_entity->rq_lock);
>> +            s_entity->stopped = true;
>
> Why not mark it as stopped and then remove it?
>
> Regards,
> Christian.


You mean just reverse the order of operations here, to prevent a race where
someone adds it back to the rq before it is marked as stopped?

Andrey
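
For context, the reinsertion path in question looks roughly like this
(paraphrasing drm_sched_entity_push_job() from around this time; details
may differ):

    /* Pushing the first job onto an empty entity re-adds the entity
     * to its rq, unless the entity is already marked as stopped: */
    first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
    if (first) {
            spin_lock(&entity->rq_lock);
            if (entity->stopped) {
                    spin_unlock(&entity->rq_lock);
                    DRM_ERROR("Trying to push to a killed entity\n");
                    return;
            }
            drm_sched_rq_add_entity(entity->rq, entity);
            spin_unlock(&entity->rq_lock);
            drm_sched_wakeup(entity->rq->sched);
    }

So between drm_sched_rq_remove_entity() and setting stopped there is a
window where such a push could slip the entity back onto the rq.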


>
>> +            spin_unlock(&s_entity->rq_lock);
>> +
>> +            spin_lock(&rq->lock);
>> +        }
>> +        spin_unlock(&rq->lock);
>> +
>> +    }
>> +
>> +    /* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>> +    wake_up_all(&sched->job_scheduled);
>> +
>>       /* Confirm no work left behind accessing device structures */
>>       cancel_delayed_work_sync(&sched->work_tdr);
>

* Re: [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-17 21:37     ` Christian König
From: Christian König @ 2021-02-17 21:37 UTC
  To: Andrey Grodzovsky, dri-devel, amd-gfx



On 17.02.21 at 22:36, Andrey Grodzovsky wrote:
>
> On 2/17/21 4:32 PM, Christian König wrote:
>> On 16.02.21 at 18:07, Andrey Grodzovsky wrote:
>>> Problem: If the scheduler is already stopped by the time the sched_entity
>>> is released and the entity's job_queue is not empty, I encountered
>>> a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
>>> never becomes true.
>>>
>>> Fix: In drm_sched_fini, detach all sched_entities from the
>>> scheduler's run queues. This makes drm_sched_entity_is_idle return true.
>>> Also wake up all processes stuck flushing a sched_entity, since the
>>> scheduler's main thread, which normally wakes them, is stopped by now.
>>>
>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
>>>   1 file changed, 31 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 908b0b5..11abf5d 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
>>>    */
>>>   void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>>   {
>>> +    int i;
>>> +    struct drm_sched_entity *s_entity;
>>>       if (sched->thread)
>>>           kthread_stop(sched->thread);
>>>   +    /* Detach all sched_entities from this scheduler once it's stopped */
>>> +    for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>> +        struct drm_sched_rq *rq = &sched->sched_rq[i];
>>> +
>>> +        if (!rq)
>>> +            continue;
>>> +
>>> +        /* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
>>> +        spin_lock(&rq->lock);
>>> +        while ((s_entity = list_first_entry_or_null(&rq->entities,
>>> +                                struct drm_sched_entity,
>>> +                                list))) {
>>> +            spin_unlock(&rq->lock);
>>> +            drm_sched_rq_remove_entity(rq, s_entity);
>>> +
>>> +            /* Mark as stopped to reject adding to any new rq */
>>> +            spin_lock(&s_entity->rq_lock);
>>> +            s_entity->stopped = true;
>>
>> Why not mark it as stopped and then remove it?
>>
>> Regards,
>> Christian.
>
>
> You mean just reverse the order of operations here, to prevent a race
> where someone adds it back to the rq before it is marked as stopped?

Exactly that, yeah.

Christian.
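
Something like this, just as a sketch (the exact form is up to the next
revision of the patch):

    /* Mark the entity as stopped *before* detaching it, so that a
     * concurrent drm_sched_entity_push_job() cannot re-add it in
     * between: */
    spin_lock(&s_entity->rq_lock);
    s_entity->stopped = true;
    spin_unlock(&s_entity->rq_lock);

    drm_sched_rq_remove_entity(rq, s_entity);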

>
> Andrey
>
>
>>
>>> +            spin_unlock(&s_entity->rq_lock);
>>> +
>>> +            spin_lock(&rq->lock);
>>> +        }
>>> +        spin_unlock(&rq->lock);
>>> +
>>> +    }
>>> +
>>> +    /* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>>> +    wake_up_all(&sched->job_scheduled);
>>> +
>>>       /* Confirm no work left behind accessing device structures */
>>>       cancel_delayed_work_sync(&sched->work_tdr);
>>


* Re: [PATCH] drm/scheduler: Fix hang when sched_entity released
@ 2021-02-17 21:37       ` Andrey Grodzovsky
From: Andrey Grodzovsky @ 2021-02-17 21:37 UTC
  To: Christian König, dri-devel, amd-gfx

Will do.

Andrey

On 2/17/21 4:37 PM, Christian König wrote:
>
>
> On 17.02.21 at 22:36, Andrey Grodzovsky wrote:
>>
>> On 2/17/21 4:32 PM, Christian König wrote:
>>> On 16.02.21 at 18:07, Andrey Grodzovsky wrote:
>>>> Problem: If the scheduler is already stopped by the time the sched_entity
>>>> is released and the entity's job_queue is not empty, I encountered
>>>> a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
>>>> never becomes true.
>>>>
>>>> Fix: In drm_sched_fini, detach all sched_entities from the
>>>> scheduler's run queues. This makes drm_sched_entity_is_idle return true.
>>>> Also wake up all processes stuck flushing a sched_entity, since the
>>>> scheduler's main thread, which normally wakes them, is stopped by now.
>>>>
>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
>>>>   1 file changed, 31 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index 908b0b5..11abf5d 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
>>>>    */
>>>>   void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>>>   {
>>>> +    int i;
>>>> +    struct drm_sched_entity *s_entity;
>>>>       if (sched->thread)
>>>>           kthread_stop(sched->thread);
>>>>   +    /* Detach all sched_entities from this scheduler once it's stopped */
>>>> +    for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>>> +        struct drm_sched_rq *rq = &sched->sched_rq[i];
>>>> +
>>>> +        if (!rq)
>>>> +            continue;
>>>> +
>>>> +        /* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
>>>> +        spin_lock(&rq->lock);
>>>> +        while ((s_entity = list_first_entry_or_null(&rq->entities,
>>>> +                                struct drm_sched_entity,
>>>> +                                list))) {
>>>> +            spin_unlock(&rq->lock);
>>>> +            drm_sched_rq_remove_entity(rq, s_entity);
>>>> +
>>>> +            /* Mark as stopped to reject adding to any new rq */
>>>> +            spin_lock(&s_entity->rq_lock);
>>>> +            s_entity->stopped = true;
>>>
>>> Why not mark it as stopped and then remove it?
>>>
>>> Regards,
>>> Christian.
>>
>>
>> You mean just reverse the order of operations here, to prevent a race where
>> someone adds it back to the rq before it is marked as stopped?
>
> Exactly that, yeah.
>
> Christian.
>
>>
>> Andrey
>>
>>
>>>
>>>> +            spin_unlock(&s_entity->rq_lock);
>>>> +
>>>> +            spin_lock(&rq->lock);
>>>> +        }
>>>> +        spin_unlock(&rq->lock);
>>>> +
>>>> +    }
>>>> +
>>>> +    /* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>>>> +    wake_up_all(&sched->job_scheduled);
>>>> +
>>>>       /* Confirm no work left behind accessing device structures */
>>>>       cancel_delayed_work_sync(&sched->work_tdr);
>>>
>
