* [PATCH] drm/panfrost: Handle resetting on timeout better
@ 2019-10-07 12:50 Steven Price
2019-10-07 13:09 ` Neil Armstrong
2019-10-08 7:48 ` Neil Armstrong
0 siblings, 2 replies; 5+ messages in thread
From: Steven Price @ 2019-10-07 12:50 UTC (permalink / raw)
To: Daniel Vetter, David Airlie, Rob Herring, Tomeu Vizoso
Cc: Alyssa Rosenzweig, Steven Price, dri-devel, linux-kernel, Neil Armstrong
Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
and on a timeout has to stop all the schedulers to safely perform a
reset. However more than one scheduler can trigger a timeout at the same
time. This race condition results in jobs being freed while they are
still in use.
When stopping other slots use cancel_delayed_work_sync() to ensure that
any timeout started for that slot has completed. Also use
mutex_trylock() to obtain reset_lock. This means that only one thread
attempts the reset, the other threads will simply complete without doing
anything (the first thread will wait for this in the call to
cancel_delayed_work_sync()).
While we're here and since the function is already dependent on
sched_job not being NULL, let's remove the unnecessary checks, along
with a commented out call to panfrost_core_dump() which has never
existed in mainline.
Signed-off-by: Steven Price <steven.price@arm.com>
---
This is a tidied-up version of the patch originally posted here:
http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index a58551668d9a..dcc9a7603685 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
job_read(pfdev, JS_TAIL_LO(js)),
sched_job);
- mutex_lock(&pfdev->reset_lock);
+ if (!mutex_trylock(&pfdev->reset_lock))
+ return;
- for (i = 0; i < NUM_JOB_SLOTS; i++)
- drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
+ for (i = 0; i < NUM_JOB_SLOTS; i++) {
+ struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
+
+ drm_sched_stop(sched, sched_job);
+ if (js != i)
+ /* Ensure any timeouts on other slots have finished */
+ cancel_delayed_work_sync(&sched->work_tdr);
+ }
- if (sched_job)
- drm_sched_increase_karma(sched_job);
+ drm_sched_increase_karma(sched_job);
spin_lock_irqsave(&pfdev->js->job_lock, flags);
for (i = 0; i < NUM_JOB_SLOTS; i++) {
@@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
}
spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
- /* panfrost_core_dump(pfdev); */
panfrost_devfreq_record_transition(pfdev, js);
panfrost_device_reset(pfdev);
--
2.20.1
* Re: [PATCH] drm/panfrost: Handle resetting on timeout better
2019-10-07 12:50 [PATCH] drm/panfrost: Handle resetting on timeout better Steven Price
@ 2019-10-07 13:09 ` Neil Armstrong
2019-10-07 16:14 ` Tomeu Vizoso
2019-10-08 7:48 ` Neil Armstrong
1 sibling, 1 reply; 5+ messages in thread
From: Neil Armstrong @ 2019-10-07 13:09 UTC (permalink / raw)
To: Steven Price, Daniel Vetter, David Airlie, Rob Herring, Tomeu Vizoso
Cc: Alyssa Rosenzweig, dri-devel, linux-kernel
Hi Steven,
On 07/10/2019 14:50, Steven Price wrote:
> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
> and on a timeout has to stop all the schedulers to safely perform a
> reset. However more than one scheduler can trigger a timeout at the same
> time. This race condition results in jobs being freed while they are
> still in use.
>
> When stopping other slots use cancel_delayed_work_sync() to ensure that
> any timeout started for that slot has completed. Also use
> mutex_trylock() to obtain reset_lock. This means that only one thread
> attempts the reset, the other threads will simply complete without doing
> anything (the first thread will wait for this in the call to
> cancel_delayed_work_sync()).
>
> While we're here and since the function is already dependent on
> sched_job not being NULL, let's remove the unnecessary checks, along
> with a commented out call to panfrost_core_dump() which has never
> existed in mainline.
>
A Fixes: tag would be welcome here so it can be backported to v5.3
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> This is a tidied-up version of the patch originally posted here:
> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>
> drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
> 1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index a58551668d9a..dcc9a7603685 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
> job_read(pfdev, JS_TAIL_LO(js)),
> sched_job);
>
> - mutex_lock(&pfdev->reset_lock);
> + if (!mutex_trylock(&pfdev->reset_lock))
> + return;
>
> - for (i = 0; i < NUM_JOB_SLOTS; i++)
> - drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
> + struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
> +
> + drm_sched_stop(sched, sched_job);
> + if (js != i)
> + /* Ensure any timeouts on other slots have finished */
> + cancel_delayed_work_sync(&sched->work_tdr);
> + }
>
> - if (sched_job)
> - drm_sched_increase_karma(sched_job);
> + drm_sched_increase_karma(sched_job);
Indeed looks cleaner.
>
> spin_lock_irqsave(&pfdev->js->job_lock, flags);
> for (i = 0; i < NUM_JOB_SLOTS; i++) {
> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
> }
> spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>
> - /* panfrost_core_dump(pfdev); */
This should be cleaned up in another patch!
>
> panfrost_devfreq_record_transition(pfdev, js);
> panfrost_device_reset(pfdev);
>
Thanks,
Testing it right now with the last change removed (it doesn't apply on v5.3 with it);
results in a few hours... or minutes!
Neil
* Re: [PATCH] drm/panfrost: Handle resetting on timeout better
2019-10-07 13:09 ` Neil Armstrong
@ 2019-10-07 16:14 ` Tomeu Vizoso
2019-10-09 9:42 ` Steven Price
0 siblings, 1 reply; 5+ messages in thread
From: Tomeu Vizoso @ 2019-10-07 16:14 UTC (permalink / raw)
To: Neil Armstrong, Steven Price, Daniel Vetter, David Airlie, Rob Herring
Cc: Alyssa Rosenzweig, dri-devel, linux-kernel
On 10/7/19 6:09 AM, Neil Armstrong wrote:
> Hi Steven,
>
> On 07/10/2019 14:50, Steven Price wrote:
>> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
>> and on a timeout has to stop all the schedulers to safely perform a
>> reset. However more than one scheduler can trigger a timeout at the same
>> time. This race condition results in jobs being freed while they are
>> still in use.
>>
>> When stopping other slots use cancel_delayed_work_sync() to ensure that
>> any timeout started for that slot has completed. Also use
>> mutex_trylock() to obtain reset_lock. This means that only one thread
>> attempts the reset, the other threads will simply complete without doing
>> anything (the first thread will wait for this in the call to
>> cancel_delayed_work_sync()).
>>
>> While we're here and since the function is already dependent on
>> sched_job not being NULL, let's remove the unnecessary checks, along
>> with a commented out call to panfrost_core_dump() which has never
>> existed in mainline.
>>
>
> A Fixes: tag would be welcome here so it can be backported to v5.3
>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> This is a tidied-up version of the patch originally posted here:
>> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>>
>> drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
>> 1 file changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index a58551668d9a..dcc9a7603685 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>> job_read(pfdev, JS_TAIL_LO(js)),
>> sched_job);
>>
>> - mutex_lock(&pfdev->reset_lock);
>> + if (!mutex_trylock(&pfdev->reset_lock))
>> + return;
>>
>> - for (i = 0; i < NUM_JOB_SLOTS; i++)
>> - drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
>> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
>> + struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
>> +
>> + drm_sched_stop(sched, sched_job);
>> + if (js != i)
>> + /* Ensure any timeouts on other slots have finished */
>> + cancel_delayed_work_sync(&sched->work_tdr);
>> + }
>>
>> - if (sched_job)
>> - drm_sched_increase_karma(sched_job);
>> + drm_sched_increase_karma(sched_job);
>
> Indeed looks cleaner.
>
>>
>> spin_lock_irqsave(&pfdev->js->job_lock, flags);
>> for (i = 0; i < NUM_JOB_SLOTS; i++) {
>> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>> }
>> spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>>
>> - /* panfrost_core_dump(pfdev); */
>
> This should be cleaned up in another patch!
Seems to me that this should be some kind of TODO, see
etnaviv_core_dump() for the kind of things we could be doing.
Maybe we can delete this line and mention this in the TODO file?
Cheers,
Tomeu
>>
>> panfrost_devfreq_record_transition(pfdev, js);
>> panfrost_device_reset(pfdev);
>>
>
> Thanks,
> Testing it right now with the last change removed (it doesn't apply on v5.3 with it);
> results in a few hours... or minutes!
>
>
> Neil
>
* Re: [PATCH] drm/panfrost: Handle resetting on timeout better
2019-10-07 12:50 [PATCH] drm/panfrost: Handle resetting on timeout better Steven Price
2019-10-07 13:09 ` Neil Armstrong
@ 2019-10-08 7:48 ` Neil Armstrong
1 sibling, 0 replies; 5+ messages in thread
From: Neil Armstrong @ 2019-10-08 7:48 UTC (permalink / raw)
To: Steven Price, Daniel Vetter, David Airlie, Rob Herring, Tomeu Vizoso
Cc: Alyssa Rosenzweig, dri-devel, linux-kernel,
open list:ARM/Amlogic Meson...
On 07/10/2019 14:50, Steven Price wrote:
> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
> and on a timeout has to stop all the schedulers to safely perform a
> reset. However more than one scheduler can trigger a timeout at the same
> time. This race condition results in jobs being freed while they are
> still in use.
>
> When stopping other slots use cancel_delayed_work_sync() to ensure that
> any timeout started for that slot has completed. Also use
> mutex_trylock() to obtain reset_lock. This means that only one thread
> attempts the reset, the other threads will simply complete without doing
> anything (the first thread will wait for this in the call to
> cancel_delayed_work_sync()).
>
> While we're here and since the function is already dependent on
> sched_job not being NULL, let's remove the unnecessary checks, along
> with a commented out call to panfrost_core_dump() which has never
> existed in mainline.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> This is a tidied-up version of the patch originally posted here:
> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>
> drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
> 1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index a58551668d9a..dcc9a7603685 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
> job_read(pfdev, JS_TAIL_LO(js)),
> sched_job);
>
> - mutex_lock(&pfdev->reset_lock);
> + if (!mutex_trylock(&pfdev->reset_lock))
> + return;
>
> - for (i = 0; i < NUM_JOB_SLOTS; i++)
> - drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
> + struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
> +
> + drm_sched_stop(sched, sched_job);
> + if (js != i)
> + /* Ensure any timeouts on other slots have finished */
> + cancel_delayed_work_sync(&sched->work_tdr);
> + }
>
> - if (sched_job)
> - drm_sched_increase_karma(sched_job);
> + drm_sched_increase_karma(sched_job);
>
> spin_lock_irqsave(&pfdev->js->job_lock, flags);
> for (i = 0; i < NUM_JOB_SLOTS; i++) {
> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
> }
> spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>
> - /* panfrost_core_dump(pfdev); */
>
> panfrost_devfreq_record_transition(pfdev, js);
> panfrost_device_reset(pfdev);
>
It successfully ran 10 dEQP tests without crashing the Amlogic S912 with Mali T820:
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
* Re: [PATCH] drm/panfrost: Handle resetting on timeout better
2019-10-07 16:14 ` Tomeu Vizoso
@ 2019-10-09 9:42 ` Steven Price
0 siblings, 0 replies; 5+ messages in thread
From: Steven Price @ 2019-10-09 9:42 UTC (permalink / raw)
To: Tomeu Vizoso, Neil Armstrong, Daniel Vetter, David Airlie, Rob Herring
Cc: Alyssa Rosenzweig, dri-devel, linux-kernel
On 07/10/2019 17:14, Tomeu Vizoso wrote:
> On 10/7/19 6:09 AM, Neil Armstrong wrote:
>> Hi Steven,
>>
>> On 07/10/2019 14:50, Steven Price wrote:
>>> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
>>> and on a timeout has to stop all the schedulers to safely perform a
>>> reset. However more than one scheduler can trigger a timeout at the same
>>> time. This race condition results in jobs being freed while they are
>>> still in use.
>>>
>>> When stopping other slots use cancel_delayed_work_sync() to ensure that
>>> any timeout started for that slot has completed. Also use
>>> mutex_trylock() to obtain reset_lock. This means that only one thread
>>> attempts the reset, the other threads will simply complete without doing
>>> anything (the first thread will wait for this in the call to
>>> cancel_delayed_work_sync()).
>>>
>>> While we're here and since the function is already dependent on
>>> sched_job not being NULL, let's remove the unnecessary checks, along
>>> with a commented out call to panfrost_core_dump() which has never
>>> existed in mainline.
>>>
>>
>> A Fixes: tag would be welcome here so it can be backported to v5.3
>>
>>> Signed-off-by: Steven Price <steven.price@arm.com>
>>> ---
>>> This is a tidied-up version of the patch originally posted here:
>>> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>>>
>>> drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
>>> 1 file changed, 11 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index a58551668d9a..dcc9a7603685 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct
>>> drm_sched_job *sched_job)
>>> job_read(pfdev, JS_TAIL_LO(js)),
>>> sched_job);
>>> - mutex_lock(&pfdev->reset_lock);
>>> + if (!mutex_trylock(&pfdev->reset_lock))
>>> + return;
>>> - for (i = 0; i < NUM_JOB_SLOTS; i++)
>>> - drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
>>> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> + struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
>>> +
>>> + drm_sched_stop(sched, sched_job);
>>> + if (js != i)
>>> + /* Ensure any timeouts on other slots have finished */
>>> + cancel_delayed_work_sync(&sched->work_tdr);
>>> + }
>>> - if (sched_job)
>>> - drm_sched_increase_karma(sched_job);
>>> + drm_sched_increase_karma(sched_job);
>>
>> Indeed looks cleaner.
>>
>>> spin_lock_irqsave(&pfdev->js->job_lock, flags);
>>> for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct
>>> drm_sched_job *sched_job)
>>> }
>>> spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>>> - /* panfrost_core_dump(pfdev); */
>>
>> This should be cleaned up in another patch!
>
> Seems to me that this should be some kind of TODO, see
> etnaviv_core_dump() for the kind of things we could be doing.
>
> Maybe we can delete this line and mention this in the TODO file?
Fair enough - I'll split this into a separate patch and add an entry to
the TODO file. kbase has a mechanism to "dump on job fault" [1],[2] so
we could do something similar.
Steve
[1]
https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/backend/gpu/mali_kbase_debug_job_fault_backend.c
[2]
https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/mali_kbase_debug_job_fault.c
> Cheers,
>
> Tomeu
>
>>> panfrost_devfreq_record_transition(pfdev, js);
>>> panfrost_device_reset(pfdev);
>>>
>>
>> Thanks,
>> Testing it right now with the last change removed (it doesn't apply on
>> v5.3 with it);
>> results in a few hours... or minutes!
>>
>>
>> Neil
>>