From: "Christian König" <christian.koenig@amd.com>
To: Alex Deucher <alexdeucher@gmail.com>
Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>,
amd-gfx list <amd-gfx@lists.freedesktop.org>,
steven.price@arm.com, Emily Deng <Emily.Deng@amd.com>,
Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
Lucas Stach <l.stach@pengutronix.de>
Subject: Re: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.
Date: Thu, 6 Feb 2020 15:51:31 +0100
Message-ID: <61128c11-9e65-bc21-6306-ea4efea18b76@amd.com>
In-Reply-To: <CADnq5_Np=OFgqAb4TPRz5yqx1YZSwWybS=F6R_r6r01QRrzADA@mail.gmail.com>

On 2020-02-06 15:49, Alex Deucher wrote:
> On Thu, Feb 6, 2020 at 6:50 AM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
>> On 2020-02-06 12:10, Lucas Stach wrote:
>>> Hi all,
>>>
>>> On Wed, 2020-02-05 at 19:24 +0100, Lucas Stach wrote:
>>>> Hi Andrey,
>>>>
>>>> This commit breaks all drivers that may bail out of the timeout
>>>> processing because they wish to extend the timeout (etnaviv, v3d).
>>>>
>>>> Those drivers currently just return from the timeout handler before
>>>> calling drm_sched_stop(), which means that with this commit applied we
>>>> remove the first job from the ring_mirror_list but never put it back.
>>>> This leads to jobs getting lost from the ring mirror, which then
>>>> causes quite a bit of fallout, such as unsignaled fences.
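>>>>
>>>> To illustrate, a minimal sketch of that bail-out pattern (hypothetical
>>>> driver code with made-up helper names, not the actual etnaviv or v3d
>>>> handlers):
>>>>
>>>> static void example_timedout_job(struct drm_sched_job *sched_job)
>>>> {
>>>>         struct example_gpu *gpu = to_example_gpu(sched_job->sched);
>>>>
>>>>         /*
>>>>          * The GPU made progress since the timer was armed, so this was
>>>>          * not a real hang. Bail out to effectively extend the timeout.
>>>>          * Nothing on this path puts the job back on the
>>>>          * ring_mirror_list.
>>>>          */
>>>>         if (example_gpu_made_progress(gpu))
>>>>                 return;
>>>>
>>>>         /*
>>>>          * Only the real-hang path calls drm_sched_stop(), which is the
>>>>          * one place the patch below reinserts the job that
>>>>          * drm_sched_job_timedout() removed.
>>>>          */
>>>>         drm_sched_stop(sched_job->sched, sched_job);
>>>>         /* ... driver reset, drm_sched_resubmit_jobs(), drm_sched_start() ... */
>>>> }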
>>>>
>>>> Not sure yet what to do about it; we can either add a function to put
>>>> the job back on the ring_mirror if the driver wants to extend the
>>>> timeout (roughly sketched below), or we can look for another way to
>>>> stop drm_sched_cleanup_jobs from freeing jobs that are currently in
>>>> timeout processing.
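>>>>
>>>> A rough sketch of that first option (a hypothetical helper; nothing
>>>> like this exists in the scheduler today):
>>>>
>>>> void drm_sched_reinsert_bad_job(struct drm_gpu_scheduler *sched,
>>>>                                 struct drm_sched_job *bad)
>>>> {
>>>>         unsigned long flags;
>>>>
>>>>         spin_lock_irqsave(&sched->job_list_lock, flags);
>>>>         /*
>>>>          * Put the job back at the head, where drm_sched_job_timedout()
>>>>          * took it from, so cleanup and a later timeout can find it
>>>>          * again.
>>>>          */
>>>>         list_add(&bad->node, &sched->ring_mirror_list);
>>>>         spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>>> }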
>>> So after thinking about this a bit more, my opinion is that we need to
>>> revert this change for now and go back to the drawing board for the
>>> scheduler timeout handling.
>>>
>>> Right now this starts to feel like a big midlayer mistake, with all the
>>> very intricate intertwining between the drivers and the scheduler. The
>>> rules on when it's safe to manipulate the ring mirror and when
>>> completed jobs are signaled and freed are not really well specified.
>>> The fact that we need to mutate state in order to get rid of races,
>>> instead of having a single big "timeout processing is owner of the
>>> scheduler state for now" rule, is a big fat warning sign IMHO.
>> Yes, that strongly feels like a hack to me as well. But I didn't have
>> time, and still don't, to take a closer look and suggest something better.
>>
> In that case, can someone send me a revert?
Well, a revert would break our driver.

The real solution is that somebody needs to sit down, gather ALL the
requirements, and then come up with a solution which is clean and works
for everyone.
Christian.
>
> Alex
>
>
>> Christian.
>>
>>> It took me far longer than I'd like to admit to understand the failure
>>> mode with fences not getting signaled after a GPU hang. The back and
>>> forth between scheduler and driver code makes things really hard to
>>> follow.
>>>
>>> Regards,
>>> Lucas
>>>
>>>> Regards,
>>>> Lucas
>>>>
>>>> On Mon, 2019-11-25 at 15:51 -0500, Andrey Grodzovsky wrote:
>>>>> Problem:
>>>>> Due to a race between drm_sched_cleanup_jobs in the sched thread and
>>>>> drm_sched_job_timedout in the timeout work there is a possibility that
>>>>> the bad job was already freed while still being accessed from the
>>>>> timeout thread.
>>>>>
>>>>> Fix:
>>>>> Instead of just peeking at the bad job in the mirror list,
>>>>> remove it from the list under lock and then put it back later, when
>>>>> we are guaranteed that no race with the main sched thread is
>>>>> possible, which is after the thread is parked.
>>>>>
>>>>> v2: Lock around processing ring_mirror_list in drm_sched_cleanup_jobs.
>>>>>
>>>>> v3: Rebase on top of drm-misc-next. v2 is not needed anymore as
>>>>> drm_sched_get_cleanup_job already has a lock there.
>>>>>
>>>>> v4: Fix comments to reflect the latest code in drm-misc.
>>>>>
>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>> Tested-by: Emily Deng <Emily.Deng@amd.com>
>>>>> ---
>>>>> drivers/gpu/drm/scheduler/sched_main.c | 27 +++++++++++++++++++++++++++
>>>>> 1 file changed, 27 insertions(+)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> index 6774955..1bf9c40 100644
>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> @@ -284,10 +284,21 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>> unsigned long flags;
>>>>>
>>>>> sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>>>>> +
>>>>> + /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
>>>>> + spin_lock_irqsave(&sched->job_list_lock, flags);
>>>>> job = list_first_entry_or_null(&sched->ring_mirror_list,
>>>>> struct drm_sched_job, node);
>>>>>
>>>>> if (job) {
>>>>> + /*
>>>>> + * Remove the bad job so it cannot be freed by concurrent
>>>>> + * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
>>>>> + * is parked at which point it's safe.
>>>>> + */
>>>>> + list_del_init(&job->node);
>>>>> + spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>>>> +
>>>>> job->sched->ops->timedout_job(job);
>>>>>
>>>>> /*
>>>>> @@ -298,6 +309,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>> job->sched->ops->free_job(job);
>>>>> sched->free_guilty = false;
>>>>> }
>>>>> + } else {
>>>>> + spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>>>> }
>>>>>
>>>>> spin_lock_irqsave(&sched->job_list_lock, flags);
>>>>> @@ -370,6 +383,20 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
>>>>> kthread_park(sched->thread);
>>>>>
>>>>> /*
>>>>> + * Reinsert back the bad job here - now it's safe as
>>>>> + * drm_sched_get_cleanup_job cannot race against us and release the
>>>>> + * bad job at this point - we parked (waited for) any in progress
>>>>> + * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
>>>>> + * now until the scheduler thread is unparked.
>>>>> + */
>>>>> + if (bad && bad->sched == sched)
>>>>> + /*
>>>>> + * Add at the head of the queue to reflect it was the earliest
>>>>> + * job extracted.
>>>>> + */
>>>>> + list_add(&bad->node, &sched->ring_mirror_list);
>>>>> +
>>>>> + /*
>>>>> * Iterate the job list from later to earlier one and either deactive
>>>>> * their HW callbacks or remove them from mirror list if they already
>>>>> * signaled.
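
For reference, a minimal sketch of the full-recovery timedout_job callback
the patch above assumes (the hardware-reset helper is made up;
drm_sched_stop(), drm_sched_resubmit_jobs() and drm_sched_start() are the
real scheduler entry points):

static void example_timedout_job(struct drm_sched_job *sched_job)
{
        struct drm_gpu_scheduler *sched = sched_job->sched;

        /*
         * Parks the scheduler thread and, with the patch above, reinserts
         * the bad job that drm_sched_job_timedout() removed.
         */
        drm_sched_stop(sched, sched_job);

        /* Driver-specific hardware reset. */
        example_hw_reset(sched_job);

        /*
         * Push the remaining mirror-list jobs back to the hardware and
         * unpark the scheduler thread.
         */
        drm_sched_resubmit_jobs(sched);
        drm_sched_start(sched, true);
}

A driver that instead returns early from the callback never reaches
drm_sched_stop(), so the removed job is never reinserted; that is the
breakage Lucas describes above.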
Thread overview: 65+ messages
2019-11-25 20:51 [PATCH v4] drm/scheduler: Avoid accessing freed bad job Andrey Grodzovsky
2019-11-25 21:44 ` Deng, Emily
2019-11-26 0:09 ` Grodzovsky, Andrey
2019-11-26 15:36 ` Deucher, Alexander
2019-11-26 15:37 ` Andrey Grodzovsky
2019-11-27 0:41 ` Deng, Emily
2019-12-02 19:24 ` Deng, Emily
2019-12-03 19:10 ` Andrey Grodzovsky
2019-12-03 19:44 ` Deucher, Alexander
2019-12-03 19:57 ` Andrey Grodzovsky
2019-12-03 19:59 ` Deucher, Alexander
2019-12-03 20:32 ` Andrey Grodzovsky
2019-12-03 20:58 ` Deng, Emily
2019-12-03 19:53 ` Deng, Emily
2020-02-05 18:24 ` Lucas Stach
2020-02-06 11:10 ` Lucas Stach
2020-02-06 11:49 ` Christian König
2020-02-06 14:49 ` Alex Deucher
2020-02-06 14:51 ` Christian König [this message]
2020-02-06 15:49 ` Andrey Grodzovsky
2020-02-10 16:55 ` Andrey Grodzovsky
2020-02-10 21:50 ` Luben Tuikov
2020-02-11 15:55 ` Andrey Grodzovsky
2020-02-11 21:27 ` Andrey Grodzovsky
2020-02-12 0:53 ` Luben Tuikov
2020-02-12 16:33 ` Andrey Grodzovsky
2020-07-21 11:03 ` Lucas Stach
2020-07-21 13:36 ` Andrey Grodzovsky
2020-07-21 13:39 ` Christian König
2020-07-21 13:42 ` Andrey Grodzovsky
2020-07-21 18:29 ` Luben Tuikov
2020-11-25 3:17 ` [PATCH 0/6] Allow to extend the timeout without jobs disappearing Luben Tuikov
2020-11-25 3:17 ` [PATCH 1/6] drm/scheduler: "node" --> "list" Luben Tuikov
2020-11-25 9:44 ` Christian König
2020-11-25 3:17 ` [PATCH 2/6] gpu/drm: ring_mirror_list --> pending_list Luben Tuikov
2020-11-25 9:47 ` Christian König
2020-11-25 16:42 ` Luben Tuikov
2020-11-25 3:17 ` [PATCH 3/6] drm/scheduler: Job timeout handler returns status Luben Tuikov
2020-11-25 4:41 ` kernel test robot
2020-11-25 9:50 ` Christian König
2020-11-25 16:48 ` Luben Tuikov
2020-11-25 11:04 ` Steven Price
2020-11-25 11:15 ` Lucas Stach
2020-11-25 11:22 ` Steven Price
2020-11-25 11:47 ` Lucas Stach
2020-11-25 12:41 ` Christian König
2020-11-26 15:06 ` Andrey Grodzovsky
2020-11-25 3:17 ` [PATCH 4/6] drm/scheduler: Essentialize the job done callback Luben Tuikov
2020-11-25 9:51 ` Christian König
2020-11-25 3:17 ` [PATCH 5/6] drm/amdgpu: Don't hardcode thread name length Luben Tuikov
2020-11-25 9:55 ` Christian König
2020-11-25 17:01 ` Luben Tuikov
2020-11-26 8:11 ` Christian König
2020-11-25 3:17 ` [PATCH 6/6] drm/sched: Make use of a "done" thread Luben Tuikov
2020-11-25 10:10 ` Christian König
2020-11-26 0:24 ` Luben Tuikov
2020-11-25 11:09 ` Steven Price
2020-11-26 0:30 ` Luben Tuikov
2020-02-07 15:26 ` [PATCH v4] drm/scheduler: Avoid accessing freed bad job Daniel Vetter