From: "Christian König" <christian.koenig@amd.com>
To: Andrey Grodzovsky <andrey.grodzovsky@amd.com>,
dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
linux-pci@vger.kernel.org, ckoenig.leichtzumerken@gmail.com,
daniel.vetter@ffwll.ch, Harry.Wentland@amd.com
Cc: ppaalanen@gmail.com, Alexander.Deucher@amd.com,
gregkh@linuxfoundation.org, helgaas@kernel.org,
Felix.Kuehling@amd.com
Subject: Re: [PATCH v7 13/16] drm/scheduler: Fix hang when sched_entity released
Date: Tue, 18 May 2021 18:33:38 +0200
Message-ID: <8228ea6b-4faf-bb7e-aaf4-8949932e869a@amd.com>
In-Reply-To: <da1f9706-d918-cff8-2807-25da0c01fcde@amd.com>
On 18.05.21 at 18:17, Andrey Grodzovsky wrote:
>
>
> On 2021-05-18 11:15 a.m., Christian König wrote:
>> On 18.05.21 at 17:03, Andrey Grodzovsky wrote:
>>>
>>> On 2021-05-18 10:07 a.m., Christian König wrote:
>>>> In a separate discussion with Daniel we once more iterated over the
>>>> dma_resv requirements and I came to the conclusion that this
>>>> approach here won't work reliably.
>>>>
>>>> The problem is as follows:
>>>> 1. Device A schedules some rendering into a buffer and exports it as a
>>>> DMA-buf.
>>>> 2. Device B imports the DMA-buf and wants to consume the rendering;
>>>> for this the fence of device A is replaced with a new operation.
>>>> 3. Device B is hot-unplugged and the new operation is canceled/never
>>>> scheduled.
>>>>
>>>> The problem is now that we can't do this since the operation of
>>>> device A is still running and by signaling our fences we run into
>>>> the problem of potential memory corruption.
>
> By signaling s_fence->finished of the canceled operation from the
> removed device B we in fact cause memory corruption for the uncompleted
> job still running on device A? Because there is someone waiting to
> read/write from the imported buffer on device B, and they only wait for
> the s_fence->finished of device B that we signaled
> in drm_sched_entity_kill_jobs?
Exactly that, yes.
In other words, when you have a dependency chain like A->B->C, then memory
management only waits for C before freeing up the memory, for example.
When C is now signaled because the device is hot-unplugged before A or B
have finished, they are essentially accessing freed-up memory.
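The only clean way out I can see is the option from my previous mail: wait
for the remaining dependencies of the unscheduled jobs before signaling
their fences as canceled. Very rough sketch of that direction only --
drm_sched_entity_kill_job() and job_for_each_pending_dep() are made-up
names for illustration and the error code is an arbitrary choice, this is
not a finished patch:

static void drm_sched_entity_kill_job(struct drm_sched_job *job)
{
	struct dma_fence *dep;

	/* Placeholder for however the remaining, not yet consumed
	 * dependencies of an unscheduled job get enumerated. */
	job_for_each_pending_dep(job, dep)
		/* Block until the producer (device A above) is really
		 * done, so nobody frees memory that is still written. */
		dma_fence_wait(dep, false);

	/* Only now is it safe to complete the job as canceled. */
	dma_fence_set_error(&job->s_fence->finished, -ECANCELED);
	dma_fence_signal(&job->s_fence->finished);
}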
Christian.
>
> Andrey
>
>>>
>>>
>>> I am not sure this problem you describe above is related to this patch.
>>
>> Well it is kind of related.
>>
>>> Here we purely expand the criteria for when sched_entity is
>>> considered idle in order to prevent a hang on device removal.
>>
>> And exactly that is problematic. See, the jobs on the entity need to
>> cleanly wait for their dependencies before they can be completed.
>>
>> drm_sched_entity_kill_jobs() is also not handling that correctly at
>> the moment; we only wait for the last scheduled fence but not for the
>> dependencies of the job.
>>
>>> Were you addressing the patch from yesterday in which you commented
>>> that you found a problem with how we finish fences? It was
>>> '[PATCH v7 12/16] drm/amdgpu: Fix hang on device removal.'
>>>
>>> Also, in the patch series as it is now we only signal HW fences for the
>>> extracted device B; we are not touching any other fences. In fact, as
>>> you may remember, I dropped all new logic for forcing fence completion
>>> in this patch series and only call amdgpu_fence_driver_force_completion
>>> for the HW fences of the extracted device, as is done today anyway.
>>
>> Signaling hardware fences is unproblematic since they are emitted
>> when the software scheduling is already completed.
>>
>> Christian.
>>
>>>
>>> Andrey
>>>
>>>>
>>>> Not sure how to handle that case. One possibility would be to wait
>>>> for all dependencies of unscheduled jobs before signaling their
>>>> fences as canceled.
>>>>
>>>> Christian.
>>>>
>>>> On 12.05.21 at 16:26, Andrey Grodzovsky wrote:
>>>>> Problem: If the scheduler is already stopped by the time the sched_entity
>>>>> is released and the entity's job_queue is not empty, I encountered
>>>>> a hang in drm_sched_entity_flush. This is because
>>>>> drm_sched_entity_is_idle never becomes true.
>>>>>
>>>>> Fix: In drm_sched_fini detach all sched_entities from the
>>>>> scheduler's run queues. This will satisfy drm_sched_entity_is_idle.
>>>>> Also wake up all those processes stuck in sched_entity flushing,
>>>>> as the scheduler main thread which would wake them up is stopped by now.
>>>>>
>>>>> v2:
>>>>> Reverse order of drm_sched_rq_remove_entity and marking
>>>>> s_entity as stopped to prevent reinsertion back into the rq due
>>>>> to a race.
>>>>>
>>>>> v3:
>>>>> Drop drm_sched_rq_remove_entity, only modify entity->stopped
>>>>> and check for it in drm_sched_entity_is_idle
>>>>>
>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>> ---
>>>>> drivers/gpu/drm/scheduler/sched_entity.c | 3 ++-
>>>>> drivers/gpu/drm/scheduler/sched_main.c | 24 ++++++++++++++++++++++++
>>>>> 2 files changed, 26 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>> index 0249c7450188..2e93e881b65f 100644
>>>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>> @@ -116,7 +116,8 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
>>>>>  	rmb(); /* for list_empty to work without lock */
>>>>>
>>>>>  	if (list_empty(&entity->list) ||
>>>>> -	    spsc_queue_count(&entity->job_queue) == 0)
>>>>> +	    spsc_queue_count(&entity->job_queue) == 0 ||
>>>>> +	    entity->stopped)
>>>>>  		return true;
>>>>>
>>>>>  	return false;
>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> index 8d1211e87101..a2a953693b45 100644
>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> @@ -898,9 +898,33 @@ EXPORT_SYMBOL(drm_sched_init);
>>>>>   */
>>>>>  void drm_sched_fini(struct drm_gpu_scheduler *sched)
>>>>>  {
>>>>> +	struct drm_sched_entity *s_entity;
>>>>> +	int i;
>>>>> +
>>>>>  	if (sched->thread)
>>>>>  		kthread_stop(sched->thread);
>>>>>
>>>>> +	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>>>> +		struct drm_sched_rq *rq = &sched->sched_rq[i];
>>>>> +
>>>>> +		if (!rq)
>>>>> +			continue;
>>>>> +
>>>>> +		spin_lock(&rq->lock);
>>>>> +		list_for_each_entry(s_entity, &rq->entities, list)
>>>>> +			/*
>>>>> +			 * Prevents reinsertion and marks job_queue as idle,
>>>>> +			 * it will be removed from rq in drm_sched_entity_fini
>>>>> +			 * eventually
>>>>> +			 */
>>>>> +			s_entity->stopped = true;
>>>>> +		spin_unlock(&rq->lock);
>>>>> +
>>>>> +	}
>>>>> +
>>>>> +	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
>>>>> +	wake_up_all(&sched->job_scheduled);
>>>>> +
>>>>>  	/* Confirm no work left behind accessing device structures */
>>>>>  	cancel_delayed_work_sync(&sched->work_tdr);
>>>>
>>
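For context, the hang this patch addresses sits in drm_sched_entity_flush():
it blocks on sched->job_scheduled until drm_sched_entity_is_idle() returns
true, which is why marking the entities as stopped and the wake_up_all()
above are enough to unblock it. Roughly paraphrased from memory, not the
verbatim upstream code:

	/* flush path: wait until the entity reports idle, woken via
	 * sched->job_scheduled */
	if (current->flags & PF_EXITING) {
		if (timeout)
			ret = wait_event_timeout(sched->job_scheduled,
						 drm_sched_entity_is_idle(entity),
						 timeout);
	} else {
		wait_event_killable(sched->job_scheduled,
				    drm_sched_entity_is_idle(entity));
	}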
Thread overview: 64+ messages
2021-05-12 14:26 [PATCH v7 00/16] RFC Support hot device unplug in amdgpu Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 01/16] drm/ttm: Remap all page faults to per process dummy page Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 02/16] drm/amdgpu: Split amdgpu_device_fini into early and late Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 03/16] drm/amdkfd: Split kfd suspend from device exit Andrey Grodzovsky
2021-05-12 20:33 ` Felix Kuehling
2021-05-12 20:38 ` Andrey Grodzovsky
2021-05-20 3:20 ` [PATCH] drm/amdgpu: Add early fini callback Andrey Grodzovsky
2021-05-20 3:29 ` Felix Kuehling
2021-05-20 3:58 ` Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 04/16] " Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 05/16] drm/amdgpu: Handle IOMMU enabled case Andrey Grodzovsky
2021-05-14 14:41 ` Andrey Grodzovsky
2021-05-14 16:25 ` Felix Kuehling
2021-05-14 16:26 ` Andrey Grodzovsky
2021-05-17 14:38 ` [PATCH] " Andrey Grodzovsky
2021-05-17 14:48 ` Felix Kuehling
2021-05-12 14:26 ` [PATCH v7 06/16] drm/amdgpu: Remap all page faults to per process dummy page Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 07/16] PCI: Add support for dev_groups to struct pci_driver Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 08/16] drm/amdgpu: Convert driver sysfs attributes to static attributes Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 09/16] drm/amdgpu: Guard against write accesses after device removal Andrey Grodzovsky
2021-05-12 20:17 ` Alex Deucher
2021-05-12 20:30 ` Andrey Grodzovsky
2021-05-12 20:50 ` Alex Deucher
2021-05-13 14:47 ` Andrey Grodzovsky
2021-05-13 14:54 ` Alex Deucher
2021-05-12 14:26 ` [PATCH v7 10/16] drm/sched: Make timeout timer rearm conditional Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 11/16] drm/amdgpu: Prevent any job recoveries after device is unplugged Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 12/16] drm/amdgpu: Fix hang on device removal Andrey Grodzovsky
2021-05-14 14:42 ` Andrey Grodzovsky
2021-05-17 14:40 ` Andrey Grodzovsky
2021-05-17 17:39 ` Alex Deucher
2021-05-17 19:39 ` Christian König
2021-05-17 19:46 ` Andrey Grodzovsky
2021-05-17 19:54 ` Christian König
2021-05-12 14:26 ` [PATCH v7 13/16] drm/scheduler: Fix hang when sched_entity released Andrey Grodzovsky
2021-05-18 14:07 ` Christian König
2021-05-18 15:03 ` Andrey Grodzovsky
2021-05-18 15:15 ` Christian König
2021-05-18 16:17 ` Andrey Grodzovsky
2021-05-18 16:33 ` Christian König [this message]
2021-05-18 17:43 ` Andrey Grodzovsky
2021-05-18 18:02 ` Christian König
2021-05-18 18:09 ` Andrey Grodzovsky
2021-05-18 18:13 ` Christian König
2021-05-18 18:48 ` Andrey Grodzovsky
2021-05-18 20:56 ` Andrey Grodzovsky
2021-05-19 10:57 ` Christian König
2021-05-19 11:03 ` Andrey Grodzovsky
2021-05-19 11:46 ` Christian König
2021-05-19 11:51 ` Andrey Grodzovsky
2021-05-19 11:56 ` Christian König
2021-05-19 14:14 ` [PATCH] drm/sched: Avoid data corruptions Andrey Grodzovsky
2021-05-19 14:15 ` Christian König
2021-05-12 14:26 ` [PATCH v7 14/16] drm/amd/display: Remove superfluous drm_mode_config_cleanup Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 15/16] drm/amdgpu: Verify DMA opearations from device are done Andrey Grodzovsky
2021-05-12 14:26 ` [PATCH v7 16/16] drm/amdgpu: Unmap all MMIO mappings Andrey Grodzovsky
2021-05-14 14:42 ` Andrey Grodzovsky
2021-05-17 14:41 ` Andrey Grodzovsky
2021-05-17 17:43 ` Alex Deucher
2021-05-17 18:46 ` Andrey Grodzovsky
2021-05-17 18:56 ` Alex Deucher
2021-05-17 19:22 ` Andrey Grodzovsky
2021-05-17 19:31 ` [PATCH] " Andrey Grodzovsky
2021-05-18 14:01 ` Andrey Grodzovsky