From: ebiederm@xmission.com (Eric W. Biederman)
To: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Cc: "Panariti, David" <David.Panariti@amd.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"Deucher, Alexander" <Alexander.Deucher@amd.com>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	"oleg@redhat.com" <oleg@redhat.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [PATCH 3/3] drm/amdgpu: Switch to interrupted wait to recover from ring hang.
Date: Wed, 25 Apr 2018 15:55:24 -0500	[thread overview]
Message-ID: <87h8nzt39f.fsf@xmission.com> (raw)
In-Reply-To: <d5a1b82b-a9c2-e6e8-1957-aab8acac242c@amd.com> (Andrey Grodzovsky's message of "Wed, 25 Apr 2018 13:17:40 -0400")

Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> writes:

> On 04/24/2018 12:30 PM, Eric W. Biederman wrote:
>> "Panariti, David" <David.Panariti@amd.com> writes:
>>
>>> Andrey Grodzovsky <andrey.grodzovsky@amd.com> writes:
>>>> Kind of dma_fence_wait_killable, except that we don't have such API
>>>> (maybe worth adding?)
>>> Depends on how many places it would be called from, or how many you
>>> think it might be called from. You can always factor on the 2nd time
>>> it's needed. Factoring, IMO, rarely hurts. The factored function can
>>> easily be visited using `M-.' ;->
>>>
>>> Also, if the wait could be very long, would a log message, something
>>> like "xxx has run for Y seconds", help? I personally hate hanging
>>> with no info.
>> Ugh. This loop appears susceptible to losing wake-ups. There are
>> races between when a wake-up happens, when we clear the sleeping
>> state, and when we test the state to see if we should stay awake. So
>> yes, implementing a dma_fence_wait_killable that handles all of that
>> correctly sounds like a very good idea.
>
> I am not clear here - could you be more specific about what races will
> happen here? More below.
>>
>> Eric
>>
>>
>>>> If the ring is hanging for some reason, allow recovering the wait by
>>>> sending a fatal signal.
>>>>
>>>> Originally-by: David Panariti <David.Panariti@amd.com>
>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>> ---
>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 14 ++++++++++----
>>>>  1 file changed, 10 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> index eb80edf..37a36af 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>>> @@ -421,10 +421,16 @@ int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx, unsigned ring_id)
>>>>
>>>>  	if (other) {
>>>>  		signed long r;
>>>> -		r = dma_fence_wait_timeout(other, false, MAX_SCHEDULE_TIMEOUT);
>>>> -		if (r < 0) {
>>>> -			DRM_ERROR("Error (%ld) waiting for fence!\n", r);
>>>> -			return r;
>>>> +
>>>> +		while (true) {
>>>> +			if ((r = dma_fence_wait_timeout(other, true,
>>>> +					MAX_SCHEDULE_TIMEOUT)) >= 0)
>>>> +				return 0;
>>>> +
>
> Do you mean that by the time I reach here some other thread from my
> group might already have dequeued SIGKILL, since it's a shared signal,
> and hence fatal_signal_pending will return false? Or are you talking
> about the dma_fence_wait_timeout implementation in
> dma_fence_default_wait with schedule_timeout?

Given Oleg's earlier comment about the scheduler having special cases
for signals I might be wrong. But in general there is a pattern:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (loop_is_done())
			break;
		schedule();
	}
	set_current_state(TASK_RUNNING);

If you violate that pattern by testing for a condition without having
first set your task as TASK_UNINTERRUPTIBLE (or whatever your sleep
state is), then it is possible to miss a wake-up and never retest the
condition.
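The discipline in the pattern above has a direct userspace analogue: with a
condition variable, the predicate is tested only while holding the lock, so
a wake-up cannot slip into the window between the test and the sleep. A
minimal Python sketch (illustrative names only, not kernel API):

```python
import threading

class Waiter:
    """Userspace analogue of the set_current_state()/schedule() pattern:
    hold the lock (the analogue of marking yourself as sleeping) before
    testing the condition, so the wake-up cannot be lost."""

    def __init__(self):
        self.cond = threading.Condition()
        self.done = False

    def wake(self):
        with self.cond:
            self.done = True      # publish the condition...
            self.cond.notify()    # ...then deliver the wake-up

    def wait(self):
        with self.cond:           # analogue of set_current_state()
            while not self.done:  # re-test after every wake-up
                self.cond.wait()  # analogue of schedule()
```

Testing the flag outside the lock, then sleeping, would reintroduce exactly
the race described above: the waker could set the flag and notify in the
gap, and the sleeper would never be woken.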
Thus I am quite concerned that there is a subtle corner case where you
can miss a wake-up and not retest fatal_signal_pending(). Given that
there is a timeout, the worst case might have you sleep for
MAX_SCHEDULE_TIMEOUT instead of indefinitely.

Without a comment explaining why this is safe, or the
fatal_signal_pending check integrated into dma_fence_wait_timeout, I am
not comfortable with this loop.

Eric

>>>> +			if (fatal_signal_pending(current)) {
>>>> +				DRM_ERROR("Error (%ld) waiting for fence!\n", r);
>>>> +				return r;
>>>> +			}
>>>>  		}
>>>>  	}
>>>>
>>>> --
>>>> 2.7.4
>>>>
>> Eric
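The loop shape the patch is reaching for can be sketched in userspace. This
is a hedged Python analogue, not the proposed kernel code: `fence` and
`fatal_signal` stand in for the dma_fence and for fatal_signal_pending(),
and the short poll timeout papers over the lost-wakeup window that real
kernel code would need to close by integrating the signal check into the
wait itself, as suggested above.

```python
import threading

def fence_wait_killable(fence: threading.Event,
                        fatal_signal: threading.Event,
                        poll: float = 0.01) -> int:
    """Sketch of a killable fence wait: block on the fence, but re-check
    the fatal-signal condition every time the wait returns, so a kill can
    interrupt an otherwise unbounded wait. Returns 0 on success, a
    negative errno-style value if killed."""
    while True:
        if fence.wait(timeout=poll):
            return 0              # fence signalled, wait is over
        if fatal_signal.is_set():  # stands in for fatal_signal_pending()
            return -512            # -ERESTARTSYS-style error code
```

With the check folded into the wait primitive itself there is no window in
which a kill arrives after the check but before the sleep; the polling
version here only bounds that window by `poll`.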