amd-gfx.lists.freedesktop.org archive mirror
From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: "Pan, Xinhui" <Xinhui.Pan@amd.com>,
	"Tao, Yintian" <Yintian.Tao@amd.com>,
	 "Koenig, Christian" <Christian.Koenig@amd.com>,
	"Deucher, Alexander" <Alexander.Deucher@amd.com>
Cc: "amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>
Subject: Re: [PATCH] drm/amdgpu: hold the reference of finished fence
Date: Tue, 24 Mar 2020 10:52:28 -0400	[thread overview]
Message-ID: <be0e40cf-3ecf-ebe8-2d73-1dd937450c18@amd.com> (raw)
In-Reply-To: <SN6PR12MB2800A5049C6AB62B7A002AC987F10@SN6PR12MB2800.namprd12.prod.outlook.com>



This is only for the guilty job, which was removed from the
ring_mirror_list due to completion and hence will neither be resubmitted
by recovery nor freed by the usual flow in drm_sched_get_cleanup_job
(see drm_sched_stop).

Andrey

On 3/24/20 10:45 AM, Pan, Xinhui wrote:
>
>
> Does this issue occur during GPU recovery?
> I just checked the code: a fence timeout will free the job and put its
> fence, but GPU recovery might resubmit the job.
> Correct me if I am wrong.
> ------------------------------------------------------------------------
> *From:* amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of 
> Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
> *Sent:* Tuesday, March 24, 2020 11:40:06 AM
> *To:* Tao, Yintian <Yintian.Tao@amd.com>; Koenig, Christian 
> <Christian.Koenig@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>
> *Cc:* amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
> *Subject:* Re: [PATCH] drm/amdgpu: hold the reference of finished fence
>
> On 3/23/20 10:22 AM, Yintian Tao wrote:
> > There is one corner case at dma_fence_signal_locked
> > which raises the NULL pointer problem shown below.
> > ->dma_fence_signal
> >      ->dma_fence_signal_locked
> >        ->test_and_set_bit
> > here dma_fence_release is triggered because the fence refcount has dropped to zero.
>
>
> Did you find out why the zero refcount on the finished fence happens
> before the fence was signaled ? The finished fence is created with
> refcount set to 1 in drm_sched_fence_create->dma_fence_init and then the
> refcount is decremented in
> drm_sched_main->amdgpu_job_free_cb->drm_sched_job_cleanup. This should
> only happen after the fence is already signaled (see
> drm_sched_get_cleanup_job). On top of that, the finished fence is
> referenced from other places (e.g. entity->last_scheduled, etc.)...
>
>
> >
> > ->dma_fence_put
> >      ->dma_fence_release
> >        ->drm_sched_fence_release_scheduled
> >            ->call_rcu
> > here the union field “cb_list” of the finished fence
> > is clobbered, because struct rcu_head contains two pointers,
> > the same layout as the struct list_head cb_list
> >
> > Therefore, hold a reference on the finished fence at 
> drm_sched_process_job
> > to prevent the NULL pointer dereference during the finished fence's dma_fence_signal
> >
> > [  732.912867] BUG: kernel NULL pointer dereference, address: 0000000000000008
> > [  732.914815] #PF: supervisor write access in kernel mode
> > [  732.915731] #PF: error_code(0x0002) - not-present page
> > [  732.916621] PGD 0 P4D 0
> > [  732.917072] Oops: 0002 [#1] SMP PTI
> > [  732.917682] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G           OE     5.4.0-rc7 #1
> > [  732.918980] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> > [  732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
> > [  732.938569] Call Trace:
> > [  732.939003]  <IRQ>
> > [  732.939364]  dma_fence_signal+0x29/0x50
> > [  732.940036]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
> > [  732.940996]  drm_sched_process_job+0x34/0xa0 [gpu_sched]
> > [  732.941910]  dma_fence_signal_locked+0x85/0x100
> > [  732.942692]  dma_fence_signal+0x29/0x50
> > [  732.943457]  amdgpu_fence_process+0x99/0x120 [amdgpu]
> > [  732.944393] sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
> >
> > v2: hold the finished fence at drm_sched_process_job instead of
> >      amdgpu_fence_process
> > v3: restore the blank line
> >
> > Signed-off-by: Yintian Tao <yttao@amd.com>
> > ---
> >   drivers/gpu/drm/scheduler/sched_main.c | 2 ++
> >   1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > index a18eabf692e4..8e731ed0d9d9 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -651,7 +651,9 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
> >
> >        trace_drm_sched_process_job(s_fence);
> >
> > +     dma_fence_get(&s_fence->finished);
> >        drm_sched_fence_finished(s_fence);
>
>
> If the fence was already released during the call to
> drm_sched_fence_finished->dma_fence_signal->..., why is it safe to
> reference the s_fence just before that call ? Can't it already be
> released by this time ?
>
> Andrey
>
>
>
> > +     dma_fence_put(&s_fence->finished);
> >        wake_up_interruptible(&sched->wake_up_worker);
> >   }
> >


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Thread overview: 10+ messages
2020-03-23 14:22 [PATCH] drm/amdgpu: hold the reference of finished fence Yintian Tao
2020-03-24  3:40 ` Andrey Grodzovsky
2020-03-24 14:45   ` Pan, Xinhui
2020-03-24 14:52     ` Andrey Grodzovsky [this message]
2020-03-24 17:58       ` Christian König
2020-03-24 19:50         ` Grodzovsky, Andrey
  -- strict thread matches above, loose matches on Subject: below --
2020-03-23 14:14 Yintian Tao
2020-03-23 11:49 Yintian Tao
2020-03-23 12:05 ` Christian König
2020-03-23 12:26   ` Tao, Yintian
