amd-gfx.lists.freedesktop.org archive mirror
From: "Tao, Yintian" <Yintian.Tao@amd.com>
To: "Koenig, Christian" <Christian.Koenig@amd.com>,
	"Deucher, Alexander" <Alexander.Deucher@amd.com>
Cc: "amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>
Subject: RE: [PATCH] drm/amdgpu: hold the reference of finished fence
Date: Mon, 23 Mar 2020 12:26:28 +0000	[thread overview]
Message-ID: <MN2PR12MB303933FD1C0EF7F353ED7B94E5F00@MN2PR12MB3039.namprd12.prod.outlook.com> (raw)
In-Reply-To: <673cbed6-557a-e3c6-3871-799d86e4a5cd@amd.com>

Hi Christian,

The faulting fence is the scheduler's finished fence, not the hw ring fence. Please see the call trace below:
[  732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
[  732.939364]  dma_fence_signal+0x29/0x50	===>drm sched finished fence
[  732.940036]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
[  732.940996]  drm_sched_process_job+0x34/0xa0 [gpu_sched]
[  732.941910]  dma_fence_signal_locked+0x85/0x100
[  732.942692]  dma_fence_signal+0x29/0x50 	====> hw fence
[  732.943457]  amdgpu_fence_process+0x99/0x120 [amdgpu]
[  732.944393]  sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
[  732.945398]  amdgpu_irq_dispatch+0xaf/0x1d0 [amdgpu]
[  732.946317]  amdgpu_ih_process+0x8c/0x110 [amdgpu]
[  732.947206]  amdgpu_irq_handler+0x24/0xa0 [amdgpu]
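
For reference, the overlap described in the patch below boils down to this layout. A minimal sketch, modelled on the v5.4-era struct definitions; the *_sketch names are illustrative, not the real identifiers, and the surrounding fields are only indicated:

    struct rcu_head_sketch {                /* models struct rcu_head */
            struct rcu_head_sketch *next;
            void (*func)(struct rcu_head_sketch *head);
    };

    struct list_head_sketch {               /* models struct list_head */
            struct list_head_sketch *next, *prev;
    };

    struct dma_fence_sketch {               /* models struct dma_fence */
            /* ... lock, ops ... */
            union {
                    struct rcu_head_sketch rcu;       /* armed by call_rcu() in the release path */
                    struct list_head_sketch cb_list;  /* walked by dma_fence_signal_locked()     */
            };
            /* ... context, seqno, flags, refcount, error ... */
    };

Once the release path hands the fence to call_rcu(), rcu.next and rcu.func overwrite cb_list.next and cb_list.prev, so the callback-list walk in dma_fence_signal_locked() dereferences clobbered pointers.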

Best Regards
Yintian Tao
-----Original Message-----
From: Koenig, Christian <Christian.Koenig@amd.com> 
Sent: 23 March 2020 20:06
To: Tao, Yintian <Yintian.Tao@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>
Cc: amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: hold the reference of finished fence

I've just double checked, and your analysis actually can't be correct.

When we call dma_fence_signal() in amdgpu_fence_process() we still have a reference to the fence.

See the code here:
>                 r = dma_fence_signal(fence);
>                 if (!r)
>                         DMA_FENCE_TRACE(fence, "signaled from irq context\n");
>                 else
>                         BUG();
>
>                 dma_fence_put(fence);

So I'm not sure how you ran into the crash in the first place; this is most likely something else.

Regards,
Christian.

On 23.03.20 at 12:49, Yintian Tao wrote:
> There is one corner case in dma_fence_signal_locked() which will
> raise the NULL pointer dereference shown below.
> ->dma_fence_signal
>      ->dma_fence_signal_locked
> 	->test_and_set_bit
> here dma_fence_release() is triggered because the fence refcount has dropped to zero.
>
> ->dma_fence_put
>      ->dma_fence_release
> 	->drm_sched_fence_release_scheduled
> 	    ->call_rcu
> here the union field "cb_list" of the finished fence is clobbered (set to
> NULL), because struct rcu_head consists of two pointers and occupies the
> same storage as struct list_head cb_list.
>
> Therefore, hold a reference on the finished fence at amdgpu_job_run()
> to prevent the NULL pointer dereference during dma_fence_signal().
>
> [  732.912867] BUG: kernel NULL pointer dereference, address: 0000000000000008
> [  732.914815] #PF: supervisor write access in kernel mode
> [  732.915731] #PF: error_code(0x0002) - not-present page
> [  732.916621] PGD 0 P4D 0
> [  732.917072] Oops: 0002 [#1] SMP PTI
> [  732.917682] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G           OE     5.4.0-rc7 #1
> [  732.918980] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> [  732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
> [  732.938569] Call Trace:
> [  732.939003]  <IRQ>
> [  732.939364]  dma_fence_signal+0x29/0x50
> [  732.940036]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
> [  732.940996]  drm_sched_process_job+0x34/0xa0 [gpu_sched]
> [  732.941910]  dma_fence_signal_locked+0x85/0x100
> [  732.942692]  dma_fence_signal+0x29/0x50
> [  732.943457]  amdgpu_fence_process+0x99/0x120 [amdgpu]
> [  732.944393]  sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
>
> Signed-off-by: Yintian Tao <yttao@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 19 ++++++++++++++++++-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  2 ++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  |  3 +++
>   3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 7531527067df..03573eff660a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -52,7 +52,7 @@
>   
>   struct amdgpu_fence {
>   	struct dma_fence base;
> -
> +	struct dma_fence *finished;
>   	/* RB, DMA, etc. */
>   	struct amdgpu_ring		*ring;
>   };
> @@ -149,6 +149,7 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
>   
>   	seq = ++ring->fence_drv.sync_seq;
>   	fence->ring = ring;
> +	fence->finished = NULL;
>   	dma_fence_init(&fence->base, &amdgpu_fence_ops,
>   		       &ring->fence_drv.lock,
>   		       adev->fence_context + ring->idx,
> @@ -182,6 +183,21 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
>   	return 0;
>   }
>   
> +void amdgpu_fence_get_finished(struct dma_fence *base,
> +			       struct dma_fence *finished)
> +{
> +	struct amdgpu_fence *afence = to_amdgpu_fence(base);
> +
> +	afence->finished = dma_fence_get(finished);
> +}
> +
> +void amdgpu_fence_put_finished(struct dma_fence *base)
> +{
> +	struct amdgpu_fence *afence = to_amdgpu_fence(base);
> +
> +	dma_fence_put(afence->finished);
> +}
> +
>   /**
>    * amdgpu_fence_emit_polling - emit a fence on the requeste ring
>    *
> @@ -276,6 +292,7 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
>   			BUG();
>   
>   		dma_fence_put(fence);
> +		amdgpu_fence_put_finished(fence);
>   		pm_runtime_mark_last_busy(adev->ddev->dev);
>   		pm_runtime_put_autosuspend(adev->ddev->dev);
>   	} while (last_seq != seq);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 4981e443a884..deb2aeeadfb3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -229,6 +229,8 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job)
>   				       &fence);
>   		if (r)
>   			DRM_ERROR("Error scheduling IBs (%d)\n", r);
> +		else
> +			amdgpu_fence_get_finished(fence, finished);
>   	}
>   	/* if gpu reset, hw fence will be replaced here */
>   	dma_fence_put(job->fence);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 448c76cbf3ed..fd4da91859aa 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -96,6 +96,9 @@ void amdgpu_fence_driver_suspend(struct amdgpu_device *adev);
>   void amdgpu_fence_driver_resume(struct amdgpu_device *adev);
>   int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence,
>   		      unsigned flags);
> +void amdgpu_fence_get_finished(struct dma_fence *base,
> +			       struct dma_fence *finished);
> +void amdgpu_fence_put_finished(struct dma_fence *base);
>   int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s);
>   bool amdgpu_fence_process(struct amdgpu_ring *ring);
>   int amdgpu_fence_wait_empty(struct amdgpu_ring *ring);
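
Condensed, the change above pins the scheduler's finished fence next to the hw fence when the job is run and only drops that reference once the interrupt path has finished signalling. Roughly, as a sketch paraphrasing the diff rather than the literal upstream code:

    /* amdgpu_job_run(): the IBs were scheduled successfully, so take an
     * extra reference on the scheduler's finished fence and stash it in
     * the amdgpu (hw) fence. */
    amdgpu_fence_get_finished(fence, finished);

    /* amdgpu_fence_process() (IRQ path): the hw fence has been signalled
     * and its callbacks, including drm_sched_process_job() which signals
     * the finished fence, have run, so the extra reference can be
     * dropped again. */
    dma_fence_put(fence);               /* hw fence reference */
    amdgpu_fence_put_finished(fence);   /* extra finished-fence reference */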


Thread overview: 10+ messages
2020-03-23 11:49 [PATCH] drm/amdgpu: hold the reference of finished fence Yintian Tao
2020-03-23 12:05 ` Christian König
2020-03-23 12:26   ` Tao, Yintian [this message]
2020-03-23 14:14 Yintian Tao
2020-03-23 14:22 Yintian Tao
2020-03-24  3:40 ` Andrey Grodzovsky
2020-03-24 14:45   ` Pan, Xinhui
2020-03-24 14:52     ` Andrey Grodzovsky
2020-03-24 17:58       ` Christian König
2020-03-24 19:50         ` Grodzovsky, Andrey
