From: "Christian König" <christian.koenig@amd.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
	DRI Development <dri-devel@lists.freedesktop.org>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Steven Price <steven.price@arm.com>,
	Boris Brezillon <boris.brezillon@collabora.com>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Lee Jones <lee.jones@linaro.org>
Subject: Re: [PATCH] drm/sched: Barriers are needed for entity->last_scheduled
Date: Fri, 9 Jul 2021 08:57:55 +0200	[thread overview]
Message-ID: <14149638-6cc7-5281-c6b6-d6d08d13713f@amd.com> (raw)
In-Reply-To: <20210708215439.4093557-1-daniel.vetter@ffwll.ch>



On 08.07.21 at 23:54, Daniel Vetter wrote:
> It might be good enough on x86 with just READ_ONCE, but the write side
> should then at least be WRITE_ONCE because x86 has total store order.
>
> It's definitely not enough on arm.
>
> Fix this properly, which means
> - explain the need for the barrier in both places
> - point at the other side in each comment
>
> Also pull out the !sched_list case as the first check, so that the
> code flow is clearer.
>
> While at it, sprinkle some comments around because it was very
> non-obvious to me what's actually going on here and why.
>
> Note that we really need full barriers here; at first I thought
> store-release and load-acquire on ->last_scheduled would be enough,
> but we actually require ordering between that and the queue state.
>
> v2: Put smp_rmb() in the right place and fix up comment (Andrey)
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> Cc: Lee Jones <lee.jones@linaro.org>
> Cc: Boris Brezillon <boris.brezillon@collabora.com>
> ---
>   drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++--
>   1 file changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 64d398166644..6366006c0fcf 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>   		dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
>   
>   	dma_fence_put(entity->last_scheduled);
> +
>   	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
>   
> +	/*
> +	 * If the queue is empty we allow drm_sched_entity_select_rq() to
> +	 * locklessly access ->last_scheduled. This only works if we set the
> +	 * pointer before we dequeue and if we add a write barrier here.
> +	 */
> +	smp_wmb();
> +

All of that needs to be inside the spsc queue, not outside of it.

Otherwise drm_sched_entity_is_idle() won't work either and will cause a
lot of trouble during process tear down.

Christian.
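
[A minimal sketch of the idea, for illustration: fold the publish barrier
into the pop helper itself, so every lockless reader that pairs
spsc_queue_count() with other entity state -- drm_sched_entity_select_rq()
as well as drm_sched_entity_is_idle() -- inherits the ordering. The sketch
is hypothetical and simplified; the real pop in include/drm/spsc_queue.h
also has to handle the last-element/tail slow path, which is omitted here.

	/* Hypothetical, simplified variant of spsc_queue_pop() */
	static struct spsc_node *sketch_spsc_queue_pop(struct spsc_queue *queue)
	{
		struct spsc_node *node = READ_ONCE(queue->head);

		if (!node)
			return NULL;

		WRITE_ONCE(queue->head, READ_ONCE(node->next));

		/*
		 * Order everything the scheduler thread wrote before popping
		 * (e.g. entity->last_scheduled) against the count update that
		 * makes the queue look empty to lockless readers; pairs with
		 * the smp_rmb() after the spsc_queue_count() check on the
		 * reader side.
		 */
		smp_wmb();

		atomic_dec(&queue->job_count);
		return node;
	}

With the barrier inside the queue op, callers would no longer need to
remember the explicit smp_wmb() in drm_sched_entity_pop_job().]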

>   	spsc_queue_pop(&entity->job_queue);
>   	return sched_job;
>   }
> @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>   	struct drm_gpu_scheduler *sched;
>   	struct drm_sched_rq *rq;
>   
> -	if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
> +	/* single possible engine and already selected */
> +	if (!entity->sched_list)
> +		return;
> +
> +	/* queue non-empty, stay on the same engine */
> +	if (spsc_queue_count(&entity->job_queue))
>   		return;
>   
> -	fence = READ_ONCE(entity->last_scheduled);
> +	/*
> +	 * Only when the queue is empty are we guaranteed that the scheduler
> +	 * thread cannot change ->last_scheduled. To enforce ordering we need
> +	 * a read barrier here. See drm_sched_entity_pop_job() for the other
> +	 * side.
> +	 */
> +	smp_rmb();
> +
> +	fence = entity->last_scheduled;
> +
> +	/* stay on the same engine if the previous job hasn't finished */
>   	if (fence && !dma_fence_is_signaled(fence))
>   		return;
>   
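
[Condensed, for reference: the pairing the patch sets up between the two
sides, with only the ordering-relevant lines kept:

	/* scheduler thread, drm_sched_entity_pop_job() */
	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
	smp_wmb();
	spsc_queue_pop(&entity->job_queue);	/* makes the queue look empty */

	/* lockless reader, drm_sched_entity_select_rq() */
	if (spsc_queue_count(&entity->job_queue))
		return;				/* queue busy, keep the current rq */
	smp_rmb();
	fence = entity->last_scheduled;

If the reader sees the queue as empty, the smp_wmb()/smp_rmb() pair
guarantees it also sees the ->last_scheduled store that preceded the pop.
A store-release on ->last_scheduled alone would not order the later queue
update after it, and a load-acquire on ->last_scheduled would not order the
pointer load after the earlier count check, which is why full barriers tied
to the queue state are needed.]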


Thread overview: 84+ messages
2021-07-08 17:37 [PATCH v3 00/20] drm/sched dependency tracking and dma-resv fixes Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 01/20] drm/sched: entity->rq selection cannot fail Daniel Vetter
2021-07-09  6:53   ` Christian König
2021-07-09  7:14     ` Daniel Vetter
2021-07-09  7:23       ` Christian König
2021-07-09  8:00         ` Daniel Vetter
2021-07-09  8:11           ` Christian König
2021-07-08 17:37 ` [PATCH v3 02/20] drm/sched: Split drm_sched_job_init Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 03/20] drm/sched: Barriers are needed for entity->last_scheduled Daniel Vetter
2021-07-08 18:56   ` Andrey Grodzovsky
2021-07-08 19:53     ` Daniel Vetter
2021-07-08 21:54   ` [PATCH] " Daniel Vetter
2021-07-09  6:57     ` Christian König [this message]
2021-07-09  7:40       ` Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 04/20] drm/sched: Add dependency tracking Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 05/20] drm/sched: drop entity parameter from drm_sched_push_job Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 06/20] drm/sched: improve docs around drm_sched_entity Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 07/20] drm/panfrost: use scheduler dependency tracking Daniel Vetter
2021-07-12  9:19   ` Steven Price
2021-07-08 17:37 ` [PATCH v3 08/20] drm/lima: " Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 09/20] drm/v3d: Move drm_sched_job_init to v3d_job_init Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 10/20] drm/v3d: Use scheduler dependency handling Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 11/20] drm/etnaviv: " Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 12/20] drm/gem: Delete gem array fencing helpers Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 13/20] drm/sched: Don't store self-dependencies Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 14/20] drm/sched: Check locking in drm_sched_job_await_implicit Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 15/20] drm/msm: Don't break exclusive fence ordering Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 16/20] drm/msm: always wait for the exclusive fence Daniel Vetter
2021-07-09  8:48   ` Christian König
2021-07-09  9:15     ` Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 17/20] drm/etnaviv: Don't break exclusive fence ordering Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 18/20] drm/i915: delete exclude argument from i915_sw_fence_await_reservation Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 19/20] drm/i915: Don't break exclusive fence ordering Daniel Vetter
2021-07-08 17:37 ` [PATCH v3 20/20] dma-resv: Give the docs a do-over Daniel Vetter
2021-07-09  0:03 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/sched dependency tracking and dma-resv fixes (rev2) Patchwork
2021-07-09  0:29 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-07-09 15:27 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
