From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Cc: "Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
	"DRI Development" <dri-devel@lists.freedesktop.org>,
	"Steven Price" <steven.price@arm.com>,
	"Boris Brezillon" <boris.brezillon@collabora.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	"Lee Jones" <lee.jones@linaro.org>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v3 03/20] drm/sched: Barriers are needed for entity->last_scheduled
Date: Thu, 8 Jul 2021 21:53:45 +0200
Message-ID: <CAKMK7uGov-z-YB41DqrsEV=vgy5hBAmeEeuPpCBsUMFnNpeOpA@mail.gmail.com>
In-Reply-To: <9cc93adc-0a93-a0d5-422b-a57fac9bd450@amd.com>

On Thu, Jul 8, 2021 at 8:56 PM Andrey Grodzovsky
<andrey.grodzovsky@amd.com> wrote:
> On 2021-07-08 1:37 p.m., Daniel Vetter wrote:
> > It might be good enough on x86 with just READ_ONCE, but the write side
> > should then at least be WRITE_ONCE because x86 has total store order.
> >
> > It's definitely not enough on arm.
> >
> > Fix this properly, which means
> > - explain the need for the barrier in both places
> > - point at the other side in each comment
> >
> > Also pull out the !sched_list case as the first check, so that the
> > code flow is clearer.
> >
> > While at it sprinkle some comments around because it was very
> > non-obvious to me what's actually going on here and why.
> >
> > Note that we really need full barriers here, at first I thought
> > store-release and load-acquire on ->last_scheduled would be enough,
> > but we actually require ordering between that and the queue state.
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Cc: Steven Price <steven.price@arm.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > Cc: Lee Jones <lee.jones@linaro.org>
> > Cc: Boris Brezillon <boris.brezillon@collabora.com>
> > ---
> >   drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++--
> >   1 file changed, 25 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> > index 64d398166644..4e1124ed80e0 100644
> > --- a/drivers/gpu/drm/scheduler/sched_entity.c
> > +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> > @@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
> >               dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
> >
> >       dma_fence_put(entity->last_scheduled);
> > +
> >       entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
> >
> > +     /*
> > +      * if the queue is empty we allow drm_sched_job_arm() to locklessly
>
>
> Probably meant drm_sched_entity_select_rq here

Which is called from drm_sched_job_arm(), but yes. I'll switch it around.
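
For context, the call chain after the split in patch 02 looks roughly
like this (sketch only, body abbreviated):

    void drm_sched_job_arm(struct drm_sched_job *job)
    {
            struct drm_sched_entity *entity = job->entity;

            /* the lockless read of ->last_scheduled happens in here */
            drm_sched_entity_select_rq(entity);
            ...
    }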

> > +      * access ->last_scheduled. This only works if we set the pointer before
> > +      * we dequeue and if we have a write barrier here.
> > +      */
> > +     smp_wmb();
> > +
> >       spsc_queue_pop(&entity->job_queue);
> >       return sched_job;
> >   }
> > @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
> >       struct drm_gpu_scheduler *sched;
> >       struct drm_sched_rq *rq;
> >
> > -     if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
> > +     /* single possible engine and already selected */
> > +     if (!entity->sched_list)
> > +             return;
> > +
> > +     /* queue non-empty, stay on the same engine */
> > +     if (spsc_queue_count(&entity->job_queue))
> >               return;
>
>
> Shouldn't smp_rmb be here in between? Given the queue is empty, we want to
> be certain we are reading the most recent value of entity->last_scheduled.

Yeah I had a load_acquire barrier here earlier and then put the
smp_rmb() on the wrong side. Will fix.
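
For reference, the corrected pairing should look roughly like this
(a sketch of the intended ordering, not the final patch):

    /* write side, drm_sched_entity_pop_job(): */
    entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
    /* pairs with the smp_rmb() in drm_sched_entity_select_rq() */
    smp_wmb();
    spsc_queue_pop(&entity->job_queue);

    /* read side, drm_sched_entity_select_rq(): */
    if (spsc_queue_count(&entity->job_queue))
            return;
    /*
     * Pairs with the smp_wmb() above: only once the queue reads as
     * empty is ->last_scheduled guaranteed to be the value stored
     * before the pop.
     */
    smp_rmb();
    fence = entity->last_scheduled;

i.e. the smp_rmb() moves up, before the read of ->last_scheduled.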
>
> Andrey
>
>
>
> >
> > -     fence = READ_ONCE(entity->last_scheduled);
> > +     fence = entity->last_scheduled;
> > +
> > +     /*
> > +      * Only when the queue is empty are we guaranteed that the scheduler
> > +      * thread cannot change ->last_scheduled. To enforce ordering we need
> > +      * a read barrier here. See drm_sched_entity_pop_job() for the other
> > +      * side.
> > +      */
> > +     smp_rmb();
> > +
> > +     /* stay on the same engine if the previous job hasn't finished */
> >       if (fence && !dma_fence_is_signaled(fence))
> >               return;
> >



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
