From: Chris Wilson <chris@chris-wilson.co.uk>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 11/15] drm/i915/execlists: Cancel banned contexts on schedule-out
Date: Mon, 14 Oct 2019 13:06:23 +0100	[thread overview]
Message-ID: <157105478333.18859.11636359770694964440@skylake-alporthouse-com> (raw)
In-Reply-To: <8b030734-330f-49e1-cbd0-d7d44c965983@linux.intel.com>

Quoting Tvrtko Ursulin (2019-10-14 13:00:01)
> 
> On 14/10/2019 10:07, Chris Wilson wrote:
> > On schedule-out (CS completion) of a banned context, scrub the context
> > image so that we do not replay the active payload. The intent is that we
> > skip banned payloads on request submission so that the timeline
> > advancement continues on in the background. However, if we are returning
> > to a preempted request, i915_request_skip() is ineffective and instead we
> > need to patch up the context image so that it continues from the start
> > of the next request.
> > 
> > v2: Fixup cancellation so that we only scrub the payload of the active
> > request and do not short-circuit the breadcrumbs (which might cause
> > other contexts to execute out of order).
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_lrc.c    | 129 ++++++++----
> >   drivers/gpu/drm/i915/gt/selftest_lrc.c | 273 +++++++++++++++++++++++++
> >   2 files changed, 361 insertions(+), 41 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index e16ede75412b..b76b35194114 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -234,6 +234,9 @@ static void execlists_init_reg_state(u32 *reg_state,
> >                                    const struct intel_engine_cs *engine,
> >                                    const struct intel_ring *ring,
> >                                    bool close);
> > +static void
> > +__execlists_update_reg_state(const struct intel_context *ce,
> > +                          const struct intel_engine_cs *engine);
> >   
> >   static void cancel_timer(struct timer_list *t)
> >   {
> > @@ -270,6 +273,31 @@ static void mark_eio(struct i915_request *rq)
> >       i915_request_mark_complete(rq);
> >   }
> >   
> > +static struct i915_request *active_request(struct i915_request *rq)
> > +{
> > +     const struct intel_context * const ce = rq->hw_context;
> > +     struct i915_request *active = NULL;
> > +     struct list_head *list;
> > +
> > +     if (!i915_request_is_active(rq)) /* unwound, but incomplete! */
> > +             return rq;
> > +
> > +     rcu_read_lock();
> > +     list = &rcu_dereference(rq->timeline)->requests;
> > +     list_for_each_entry_from_reverse(rq, list, link) {
> > +             if (i915_request_completed(rq))
> > +                     break;
> > +
> > +             if (rq->hw_context != ce)
> > +                     break;
> > +
> > +             active = rq;
> > +     }
> > +     rcu_read_unlock();
> > +
> > +     return active;
> > +}
> > +
> >   static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
> >   {
> >       return (i915_ggtt_offset(engine->status_page.vma) +
> > @@ -991,6 +1019,56 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> >               tasklet_schedule(&ve->base.execlists.tasklet);
> >   }
> >   
> > +static void restore_default_state(struct intel_context *ce,
> > +                               struct intel_engine_cs *engine)
> > +{
> > +     u32 *regs = ce->lrc_reg_state;
> > +
> > +     if (engine->pinned_default_state)
> > +             memcpy(regs, /* skip restoring the vanilla PPHWSP */
> > +                    engine->pinned_default_state + LRC_STATE_PN * PAGE_SIZE,
> > +                    engine->context_size - PAGE_SIZE);
> > +
> > +     execlists_init_reg_state(regs, ce, engine, ce->ring, false);
> > +}
> > +
> > +static void cancel_active(struct i915_request *rq,
> > +                       struct intel_engine_cs *engine)
> > +{
> > +     struct intel_context * const ce = rq->hw_context;
> > +
> > +     /*
> > +      * The executing context has been cancelled. Fixup the context so that
> > +      * it will be marked as incomplete [-EIO] upon resubmission and not
> > +      * execute any user payloads. We preserve the breadcrumbs and
> > +      * semaphores of the incomplete requests so that inter-timeline
> > +      * dependencies (i.e other timelines) remain correctly ordered.
> > +      *
> > +      * See __i915_request_submit() for applying -EIO and removing the
> > +      * payload on resubmission.
> > +      */
> > +     GEM_TRACE("%s(%s): { rq=%llx:%lld }\n",
> > +               __func__, engine->name, rq->fence.context, rq->fence.seqno);
> > +     __context_pin_acquire(ce);
> > +
> > +     /* On resubmission of the active request, payload will be scrubbed */
> > +     rq = active_request(rq);
> > +     if (rq)
> > +             ce->ring->head = intel_ring_wrap(ce->ring, rq->head);
> 
> Without this change, where would the head be pointing after 
> schedule_out? Somewhere in the middle of the active request? And with 
> this change it is rewound to the start of it?

RING_HEAD could be anywhere between rq->head and rq->tail. We could
just leave it be, but we would at least have to reset it if it had
advanced past rq->postfix (as we rewrite those instructions). It's
simpler to reset it back to the start of the request and scrub the
payload.
-Chris
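
For context, the rewind-and-scrub being discussed can be modelled in
miniature. Below is a hypothetical, self-contained toy sketch, not
driver code: ring_wrap() stands in for intel_ring_wrap(),
cancel_active_toy() condenses cancel_active() together with the payload
scrub that the real driver defers to __i915_request_submit(), and the
offsets in main() are invented.

/*
 * Toy model of the rewind-and-scrub step (hypothetical, standalone;
 * names only mirror the patch). The ring is a power-of-two buffer of
 * 32-bit instructions; head is rewound to the start of the incomplete
 * request and its payload overwritten with no-ops, leaving the
 * breadcrumb at rq_postfix intact so other timelines stay ordered.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4096u /* bytes; power of two, as for the LRC ring */
#define MI_NOOP 0u      /* a no-op instruction is all zeroes */

struct toy_ring {
	uint32_t head; /* byte offset of the next instruction to execute */
	uint32_t tail; /* byte offset one past the last emitted instruction */
	uint32_t buf[RING_SIZE / sizeof(uint32_t)];
};

/* Stand-in for intel_ring_wrap(): mask a byte offset into the ring. */
static uint32_t ring_wrap(const struct toy_ring *ring, uint32_t off)
{
	return off & (sizeof(ring->buf) - 1);
}

/*
 * Condensed cancel: rewind head to the start of the request, then
 * replace the payload [rq_head, rq_postfix) with no-ops. In the real
 * driver the scrub happens on resubmission, in __i915_request_submit().
 */
static void cancel_active_toy(struct toy_ring *ring,
			      uint32_t rq_head, uint32_t rq_postfix)
{
	uint32_t off = ring_wrap(ring, rq_head);

	ring->head = off;
	while (off != ring_wrap(ring, rq_postfix)) {
		ring->buf[off / sizeof(uint32_t)] = MI_NOOP;
		off = ring_wrap(ring, off + sizeof(uint32_t));
	}
}

int main(void)
{
	struct toy_ring ring = { .head = 128, .tail = 256 };

	/* Pretend the banned request spans bytes 64..192 of the ring. */
	cancel_active_toy(&ring, 64, 192);
	printf("head rewound to %u\n", (unsigned)ring.head);
	return 0;
}

The point, per the reply above, is that head is reset unconditionally
to the start of the request (rq->head), rather than only when it has
run past rq->postfix.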