From: Chris Wilson <chris@chris-wilson.co.uk>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 7/8] drm/i915/gt: Decouple inflight virtual engines
Date: Mon, 18 May 2020 16:40:15 +0100	[thread overview]
Message-ID: <158981641568.7442.12836609150291014605@build.alporthouse.com> (raw)
In-Reply-To: <3878c571-9353-67f7-b979-9d03209fa8c4@linux.intel.com>

Quoting Tvrtko Ursulin (2020-05-18 15:55:46)
> 
> On 18/05/2020 14:00, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-05-18 13:53:29)
> >>
> >> On 18/05/2020 09:14, Chris Wilson wrote:
> >>> Once a virtual engine has been bound to a sibling, it will remain bound
> >>> until we finally schedule out the last active request. We cannot rebind
> >>> the context to a new sibling while it is inflight as the context save
> >>> will conflict, hence we wait. As we cannot then use any other sibling
> >>> while the context is inflight, only kick the bound sibling while it is
> >>> inflight, and upon scheduling out kick the rest (so that we can swap
> >>> engines on timeslicing if the previously bound engine becomes
> >>> oversubscribed).
> >>>
> >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> ---
> >>>    drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++++++++++++----------------
> >>>    1 file changed, 13 insertions(+), 17 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> index 7a5ac3375225..fe8f3518d6b8 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> >>> @@ -1398,9 +1398,8 @@ execlists_schedule_in(struct i915_request *rq, int idx)
> >>>    static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> >>>    {
> >>>        struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
> >>> -     struct i915_request *next = READ_ONCE(ve->request);
> >>>    
> >>> -     if (next == rq || (next && next->execution_mask & ~rq->execution_mask))
> >>> +     if (READ_ONCE(ve->request))
> >>>                tasklet_hi_schedule(&ve->base.execlists.tasklet);
> >>>    }
> >>>    
> >>> @@ -1821,18 +1820,14 @@ first_virtual_engine(struct intel_engine_cs *engine)
> >>>                        rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
> >>>                struct i915_request *rq = READ_ONCE(ve->request);
> >>>    
> >>> -             if (!rq) { /* lazily cleanup after another engine handled rq */
> >>> +             /* lazily cleanup after another engine handled rq */
> >>> +             if (!rq || !virtual_matches(ve, rq, engine)) {
> >>>                        rb_erase_cached(rb, &el->virtual);
> >>>                        RB_CLEAR_NODE(rb);
> >>>                        rb = rb_first_cached(&el->virtual);
> >>>                        continue;
> >>>                }
> >>>    
> >>> -             if (!virtual_matches(ve, rq, engine)) {
> >>> -                     rb = rb_next(rb);
> >>> -                     continue;
> >>> -             }
> >>> -
> >>>                return ve;
> >>>        }
> >>>    
> >>> @@ -5478,7 +5473,6 @@ static void virtual_submission_tasklet(unsigned long data)
> >>>        if (unlikely(!mask))
> >>>                return;
> >>>    
> >>> -     local_irq_disable();
> >>>        for (n = 0; n < ve->num_siblings; n++) {
> >>>                struct intel_engine_cs *sibling = READ_ONCE(ve->siblings[n]);
> >>>                struct ve_node * const node = &ve->nodes[sibling->id];
> >>> @@ -5488,20 +5482,19 @@ static void virtual_submission_tasklet(unsigned long data)
> >>>                if (!READ_ONCE(ve->request))
> >>>                        break; /* already handled by a sibling's tasklet */
> >>>    
> >>> +             spin_lock_irq(&sibling->active.lock);
> >>> +
> >>>                if (unlikely(!(mask & sibling->mask))) {
> >>>                        if (!RB_EMPTY_NODE(&node->rb)) {
> >>> -                             spin_lock(&sibling->active.lock);
> >>>                                rb_erase_cached(&node->rb,
> >>>                                                &sibling->execlists.virtual);
> >>>                                RB_CLEAR_NODE(&node->rb);
> >>> -                             spin_unlock(&sibling->active.lock);
> >>>                        }
> >>> -                     continue;
> >>> -             }
> >>>    
> >>> -             spin_lock(&sibling->active.lock);
> >>> +                     goto unlock_engine;
> >>> +             }
> >>>    
> >>> -             if (!RB_EMPTY_NODE(&node->rb)) {
> >>> +             if (unlikely(!RB_EMPTY_NODE(&node->rb))) {
> >>>                        /*
> >>>                         * Cheat and avoid rebalancing the tree if we can
> >>>                         * reuse this node in situ.
> >>> @@ -5541,9 +5534,12 @@ static void virtual_submission_tasklet(unsigned long data)
> >>>                if (first && prio >= sibling->execlists.queue_priority_hint)
> >>>                        tasklet_hi_schedule(&sibling->execlists.tasklet);
> >>>    
> >>> -             spin_unlock(&sibling->active.lock);
> >>> +unlock_engine:
> >>> +             spin_unlock_irq(&sibling->active.lock);
> >>> +
> >>> +             if (intel_context_inflight(&ve->context))
> >>> +                     break;
> >>
> >> So the virtual request may no longer be added to all siblings. Will it
> >> still be possible to schedule it on any of them if time slicing kicks in
> >> under these conditions?
> > 
> > Yes.
> >   
> >> This is equivalent to the hunk in first_virtual_engine which also
> >> removes it from all other siblings.
> >>
> >> I guess it's in line with what the commit message says - that a new
> >> sibling will be picked upon time slicing. I just don't quite see the
> >> path which would do it. The only path which shuffles the siblings array
> >> around is in dequeue, and dequeue on an engine other than the one which
> >> first picked it will no longer happen. I must be missing something...
> > 
> > It all happens in execlists_schedule_out. During timeslicing we call
> > unwind_incomplete_requests, which moves the requests back to the priotree
> > (and, with this patch, back to the virtual engine).
> > 
> > But... We cannot use the virtual request on any other engine until it has
> > been scheduled out. That only happens after we complete execlists_dequeue()
> > and submit a new ELSP[]. Once the HW acks the change, we call
> > execlists_schedule_out on the virtual request.
> > 
> > Now we know that intel_context_inflight() will return false, so any
> > engine can pick up the request, and so it's time to kick the virtual
> > tasklet and in turn kick all the siblings.
> > 
> > So timeslicing works by not submitting the virtual request again; when
> > it expires on this sibling[0], we wake up all the other siblings and the
> > first one that is idle wins the race.
> 
> If a virtual request is on the hw and its timeslice expires:
> 
> 1. Unwinds the request.
>        -> kicks the virtual tasklet
> 2. Virtual tasklet runs and puts the request back on siblings.
>        -> kicks the physical tasklets
> 3. Siblings tasklet runs and submits the request.
> 
> So two tasklets of latency even if there are no other runnable requests?

Yes. It's worse than that when you look at it. Since the other siblings'
execlists_submission_tasklets will run *before* we schedule out, they do
nothing, as they cannot use the virtual request. So they only do useful
work when we kick them again after the schedule-out.
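
Roughly, the gate the siblings run into looks like this (a simplified
sketch paraphrasing the virtual_matches()/intel_context_inflight() check,
not the exact i915 code; the helper name is made up):

static bool can_use_virtual(const struct virtual_engine *ve,
                            const struct i915_request *rq,
                            const struct intel_engine_cs *engine)
{
        const struct intel_engine_cs *inflight;

        /* The physical engine must be in the request's execution mask. */
        if (!(rq->execution_mask & engine->mask))
                return false;

        /*
         * While the virtual context is inflight, its context image is
         * owned by the engine it was submitted to until the context save
         * completes on schedule-out; no other sibling may take the request.
         */
        inflight = intel_context_inflight(&ve->context);
        if (inflight && inflight != engine)
                return false;

        return true;
}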

This was "bug" number 3 I mentioned you found with your tiny snippet.
In the next patch we optimise away the ineffective timeslice preemptions
of the virtual request. The inter-engine inefficiencies are inherent to
the system though -- we can only schedule the other engines once we have
scheduled out after the context save of the virtual engine.
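
To put the whole chain in one place, here is a sketch of the flow after
this patch (the function names are the real ones from intel_lrc.c, the
step annotations are mine):

/*
 * Timeslice expiry on the bound sibling, end to end:
 *
 *   timeslice expires on sibling[0]
 *     -> execlists_dequeue() unwinds the incomplete virtual request
 *        (unwind_incomplete_requests(); ve->request is set again)
 *     -> a new ELSP[] is submitted and the HW acks the change
 *     -> execlists_schedule_out() runs for the virtual request
 *          -> kick_siblings() sees ve->request and schedules the
 *             virtual engine's tasklet
 *     -> virtual_submission_tasklet() queues the request on the
 *        siblings and kicks their execlists tasklets
 *     -> the first idle sibling to dequeue the request wins the race
 */
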
-Chris


Thread overview: 33+ messages
2020-05-18  8:14 [Intel-gfx] [PATCH 1/8] drm/i915: Move saturated workload detection back to the context Chris Wilson
2020-05-18  8:14 ` [Intel-gfx] [PATCH 2/8] drm/i915/selftests: Add tests for timeslicing virtual engines Chris Wilson
2020-05-18 10:12   ` Tvrtko Ursulin
2020-05-18 10:21     ` Chris Wilson
2020-05-18  8:14 ` [Intel-gfx] [PATCH 3/8] drm/i915/gt: Reuse the tasklet priority for virtual as their siblings Chris Wilson
2020-05-18 10:13   ` Tvrtko Ursulin
2020-05-18  8:14 ` [Intel-gfx] [PATCH 4/8] drm/i915/gt: Kick virtual siblings on timeslice out Chris Wilson
2020-05-18 10:29   ` Tvrtko Ursulin
2020-05-18  8:14 ` [Intel-gfx] [PATCH 5/8] drm/i915/gt: Incorporate the virtual engine into timeslicing Chris Wilson
2020-05-18 10:36   ` Tvrtko Ursulin
2020-05-18 10:38     ` Chris Wilson
2020-05-18  8:14 ` [Intel-gfx] [PATCH 6/8] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
2020-05-18 10:51   ` Tvrtko Ursulin
2020-05-18 10:57     ` Chris Wilson
2020-05-18 12:33   ` [Intel-gfx] [PATCH v2] " Chris Wilson
2020-05-18 13:01     ` Tvrtko Ursulin
2020-05-18 13:09       ` Chris Wilson
2020-05-18  8:14 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
2020-05-18 12:53   ` Tvrtko Ursulin
2020-05-18 13:00     ` Chris Wilson
2020-05-18 14:55       ` Tvrtko Ursulin
2020-05-18 15:40         ` Chris Wilson [this message]
2020-05-18 15:48           ` Chris Wilson
2020-05-18  8:14 ` [Intel-gfx] [PATCH 8/8] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
2020-05-18  9:53 ` [Intel-gfx] [PATCH 1/8] drm/i915: Move saturated workload detection back to the context Tvrtko Ursulin
2020-05-18 10:11   ` Chris Wilson
2020-05-18 11:55 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] " Patchwork
2020-05-18 11:56 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-05-18 12:17 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-05-18 15:55 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/8] drm/i915: Move saturated workload detection back to the context (rev2) Patchwork
2020-05-18 15:57 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-05-18 16:28 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2020-05-18  7:57 [Intel-gfx] [PATCH 1/8] drm/i915: Move saturated workload detection back to the context Chris Wilson
2020-05-18  7:57 ` [Intel-gfx] [PATCH 7/8] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
