From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/i915/execlists: Tweak virtual unsubmission
Date: Mon, 14 Oct 2019 14:15:28 +0100	[thread overview]
Message-ID: <b1c38729-a4e7-109d-eb06-c725309b97f0@linux.intel.com> (raw)
In-Reply-To: <157104719244.18859.1213204779926721229@skylake-alporthouse-com>


On 14/10/2019 10:59, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-10-14 10:50:25)
>>
>> On 14/10/2019 10:41, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2019-10-14 10:34:31)
>>>>
>>>> On 13/10/2019 21:30, Chris Wilson wrote:
>>>>> Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
>>>>> overtaking each other on preemption") we have restricted requests to run
>>>>> on their chosen engine across preemption events. We can take this
>>>>> restriction into account to know that we will want to resubmit those
>>>>> requests onto the same physical engine, and so can short-circuit the
>>>>> virtual engine selection process and keep the request on the same
>>>>> engine during unwind.
>>>>>
>>>>> References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
>>>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>> ---
>>>>>     drivers/gpu/drm/i915/gt/intel_lrc.c | 6 +++---
>>>>>     drivers/gpu/drm/i915/i915_request.c | 2 +-
>>>>>     2 files changed, 4 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> index e6bf633b48d5..03732e3f5ec7 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
>>>>> @@ -895,7 +895,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>         list_for_each_entry_safe_reverse(rq, rn,
>>>>>                                          &engine->active.requests,
>>>>>                                          sched.link) {
>>>>> -             struct intel_engine_cs *owner;
>>>>>     
>>>>>                 if (i915_request_completed(rq))
>>>>>                         continue; /* XXX */
>>>>> @@ -910,8 +909,7 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>                  * engine so that it can be moved across onto another physical
>>>>>                  * engine as load dictates.
>>>>>                  */
>>>>> -             owner = rq->hw_context->engine;
>>>>> -             if (likely(owner == engine)) {
>>>>> +             if (likely(rq->execution_mask == engine->mask)) {
>>>>>                         GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID);
>>>>>                         if (rq_prio(rq) != prio) {
>>>>>                                 prio = rq_prio(rq);
>>>>> @@ -922,6 +920,8 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
>>>>>                         list_move(&rq->sched.link, pl);
>>>>>                         active = rq;
>>>>>                 } else {
>>>>> +                     struct intel_engine_cs *owner = rq->hw_context->engine;
>>>>
>>>> I guess there is some benefit in doing fewer operations as long as we
>>>> are fixing the engine anyway (at the moment at least).
>>>>
>>>> However on this branch here the concern was request completion racing
>>>> with preemption handling and with this change the breadcrumb will not
>>>> get canceled any longer and may get signaled on the virtual engine.
>>>> Which then leads to the explosion this branch fixed. At least that's
>>>> what I remembered, in combination with the comment below.
>>>
>>> No, we don't change back to the virtual engine, so that is not an issue.
>>> The problem was only because of the rq->engine = owner where the
>>> breadcrumbs were still on the previous engine lists and assumed to be
>>> under that engine->breadcrumbs.lock (but would in future be assumed to be
>>> under rq->engine->breadcrumbs.lock).
>>
>> Breadcrumb signaling can only be set up on the physical engine? Hmm, it
>> must be fine, since without preemption that would be exactly the scenario.
>> Okay, I see there is an r-b from Ram already, so no need for another one.
> 
> With no disrespect to Ram, as the expert you raised a technical point that
> I would be happier to record as resolved with an r-b from yourself.

I went back to the patch I reviewed in July and it checks out.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 11+ messages
2019-10-13 20:30 [PATCH] drm/i915/execlists: Tweak virtual unsubmission Chris Wilson
2019-10-13 20:37 ` ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2019-10-14  9:28 ` [PATCH] " Ramalingam C
2019-10-14  9:45   ` Chris Wilson
2019-10-14  9:34 ` Tvrtko Ursulin
2019-10-14  9:41   ` Chris Wilson
2019-10-14  9:50     ` Tvrtko Ursulin
2019-10-14  9:59       ` Chris Wilson
2019-10-14 13:15         ` Tvrtko Ursulin [this message]
2019-10-14  9:42   ` Chris Wilson
2019-10-14 16:14 ` ✗ Fi.CI.BUILD: failure for " Patchwork
