From: John Harrison <john.c.harrison@intel.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	<Intel-GFX@Lists.FreeDesktop.Org>
Cc: DRI-Devel@Lists.FreeDesktop.Org
Subject: Re: [Intel-gfx] [PATCH 3/4] drm/i915/guc: Look for a guilty context when an engine reset fails
Date: Thu, 12 Jan 2023 12:59:30 -0800	[thread overview]
Message-ID: <8531ce98-78af-d818-b5bb-0af753a026d3@intel.com> (raw)
In-Reply-To: <393edad8-fa78-4b28-46ac-86da56d03de0@linux.intel.com>

On 1/12/2023 02:15, Tvrtko Ursulin wrote:
> On 12/01/2023 02:53, John.C.Harrison@Intel.com wrote:
>> From: John Harrison <John.C.Harrison@Intel.com>
>>
>> Engine resets are supposed to never fail. But in the case when one
>> does (due to unknown reasons that normally come down to a missing
>> w/a), it is useful to get as much information out of the system as
>> possible. Given that the GuC effectively dies in such a situation, it
>> is not possible to get a guilty context notification back. So do a
>> manual search instead. Given that GuC is dead, this is safe because
>> GuC won't be changing the engine state asynchronously.
>>
>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>> ---
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c   | 17 +++++++++++++++--
>>   1 file changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index b436dd7f12e42..99d09e3394597 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -4754,11 +4754,24 @@ static void reset_fail_worker_func(struct work_struct *w)
>>       guc->submission_state.reset_fail_mask = 0;
>>       spin_unlock_irqrestore(&guc->submission_state.lock, flags);
>>   -    if (likely(reset_fail_mask))
>> +    if (likely(reset_fail_mask)) {
>> +        struct intel_engine_cs *engine;
>> +        enum intel_engine_id id;
>> +
>> +        /*
>> +         * GuC is toast at this point - it dead loops after sending the failed
>> +         * reset notification. So need to manually determine the guilty context.
>> +         * Note that it should be safe/reliable to do this here because the GuC
>> +         * is toast and will not be scheduling behind the KMD's back.
>> +         */
>> +        for_each_engine_masked(engine, gt, reset_fail_mask, id)
>> +            intel_guc_find_hung_context(engine);
>> +
>>           intel_gt_handle_error(gt, reset_fail_mask,
>>                         I915_ERROR_CAPTURE,
>> -                      "GuC failed to reset engine mask=0x%x\n",
>> +                      "GuC failed to reset engine mask=0x%x",
>>                         reset_fail_mask);
>> +    }
>>   }
>>     int intel_guc_engine_failure_process_msg(struct intel_guc *guc,
>
> This one I don't feel "at home" enough to r-b. Just a question - can 
> we be sure at this point that GuC is 100% stuck and there isn't a 
> chance it somehow comes alive and starts running in parallel (being 
> driven in parallel by a different "thread" in i915), interfering with 
> the assumption made in the comment?
The GuC API definition for the engine reset failure notification is that 
GuC will dead loop itself after sending - to quote "This is a 
catastrophic failure that requires a full GT reset, or FLR to recover.". 
So yes, GuC is 100% stuck and is not going to self recover. Guaranteed. 
If that changes in the future then that would be a backwards-breaking 
API change and would require a corresponding driver update to support 
the new GuC firmware version.

There is the potential for a GT reset to maybe occur in parallel and 
resurrect the GuC that way. Not sure how that could happen though. The 
heartbeat timeout is significantly longer than the GuC's pre-emption 
timeout + engine reset timeout. That just leaves manual resets from the 
user or maybe from a selftest. If the user is manually poking reset 
debugfs files then it is already known that all bets are off in terms of 
getting an accurate error capture. And if a selftest is triggering GT 
resets in parallel with engine resets then either it is a broken test or 
it is attempting to test an evil corner case in which it is expected 
that error capture results will be unreliable. Having said all that, 
given that the submission_state lock is held here, such a GT reset would 
not get very far in bring the GuC back up anyway. Certainly, it would 
not be able to get as far as submitting new work and thus potentially 
changing the engine state.

So yes, if multiple impossible events occur back to back then the error 
capture may be wonky. Where wonky means a potentially innocent 
context/request gets blamed for breaking the hardware. Oh dear. I can 
live with that.

John.


>
> Regards,
>
> Tvrtko

