All of lore.kernel.org
* [PATCH] drm/i915/guc: Log engine resets
@ 2021-12-14 15:07 ` Tvrtko Ursulin
  0 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-14 15:07 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Matthew Brost, John Harrison, dri-devel, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Log engine resets done by the GuC firmware in a similar way to how it is
done by the execlists backend.

This way we have a notion of where the hangs are before the GuC gains
support for proper error capture.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 97311119da6f..51512123dc1a 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -11,6 +11,7 @@
 #include "gt/intel_context.h"
 #include "gt/intel_engine_pm.h"
 #include "gt/intel_engine_heartbeat.h"
+#include "gt/intel_engine_user.h"
 #include "gt/intel_gpu_commands.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_clock_utils.h"
@@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
 {
 	struct intel_gt *gt = guc_to_gt(guc);
 	struct drm_i915_private *i915 = gt->i915;
-	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
+	struct intel_engine_cs *engine = ce->engine;
 	intel_wakeref_t wakeref;
 
+	if (intel_engine_is_virtual(engine)) {
+		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
+			   intel_engine_class_repr(engine->class),
+			   engine->mask);
+		engine = guc_virtual_get_sibling(engine, 0);
+	} else {
+		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);
+	}
+
 	intel_engine_set_hung_context(engine, ce);
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
 		i915_capture_error_state(gt, engine->mask);
-- 
2.32.0




* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/guc: Log engine resets
  2021-12-14 15:07 ` [Intel-gfx] " Tvrtko Ursulin
@ 2021-12-14 16:33 ` Patchwork
  -1 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2021-12-14 16:33 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx


== Series Details ==

Series: drm/i915/guc: Log engine resets
URL   : https://patchwork.freedesktop.org/series/98020/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11001 -> Patchwork_21846
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/index.html

Participating hosts (46 -> 35)
------------------------------

  Additional (1): fi-kbl-soraka 
  Missing    (12): bat-dg1-6 bat-dg1-5 fi-hsw-4200u fi-icl-u2 fi-bsw-cyan bat-adlp-6 bat-adlp-4 fi-ctg-p8600 fi-pnv-d510 fi-bdw-samus bat-jsl-2 bat-jsl-1 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_21846:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_selftest@live@hangcheck:
    - {fi-jsl-1}:         [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-jsl-1/igt@i915_selftest@live@hangcheck.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-jsl-1/igt@i915_selftest@live@hangcheck.html

  
Known issues
------------

  Here are the changes found in Patchwork_21846 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fence@basic-busy@bcs0:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][3] ([fdo#109271]) +2 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-kbl-soraka/igt@gem_exec_fence@basic-busy@bcs0.html

  * igt@gem_exec_suspend@basic-s0:
    - fi-kbl-soraka:      NOTRUN -> [INCOMPLETE][4] ([i915#4782])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-kbl-soraka/igt@gem_exec_suspend@basic-s0.html

  * igt@i915_selftest@live@execlists:
    - fi-bsw-kefka:       [PASS][5] -> [INCOMPLETE][6] ([i915#2940])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-bsw-kefka/igt@i915_selftest@live@execlists.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-bsw-kefka/igt@i915_selftest@live@execlists.html
    - fi-bsw-n3050:       [PASS][7] -> [INCOMPLETE][8] ([i915#2940])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-bsw-n3050/igt@i915_selftest@live@execlists.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-bsw-n3050/igt@i915_selftest@live@execlists.html

  * igt@i915_selftest@live@hangcheck:
    - fi-snb-2600:        [PASS][9] -> [INCOMPLETE][10] ([i915#3921])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-snb-2600/igt@i915_selftest@live@hangcheck.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-snb-2600/igt@i915_selftest@live@hangcheck.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-b:
    - fi-cfl-8109u:       [PASS][11] -> [DMESG-WARN][12] ([i915#295]) +12 similar issues
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-cfl-8109u/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-b.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-cfl-8109u/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-b.html

  * igt@runner@aborted:
    - fi-bsw-kefka:       NOTRUN -> [FAIL][13] ([fdo#109271] / [i915#1436] / [i915#3428] / [i915#4312])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-bsw-kefka/igt@runner@aborted.html
    - fi-bsw-n3050:       NOTRUN -> [FAIL][14] ([fdo#109271] / [i915#1436] / [i915#3428] / [i915#4312])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-bsw-n3050/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@gt_lrc:
    - fi-rkl-11600:       [DMESG-FAIL][15] ([i915#2373]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/fi-rkl-11600/igt@i915_selftest@live@gt_lrc.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/fi-rkl-11600/igt@i915_selftest@live@gt_lrc.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#2373]: https://gitlab.freedesktop.org/drm/intel/issues/2373
  [i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
  [i915#295]: https://gitlab.freedesktop.org/drm/intel/issues/295
  [i915#3428]: https://gitlab.freedesktop.org/drm/intel/issues/3428
  [i915#3921]: https://gitlab.freedesktop.org/drm/intel/issues/3921
  [i915#3970]: https://gitlab.freedesktop.org/drm/intel/issues/3970
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4782]: https://gitlab.freedesktop.org/drm/intel/issues/4782


Build changes
-------------

  * Linux: CI_DRM_11001 -> Patchwork_21846

  CI-20190529: 20190529
  CI_DRM_11001: 1fe82991d64d5fa0cd37387f11d01c8f78ee5042 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6307: be84fe4f151bc092e068cab5cd0cd19c34948b40 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_21846: bed926281ba07ebae243a4128f660c94fd79a317 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

bed926281ba0 drm/i915/guc: Log engine resets

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/index.html



* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915/guc: Log engine resets
  2021-12-14 15:07 ` [Intel-gfx] " Tvrtko Ursulin
@ 2021-12-14 22:25 ` Patchwork
  -1 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2021-12-14 22:25 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx


== Series Details ==

Series: drm/i915/guc: Log engine resets
URL   : https://patchwork.freedesktop.org/series/98020/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11001_full -> Patchwork_21846_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in Patchwork_21846_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-kbl:          NOTRUN -> [FAIL][1] ([i915#2842])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-kbl:          [PASS][2] -> [FAIL][3] ([i915#2842]) +2 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-kbl4/igt@gem_exec_fair@basic-none@vcs0.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl2/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [PASS][4] -> [FAIL][5] ([i915#2842]) +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@bcs0:
    - shard-iclb:         [PASS][6] -> [FAIL][7] ([i915#2842])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb7/igt@gem_exec_fair@basic-pace@bcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb8/igt@gem_exec_fair@basic-pace@bcs0.html

  * igt@gem_exec_fair@basic-pace@vcs0:
    - shard-kbl:          [PASS][8] -> [SKIP][9] ([fdo#109271])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-kbl2/igt@gem_exec_fair@basic-pace@vcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl4/igt@gem_exec_fair@basic-pace@vcs0.html

  * igt@gem_lmem_swapping@parallel-multi:
    - shard-skl:          NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#4613])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl1/igt@gem_lmem_swapping@parallel-multi.html

  * igt@gem_lmem_swapping@random:
    - shard-apl:          NOTRUN -> [SKIP][11] ([fdo#109271] / [i915#4613]) +2 similar issues
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl2/igt@gem_lmem_swapping@random.html
    - shard-kbl:          NOTRUN -> [SKIP][12] ([fdo#109271] / [i915#4613])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@gem_lmem_swapping@random.html

  * igt@gem_lmem_swapping@smem-oom:
    - shard-glk:          NOTRUN -> [SKIP][13] ([fdo#109271] / [i915#4613]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk5/igt@gem_lmem_swapping@smem-oom.html
    - shard-tglb:         NOTRUN -> [SKIP][14] ([i915#4613])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@gem_lmem_swapping@smem-oom.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-kbl:          NOTRUN -> [WARN][15] ([i915#2658])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@gem_pwrite@basic-exhaustion.html
    - shard-apl:          NOTRUN -> [WARN][16] ([i915#2658])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl2/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_pxp@fail-invalid-protected-context:
    - shard-tglb:         NOTRUN -> [SKIP][17] ([i915#4270])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@gem_pxp@fail-invalid-protected-context.html
    - shard-iclb:         NOTRUN -> [SKIP][18] ([i915#4270])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@gem_pxp@fail-invalid-protected-context.html

  * igt@gen3_mixed_blits:
    - shard-tglb:         NOTRUN -> [SKIP][19] ([fdo#109289])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb3/igt@gen3_mixed_blits.html

  * igt@gen9_exec_parse@batch-zero-length:
    - shard-tglb:         NOTRUN -> [SKIP][20] ([i915#2856]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb3/igt@gen9_exec_parse@batch-zero-length.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [PASS][21] -> [FAIL][22] ([i915#454])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-tglb:         NOTRUN -> [SKIP][23] ([i915#4281])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@i915_pm_dc@dc9-dpms.html
    - shard-iclb:         [PASS][24] -> [SKIP][25] ([i915#4281])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb6/igt@i915_pm_dc@dc9-dpms.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@i915_pm_dc@dc9-dpms.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][26] ([fdo#109271] / [i915#3777])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-270:
    - shard-tglb:         NOTRUN -> [SKIP][27] ([fdo#111614]) +1 similar issue
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-glk:          NOTRUN -> [SKIP][28] ([fdo#109271] / [i915#3777]) +1 similar issue
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
    - shard-apl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3777]) +2 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl7/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][30] ([fdo#111615])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_big_fb@yf-tiled-32bpp-rotate-90.html

  * igt@kms_ccs@pipe-a-ccs-on-another-bo-yf_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][31] ([fdo#111615] / [i915#3689])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_ccs@pipe-a-ccs-on-another-bo-yf_tiled_ccs.html

  * igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][32] ([fdo#109271] / [i915#3886]) +5 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl6/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
    - shard-glk:          NOTRUN -> [SKIP][33] ([fdo#109271] / [i915#3886]) +2 similar issues
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk5/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][34] ([fdo#109271] / [i915#3886]) +3 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs_cc:
    - shard-skl:          NOTRUN -> [SKIP][35] ([fdo#109271] / [i915#3886])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][36] ([i915#3689])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_ccs.html

  * igt@kms_ccs@pipe-c-bad-pixel-format-y_tiled_gen12_mc_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][37] ([i915#3689] / [i915#3886])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_ccs@pipe-c-bad-pixel-format-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-d-crc-primary-basic-y_tiled_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][38] ([fdo#109271]) +81 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@kms_ccs@pipe-d-crc-primary-basic-y_tiled_ccs.html

  * igt@kms_chamelium@vga-edid-read:
    - shard-apl:          NOTRUN -> [SKIP][39] ([fdo#109271] / [fdo#111827]) +11 similar issues
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl7/igt@kms_chamelium@vga-edid-read.html

  * igt@kms_chamelium@vga-hpd-fast:
    - shard-skl:          NOTRUN -> [SKIP][40] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@kms_chamelium@vga-hpd-fast.html

  * igt@kms_color@pipe-c-ctm-0-75:
    - shard-skl:          [PASS][41] -> [DMESG-WARN][42] ([i915#1982])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl10/igt@kms_color@pipe-c-ctm-0-75.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl6/igt@kms_color@pipe-c-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-a-ctm-blue-to-red:
    - shard-kbl:          NOTRUN -> [SKIP][43] ([fdo#109271] / [fdo#111827]) +5 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@kms_color_chamelium@pipe-a-ctm-blue-to-red.html

  * igt@kms_color_chamelium@pipe-a-ctm-green-to-red:
    - shard-glk:          NOTRUN -> [SKIP][44] ([fdo#109271] / [fdo#111827]) +5 similar issues
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk5/igt@kms_color_chamelium@pipe-a-ctm-green-to-red.html

  * igt@kms_color_chamelium@pipe-c-degamma:
    - shard-tglb:         NOTRUN -> [SKIP][45] ([fdo#109284] / [fdo#111827]) +2 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_color_chamelium@pipe-c-degamma.html

  * igt@kms_content_protection@lic:
    - shard-apl:          NOTRUN -> [TIMEOUT][46] ([i915#1319])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl6/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-b-cursor-32x32-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([i915#3319]) +1 similar issue
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_cursor_crc@pipe-b-cursor-32x32-onscreen.html

  * igt@kms_cursor_crc@pipe-c-cursor-max-size-rapid-movement:
    - shard-tglb:         NOTRUN -> [SKIP][48] ([i915#3359]) +1 similar issue
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_cursor_crc@pipe-c-cursor-max-size-rapid-movement.html

  * igt@kms_cursor_crc@pipe-d-cursor-256x85-rapid-movement:
    - shard-glk:          NOTRUN -> [SKIP][49] ([fdo#109271]) +79 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk1/igt@kms_cursor_crc@pipe-d-cursor-256x85-rapid-movement.html

  * igt@kms_cursor_crc@pipe-d-cursor-512x512-rapid-movement:
    - shard-tglb:         NOTRUN -> [SKIP][50] ([fdo#109279] / [i915#3359])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_cursor_crc@pipe-d-cursor-512x512-rapid-movement.html

  * igt@kms_cursor_legacy@flip-vs-cursor-toggle:
    - shard-iclb:         [PASS][51] -> [FAIL][52] ([i915#2346])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb5/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html

  * igt@kms_cursor_legacy@pipe-d-torture-bo:
    - shard-apl:          NOTRUN -> [SKIP][53] ([fdo#109271] / [i915#533])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl6/igt@kms_cursor_legacy@pipe-d-torture-bo.html

  * igt@kms_flip@2x-flip-vs-dpms-off-vs-modeset-interruptible:
    - shard-tglb:         NOTRUN -> [SKIP][54] ([fdo#111825]) +11 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_flip@2x-flip-vs-dpms-off-vs-modeset-interruptible.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][55] -> [FAIL][56] ([i915#79])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-glk6/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk9/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@2x-flip-vs-panning-vs-hang:
    - shard-skl:          NOTRUN -> [SKIP][57] ([fdo#109271]) +36 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@kms_flip@2x-flip-vs-panning-vs-hang.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [PASS][58] -> [DMESG-WARN][59] ([i915#180]) +6 similar issues
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-kbl2/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl1/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_flip@plain-flip-ts-check-interruptible@c-edp1:
    - shard-skl:          [PASS][60] -> [FAIL][61] ([i915#2122]) +1 similar issue
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl8/igt@kms_flip@plain-flip-ts-check-interruptible@c-edp1.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl7/igt@kms_flip@plain-flip-ts-check-interruptible@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-skl:          NOTRUN -> [SKIP][62] ([fdo#109271] / [i915#2672])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_flip_tiling@flip-change-tiling@dp-1-pipe-a-y-to-yf-ccs:
    - shard-apl:          NOTRUN -> [DMESG-WARN][63] ([i915#1226])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl6/igt@kms_flip_tiling@flip-change-tiling@dp-1-pipe-a-y-to-yf-ccs.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109280]) +1 similar issue
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-suspend:
    - shard-skl:          [PASS][65] -> [INCOMPLETE][66] ([i915#123])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl4/igt@kms_frontbuffer_tracking@psr-suspend.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@kms_frontbuffer_tracking@psr-suspend.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-skl:          [PASS][67] -> [FAIL][68] ([i915#1188]) +1 similar issue
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl7/igt@kms_hdr@bpc-switch-dpms.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl6/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_hdr@static-swap:
    - shard-tglb:         NOTRUN -> [SKIP][69] ([i915#1187])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_hdr@static-swap.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - shard-glk:          NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#533]) +1 similar issue
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk5/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_pipe_crc_basic@disable-crc-after-crtc-pipe-d:
    - shard-iclb:         NOTRUN -> [SKIP][71] ([fdo#109278]) +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@kms_pipe_crc_basic@disable-crc-after-crtc-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-7efc:
    - shard-kbl:          NOTRUN -> [FAIL][72] ([fdo#108145] / [i915#265])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl6/igt@kms_plane_alpha_blend@pipe-b-alpha-7efc.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-basic:
    - shard-glk:          NOTRUN -> [FAIL][73] ([fdo#108145] / [i915#265])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk1/igt@kms_plane_alpha_blend@pipe-b-alpha-basic.html
    - shard-apl:          NOTRUN -> [FAIL][74] ([fdo#108145] / [i915#265])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl7/igt@kms_plane_alpha_blend@pipe-b-alpha-basic.html

  * igt@kms_plane_lowres@pipe-b-tiling-yf:
    - shard-tglb:         NOTRUN -> [SKIP][75] ([fdo#111615] / [fdo#112054])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb3/igt@kms_plane_lowres@pipe-b-tiling-yf.html

  * igt@kms_plane_lowres@pipe-c-tiling-y:
    - shard-tglb:         NOTRUN -> [SKIP][76] ([i915#3536])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_plane_lowres@pipe-c-tiling-y.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area:
    - shard-tglb:         NOTRUN -> [SKIP][77] ([i915#2920])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area.html
    - shard-glk:          NOTRUN -> [SKIP][78] ([fdo#109271] / [i915#658])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk5/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area:
    - shard-kbl:          NOTRUN -> [SKIP][79] ([fdo#109271] / [i915#658]) +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl3/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html
    - shard-apl:          NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#658])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html

  * igt@kms_psr@psr2_basic:
    - shard-tglb:         NOTRUN -> [FAIL][81] ([i915#132] / [i915#3467]) +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_psr@psr2_basic.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [PASS][82] -> [SKIP][83] ([fdo#109441]) +1 similar issue
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb4/igt@kms_psr@psr2_primary_page_flip.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-tglb:         NOTRUN -> [SKIP][84] ([fdo#109309])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@kms_tv_load_detect@load-detect.html

  * igt@kms_vblank@pipe-b-ts-continuation-suspend:
    - shard-apl:          [PASS][85] -> [DMESG-WARN][86] ([i915#180]) +1 similar issue
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-apl7/igt@kms_vblank@pipe-b-ts-continuation-suspend.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl1/igt@kms_vblank@pipe-b-ts-continuation-suspend.html

  * igt@nouveau_crc@pipe-b-ctx-flip-detection:
    - shard-tglb:         NOTRUN -> [SKIP][87] ([i915#2530]) +1 similar issue
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@nouveau_crc@pipe-b-ctx-flip-detection.html

  * igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame:
    - shard-apl:          NOTRUN -> [SKIP][88] ([fdo#109271]) +136 similar issues
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl7/igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame.html

  * igt@sysfs_clients@fair-0:
    - shard-skl:          NOTRUN -> [SKIP][89] ([fdo#109271] / [i915#2994])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl1/igt@sysfs_clients@fair-0.html

  * igt@sysfs_clients@fair-7:
    - shard-apl:          NOTRUN -> [SKIP][90] ([fdo#109271] / [i915#2994]) +1 similar issue
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl4/igt@sysfs_clients@fair-7.html

  * igt@sysfs_clients@pidname:
    - shard-glk:          NOTRUN -> [SKIP][91] ([fdo#109271] / [i915#2994]) +1 similar issue
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk1/igt@sysfs_clients@pidname.html

  * igt@sysfs_clients@recycle:
    - shard-kbl:          NOTRUN -> [SKIP][92] ([fdo#109271] / [i915#2994])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl6/igt@sysfs_clients@recycle.html

  * igt@sysfs_clients@sema-25:
    - shard-tglb:         NOTRUN -> [SKIP][93] ([i915#2994]) +1 similar issue
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb1/igt@sysfs_clients@sema-25.html

  
#### Possible fixes ####

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [TIMEOUT][94] ([i915#3063] / [i915#3648]) -> [PASS][95]
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-tglb3/igt@gem_eio@unwedge-stress.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb3/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_capture@pi@vcs0:
    - shard-skl:          [INCOMPLETE][96] ([i915#4547]) -> [PASS][97]
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl4/igt@gem_exec_capture@pi@vcs0.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl10/igt@gem_exec_capture@pi@vcs0.html

  * igt@gem_exec_endless@dispatch@vcs1:
    - shard-tglb:         [INCOMPLETE][98] -> [PASS][99]
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-tglb8/igt@gem_exec_endless@dispatch@vcs1.html
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-tglb6/igt@gem_exec_endless@dispatch@vcs1.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          [FAIL][100] ([i915#2846]) -> [PASS][101]
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-kbl1/igt@gem_exec_fair@basic-deadline.html
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl2/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-glk:          [FAIL][102] ([i915#2842]) -> [PASS][103]
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-glk8/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk7/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [FAIL][104] ([i915#2842]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb6/igt@gem_exec_fair@basic-none-share@rcs0.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb3/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-kbl:          [FAIL][106] ([i915#2842]) -> [PASS][107]
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-kbl2/igt@gem_exec_fair@basic-pace@rcs0.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-kbl4/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_whisper@basic-contexts-all:
    - shard-glk:          [DMESG-WARN][108] ([i915#118]) -> [PASS][109]
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-glk3/igt@gem_exec_whisper@basic-contexts-all.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk3/igt@gem_exec_whisper@basic-contexts-all.html

  * igt@gem_userptr_blits@huge-split:
    - shard-snb:          [FAIL][110] -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-snb5/igt@gem_userptr_blits@huge-split.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-snb6/igt@gem_userptr_blits@huge-split.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-iclb:         [FAIL][112] ([i915#454]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb1/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rpm@basic-pci-d3-state:
    - {shard-rkl}:        [SKIP][114] ([fdo#109308]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@i915_pm_rpm@basic-pci-d3-state.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@i915_pm_rpm@basic-pci-d3-state.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-apl:          [DMESG-WARN][116] ([i915#180]) -> [PASS][117] +4 similar issues
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-apl3/igt@i915_suspend@fence-restore-tiled2untiled.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-apl4/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@kms_atomic@plane-immutable-zpos:
    - {shard-rkl}:        [SKIP][118] ([i915#1845]) -> [PASS][119] +6 similar issues
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_atomic@plane-immutable-zpos.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_atomic@plane-immutable-zpos.html

  * igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs_cc:
    - {shard-rkl}:        [SKIP][120] ([i915#1845] / [i915#4098]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs_cc.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_cursor_crc@pipe-b-cursor-dpms:
    - {shard-rkl}:        [SKIP][122] ([fdo#112022] / [i915#4070]) -> [PASS][123] +1 similar issue
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_cursor_crc@pipe-b-cursor-dpms.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-dpms.html

  * igt@kms_cursor_edge_walk@pipe-b-64x64-right-edge:
    - {shard-rkl}:        [SKIP][124] ([i915#1849] / [i915#4070]) -> [PASS][125] +3 similar issues
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_cursor_edge_walk@pipe-b-64x64-right-edge.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_cursor_edge_walk@pipe-b-64x64-right-edge.html

  * igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions:
    - {shard-rkl}:        [SKIP][126] ([fdo#111825] / [i915#4070]) -> [PASS][127] +1 similar issue
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions.html

  * igt@kms_cursor_legacy@pipe-c-torture-bo:
    - {shard-rkl}:        [SKIP][128] ([i915#4070]) -> [PASS][129]
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-2/igt@kms_cursor_legacy@pipe-c-torture-bo.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-4/igt@kms_cursor_legacy@pipe-c-torture-bo.html

  * igt@kms_dp_aux_dev:
    - shard-iclb:         [DMESG-WARN][130] ([i915#262] / [i915#4391]) -> [PASS][131]
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb7/igt@kms_dp_aux_dev.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-iclb8/igt@kms_dp_aux_dev.html

  * igt@kms_draw_crc@draw-method-rgb565-blt-xtiled:
    - {shard-rkl}:        [SKIP][132] ([fdo#111314]) -> [PASS][133] +1 similar issue
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-rkl-1/igt@kms_draw_crc@draw-method-rgb565-blt-xtiled.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-rkl-6/igt@kms_draw_crc@draw-method-rgb565-blt-xtiled.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-skl:          [FAIL][134] ([i915#79]) -> [PASS][135] +1 similar issue
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl10/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl4/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1:
    - shard-glk:          [FAIL][136] ([i915#79]) -> [PASS][137]
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-glk2/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-glk2/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
    - shard-skl:          [FAIL][138] ([i915#2122]) -> [PASS][139]
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-skl1/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/shard-skl8/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile:
    - shard-iclb:         [SKIP][140] ([i915#3701]) -> [PASS][141]
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11001/shard-iclb2/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile.html

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21846/index.html

[-- Attachment #2: Type: text/html, Size: 33591 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-14 15:07 ` [Intel-gfx] " Tvrtko Ursulin
@ 2021-12-17 12:15   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-17 12:15 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Matthew Brost, John Harrison, dri-devel


On 14/12/2021 15:07, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Log engine resets done by the GuC firmware in the similar way it is done
> by the execlists backend.
> 
> This way we have notion of where the hangs are before the GuC gains
> support for proper error capture.

Ping - any interest in logging this info?

All we currently get is a non-descriptive "[drm] GPU HANG: ecode
12:0:00000000".

Also, will GuC be reporting the reason for the engine reset at any point?

Regards,

Tvrtko

> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: John Harrison <John.C.Harrison@Intel.com>
> ---
>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
>   1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 97311119da6f..51512123dc1a 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -11,6 +11,7 @@
>   #include "gt/intel_context.h"
>   #include "gt/intel_engine_pm.h"
>   #include "gt/intel_engine_heartbeat.h"
> +#include "gt/intel_engine_user.h"
>   #include "gt/intel_gpu_commands.h"
>   #include "gt/intel_gt.h"
>   #include "gt/intel_gt_clock_utils.h"
> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
>   {
>   	struct intel_gt *gt = guc_to_gt(guc);
>   	struct drm_i915_private *i915 = gt->i915;
> -	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
> +	struct intel_engine_cs *engine = ce->engine;
>   	intel_wakeref_t wakeref;
>   
> +	if (intel_engine_is_virtual(engine)) {
> +		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
> +			   intel_engine_class_repr(engine->class),
> +			   engine->mask);
> +		engine = guc_virtual_get_sibling(engine, 0);
> +	} else {
> +		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);
> +	}
> +
>   	intel_engine_set_hung_context(engine, ce);
>   	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
>   		i915_capture_error_state(gt, engine->mask);
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-17 12:15   ` Tvrtko Ursulin
@ 2021-12-17 16:22     ` Matthew Brost
  -1 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2021-12-17 16:22 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel-gfx, John Harrison, dri-devel

On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
> 
> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
> > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > Log engine resets done by the GuC firmware in the similar way it is done
> > by the execlists backend.
> > 
> > This way we have notion of where the hangs are before the GuC gains
> > support for proper error capture.
> 
> Ping - any interest to log this info?
> 
> All there currently is a non-descriptive "[drm] GPU HANG: ecode
> 12:0:00000000".
>

Yeah, this could be helpful. One suggestion below.

> Also, will GuC be reporting the reason for the engine reset at any point?
>

We are working on the error state capture, presumably the registers will
give a clue what caused the hang.

As for the GuC providing a reason, that isn't defined in the interface,
but it is a decent idea to provide a hint in the G2H message about what
the issue was. Let me run that by the i915 GuC developers / GuC firmware
team and see what they think.

> Regards,
> 
> Tvrtko
> 
> > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: John Harrison <John.C.Harrison@Intel.com>
> > ---
> >   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
> >   1 file changed, 11 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 97311119da6f..51512123dc1a 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -11,6 +11,7 @@
> >   #include "gt/intel_context.h"
> >   #include "gt/intel_engine_pm.h"
> >   #include "gt/intel_engine_heartbeat.h"
> > +#include "gt/intel_engine_user.h"
> >   #include "gt/intel_gpu_commands.h"
> >   #include "gt/intel_gt.h"
> >   #include "gt/intel_gt_clock_utils.h"
> > @@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
> >   {
> >   	struct intel_gt *gt = guc_to_gt(guc);
> >   	struct drm_i915_private *i915 = gt->i915;
> > -	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
> > +	struct intel_engine_cs *engine = ce->engine;
> >   	intel_wakeref_t wakeref;
> > +	if (intel_engine_is_virtual(engine)) {
> > +		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
> > +			   intel_engine_class_repr(engine->class),
> > +			   engine->mask);
> > +		engine = guc_virtual_get_sibling(engine, 0);
> > +	} else {
> > +		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);

Probably include the guc_id of the context too then?

Matt

> > +	}
> > +
> >   	intel_engine_set_hung_context(engine, ce);
> >   	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
> >   		i915_capture_error_state(gt, engine->mask);
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-17 16:22     ` Matthew Brost
@ 2021-12-20 15:00       ` Tvrtko Ursulin
  -1 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-20 15:00 UTC (permalink / raw)
  To: Matthew Brost; +Cc: Intel-gfx, John Harrison, dri-devel


On 17/12/2021 16:22, Matthew Brost wrote:
> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>
>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> Log engine resets done by the GuC firmware in the similar way it is done
>>> by the execlists backend.
>>>
>>> This way we have notion of where the hangs are before the GuC gains
>>> support for proper error capture.
>>
>> Ping - any interest to log this info?
>>
>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>> 12:0:00000000".
>>
> 
> Yea, this could be helpful. One suggestion below.
> 
>> Also, will GuC be reporting the reason for the engine reset at any point?
>>
> 
> We are working on the error state capture, presumably the registers will
> give a clue what caused the hang.
> 
> As for the GuC providing a reason, that isn't defined in the interface
> but that is decent idea to provide a hint in G2H what the issue was. Let
> me run that by the i915 GuC developers / GuC firmware team and see what
> they think.
> 
>> Regards,
>>
>> Tvrtko
>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> index 97311119da6f..51512123dc1a 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> @@ -11,6 +11,7 @@
>>>    #include "gt/intel_context.h"
>>>    #include "gt/intel_engine_pm.h"
>>>    #include "gt/intel_engine_heartbeat.h"
>>> +#include "gt/intel_engine_user.h"
>>>    #include "gt/intel_gpu_commands.h"
>>>    #include "gt/intel_gt.h"
>>>    #include "gt/intel_gt_clock_utils.h"
>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
>>>    {
>>>    	struct intel_gt *gt = guc_to_gt(guc);
>>>    	struct drm_i915_private *i915 = gt->i915;
>>> -	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
>>> +	struct intel_engine_cs *engine = ce->engine;
>>>    	intel_wakeref_t wakeref;
>>> +	if (intel_engine_is_virtual(engine)) {
>>> +		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
>>> +			   intel_engine_class_repr(engine->class),
>>> +			   engine->mask);
>>> +		engine = guc_virtual_get_sibling(engine, 0);
>>> +	} else {
>>> +		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);
> 
> Probably include the guc_id of the context too then?

Is the guc id stable and useful on its own - who would be the user?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-20 15:00       ` Tvrtko Ursulin
@ 2021-12-20 17:55         ` Matthew Brost
  -1 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2021-12-20 17:55 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel-gfx, John Harrison, dri-devel

On Mon, Dec 20, 2021 at 03:00:53PM +0000, Tvrtko Ursulin wrote:
> 
> On 17/12/2021 16:22, Matthew Brost wrote:
> > On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
> > > 
> > > On 14/12/2021 15:07, Tvrtko Ursulin wrote:
> > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > 
> > > > Log engine resets done by the GuC firmware in the similar way it is done
> > > > by the execlists backend.
> > > > 
> > > > This way we have notion of where the hangs are before the GuC gains
> > > > support for proper error capture.
> > > 
> > > Ping - any interest to log this info?
> > > 
> > > All there currently is a non-descriptive "[drm] GPU HANG: ecode
> > > 12:0:00000000".
> > > 
> > 
> > Yea, this could be helpful. One suggestion below.
> > 
> > > Also, will GuC be reporting the reason for the engine reset at any point?
> > > 
> > 
> > We are working on the error state capture, presumably the registers will
> > give a clue what caused the hang.
> > 
> > As for the GuC providing a reason, that isn't defined in the interface
> > but that is decent idea to provide a hint in G2H what the issue was. Let
> > me run that by the i915 GuC developers / GuC firmware team and see what
> > they think.
> > 
> > > Regards,
> > > 
> > > Tvrtko
> > > 
> > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > Cc: John Harrison <John.C.Harrison@Intel.com>
> > > > ---
> > > >    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
> > > >    1 file changed, 11 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > index 97311119da6f..51512123dc1a 100644
> > > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > @@ -11,6 +11,7 @@
> > > >    #include "gt/intel_context.h"
> > > >    #include "gt/intel_engine_pm.h"
> > > >    #include "gt/intel_engine_heartbeat.h"
> > > > +#include "gt/intel_engine_user.h"
> > > >    #include "gt/intel_gpu_commands.h"
> > > >    #include "gt/intel_gt.h"
> > > >    #include "gt/intel_gt_clock_utils.h"
> > > > @@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
> > > >    {
> > > >    	struct intel_gt *gt = guc_to_gt(guc);
> > > >    	struct drm_i915_private *i915 = gt->i915;
> > > > -	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
> > > > +	struct intel_engine_cs *engine = ce->engine;
> > > >    	intel_wakeref_t wakeref;
> > > > +	if (intel_engine_is_virtual(engine)) {
> > > > +		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
> > > > +			   intel_engine_class_repr(engine->class),
> > > > +			   engine->mask);
> > > > +		engine = guc_virtual_get_sibling(engine, 0);
> > > > +	} else {
> > > > +		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);
> > 
> > Probably include the guc_id of the context too then?
> 
> Is the guc id stable and useful on its own - who would be the user?
> 

Technically not stable, but in practice it is. The user could use it to
correlate the context that was reset with entries in the GuC log.

More debug info is typically better.

Matt

> Regards,
> 
> Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
@ 2021-12-20 17:55         ` Matthew Brost
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2021-12-20 17:55 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel-gfx, dri-devel

On Mon, Dec 20, 2021 at 03:00:53PM +0000, Tvrtko Ursulin wrote:
> 
> On 17/12/2021 16:22, Matthew Brost wrote:
> > On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
> > > 
> > > On 14/12/2021 15:07, Tvrtko Ursulin wrote:
> > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > 
> > > > Log engine resets done by the GuC firmware in the similar way it is done
> > > > by the execlists backend.
> > > > 
> > > > This way we have notion of where the hangs are before the GuC gains
> > > > support for proper error capture.
> > > 
> > > Ping - any interest to log this info?
> > > 
> > > All there currently is a non-descriptive "[drm] GPU HANG: ecode
> > > 12:0:00000000".
> > > 
> > 
> > Yea, this could be helpful. One suggestion below.
> > 
> > > Also, will GuC be reporting the reason for the engine reset at any point?
> > > 
> > 
> > We are working on the error state capture, presumably the registers will
> > give a clue what caused the hang.
> > 
> > As for the GuC providing a reason, that isn't defined in the interface
> > but that is decent idea to provide a hint in G2H what the issue was. Let
> > me run that by the i915 GuC developers / GuC firmware team and see what
> > they think.
> > 
> > > Regards,
> > > 
> > > Tvrtko
> > > 
> > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > Cc: John Harrison <John.C.Harrison@Intel.com>
> > > > ---
> > > >    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
> > > >    1 file changed, 11 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > index 97311119da6f..51512123dc1a 100644
> > > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > > @@ -11,6 +11,7 @@
> > > >    #include "gt/intel_context.h"
> > > >    #include "gt/intel_engine_pm.h"
> > > >    #include "gt/intel_engine_heartbeat.h"
> > > > +#include "gt/intel_engine_user.h"
> > > >    #include "gt/intel_gpu_commands.h"
> > > >    #include "gt/intel_gt.h"
> > > >    #include "gt/intel_gt_clock_utils.h"
> > > > @@ -3934,9 +3935,18 @@ static void capture_error_state(struct intel_guc *guc,
> > > >    {
> > > >    	struct intel_gt *gt = guc_to_gt(guc);
> > > >    	struct drm_i915_private *i915 = gt->i915;
> > > > -	struct intel_engine_cs *engine = __context_to_physical_engine(ce);
> > > > +	struct intel_engine_cs *engine = ce->engine;
> > > >    	intel_wakeref_t wakeref;
> > > > +	if (intel_engine_is_virtual(engine)) {
> > > > +		drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine reset\n",
> > > > +			   intel_engine_class_repr(engine->class),
> > > > +			   engine->mask);
> > > > +		engine = guc_virtual_get_sibling(engine, 0);
> > > > +	} else {
> > > > +		drm_notice(&i915->drm, "%s GuC engine reset\n", engine->name);
> > 
> > Probably include the guc_id of the context too then?
> 
> Is the guc id stable and useful on its own - who would be the user?
> 

Technically not stable, but in practice it is. The user would be
correlating the context that was reset with the GuC log.

More debug info is typically better.

Matt

> Regards,
> 
> Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-20 15:00       ` Tvrtko Ursulin
  (?)
  (?)
@ 2021-12-20 18:34       ` John Harrison
  2021-12-21 13:37         ` Tvrtko Ursulin
  -1 siblings, 1 reply; 20+ messages in thread
From: John Harrison @ 2021-12-20 18:34 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost; +Cc: Intel-gfx, dri-devel

On 12/20/2021 07:00, Tvrtko Ursulin wrote:
> On 17/12/2021 16:22, Matthew Brost wrote:
>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>
>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>
>>>> Log engine resets done by the GuC firmware in the similar way it is 
>>>> done
>>>> by the execlists backend.
>>>>
>>>> This way we have notion of where the hangs are before the GuC gains
>>>> support for proper error capture.
>>>
>>> Ping - any interest to log this info?
>>>
>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>> 12:0:00000000".
>>>
>>
>> Yea, this could be helpful. One suggestion below.
>>
>>> Also, will GuC be reporting the reason for the engine reset at any 
>>> point?
>>>
>>
>> We are working on the error state capture, presumably the registers will
>> give a clue what caused the hang.
>>
>> As for the GuC providing a reason, that isn't defined in the interface
>> but that is decent idea to provide a hint in G2H what the issue was. Let
>> me run that by the i915 GuC developers / GuC firmware team and see what
>> they think.
>>
The GuC does not do any hang analysis. So as far as GuC is concerned, 
the reason is pretty much always going to be pre-emption timeout. There 
are a few ways the pre-emption itself could be triggered but basically, 
if GuC resets an active context then it is because it did not pre-empt 
quickly enough when requested.


>>> Regards,
>>>
>>> Tvrtko
>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>> ---
>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> index 97311119da6f..51512123dc1a 100644
>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> @@ -11,6 +11,7 @@
>>>>    #include "gt/intel_context.h"
>>>>    #include "gt/intel_engine_pm.h"
>>>>    #include "gt/intel_engine_heartbeat.h"
>>>> +#include "gt/intel_engine_user.h"
>>>>    #include "gt/intel_gpu_commands.h"
>>>>    #include "gt/intel_gt.h"
>>>>    #include "gt/intel_gt_clock_utils.h"
>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>> intel_guc *guc,
>>>>    {
>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>        struct drm_i915_private *i915 = gt->i915;
>>>> -    struct intel_engine_cs *engine = 
>>>> __context_to_physical_engine(ce);
>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>        intel_wakeref_t wakeref;
>>>> +    if (intel_engine_is_virtual(engine)) {
>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine 
>>>> reset\n",
>>>> +               intel_engine_class_repr(engine->class),
>>>> +               engine->mask);
>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>> +    } else {
>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>> engine->name);
>>
>> Probably include the guc_id of the context too then?
>
> Is the guc id stable and useful on its own - who would be the user?
The GuC id is the only thing that matters when trying to correlate KMD 
activity with a GuC log. So while it might not be of any use or interest 
to an end user, it is extremely important and useful to a kernel 
developer attempting to debug an issue. And that includes bug reports 
from end users that are hard to repro given that the standard error 
capture will include the GuC log.

Also, note that GuC really resets contexts rather than engines. What it 
reports back to i915 on a reset is simply the GuC id of the context. It 
is up to i915 to work back from that to determine engine 
instances/classes if required. And in the case of a virtual context, it 
is impossible to extract the actual instance number. So your above print 
about resetting all instances within the virtual engine mask is 
incorrect/misleading. The reset would have been applied to one and only 
one of those engines. If you really need to know exactly which engine 
was poked, you need to look inside the GuC log.

However, the follow up point is to ask why you need to report the exact 
class/instance? The end user doesn't care about which specific engine 
got reset. They only care that their context was reset. Even a KMD 
developer doesn't really care unless the concern is about a hardware bug 
rather than a software bug.

My view is that the current message is indeed woefully uninformative. 
However, it is more important to be reporting context identification 
than engine instances. So sure, add the engine instance description but 
also add something specific to the ce as well. Ideally (for me) the GuC 
id and maybe something else that uniquely identifies the context in KMD 
land for when not using GuC?

John


>
> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-20 18:34       ` John Harrison
@ 2021-12-21 13:37         ` Tvrtko Ursulin
  2021-12-21 22:14           ` John Harrison
  0 siblings, 1 reply; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-21 13:37 UTC (permalink / raw)
  To: John Harrison, Matthew Brost; +Cc: Intel-gfx, dri-devel


On 20/12/2021 18:34, John Harrison wrote:
> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>> On 17/12/2021 16:22, Matthew Brost wrote:
>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>
>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>
>>>>> Log engine resets done by the GuC firmware in the similar way it is 
>>>>> done
>>>>> by the execlists backend.
>>>>>
>>>>> This way we have notion of where the hangs are before the GuC gains
>>>>> support for proper error capture.
>>>>
>>>> Ping - any interest to log this info?
>>>>
>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>> 12:0:00000000".
>>>>
>>>
>>> Yea, this could be helpful. One suggestion below.
>>>
>>>> Also, will GuC be reporting the reason for the engine reset at any 
>>>> point?
>>>>
>>>
>>> We are working on the error state capture, presumably the registers will
>>> give a clue what caused the hang.
>>>
>>> As for the GuC providing a reason, that isn't defined in the interface
>>> but that is decent idea to provide a hint in G2H what the issue was. Let
>>> me run that by the i915 GuC developers / GuC firmware team and see what
>>> they think.
>>>
> The GuC does not do any hang analysis. So as far as GuC is concerned, 
> the reason is pretty much always going to be pre-emption timeout. There 
> are a few ways the pre-emption itself could be triggered but basically, 
> if GuC resets an active context then it is because it did not pre-empt 
> quickly enough when requested.
> 
> 
>>>> Regards,
>>>>
>>>> Tvrtko
>>>>
>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>> ---
>>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 +++++++++++-
>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> index 97311119da6f..51512123dc1a 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> @@ -11,6 +11,7 @@
>>>>>    #include "gt/intel_context.h"
>>>>>    #include "gt/intel_engine_pm.h"
>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>> +#include "gt/intel_engine_user.h"
>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>    #include "gt/intel_gt.h"
>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>> intel_guc *guc,
>>>>>    {
>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>> -    struct intel_engine_cs *engine = 
>>>>> __context_to_physical_engine(ce);
>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>        intel_wakeref_t wakeref;
>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC engine 
>>>>> reset\n",
>>>>> +               intel_engine_class_repr(engine->class),
>>>>> +               engine->mask);
>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>> +    } else {
>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>> engine->name);
>>>
>>> Probably include the guc_id of the context too then?
>>
>> Is the guc id stable and useful on its own - who would be the user?
> The GuC id is the only thing that matters when trying to correlate KMD 
> activity with a GuC log. So while it might not be of any use or interest 
> to an end user, it is extremely important and useful to a kernel 
> developer attempting to debug an issue. And that includes bug reports 
> from end users that are hard to repro given that the standard error 
> capture will include the GuC log.

On the topic of the GuC log - is there a tool in IGT (or will there be) 
which will parse the bits saved in the error capture, or how is that 
supposed to be used?

> Also, note that GuC really resets contexts rather than engines. What it 
> reports back to i915 on a reset is simply the GuC id of the context. It 
> is up to i915 to work back from that to determine engine 
> instances/classes if required. And in the case of a virtual context, it 
> is impossible to extract the actual instance number. So your above print 
> about resetting all instances within the virtual engine mask is 
> incorrect/misleading. The reset would have been applied to one and only 
> one of those engines. If you really need to know exactly which engine 
> was poked, you need to look inside the GuC log.

I think I understood that part. :) It wasn't my intent to imply in the 
message that multiple engines have been reset, but, in the case of veng, 
to log the class and mask and the fact there was an engine reset 
(singular). A clearer message can probably be written.

> However, the follow up point is to ask why you need to report the exact 
> class/instance? The end user doesn't care about which specific engine 
> got reset. They only care that their context was reset. Even a KMD 
> developer doesn't really care unless the concern is about a hardware bug 
> rather than a software bug.

I was simply aligning both backends to log as similar information as 
possible. Information is there, just not logged.

Concerning the wider topic, my thinking is that the end user is mainly 
interested in knowing whether any engine resets are happening (to tie in 
with the experience of UI/video glitching or whatever). Going for deeper 
analysis than that is probably beyond the scope of the kernel log and 
indeed error capture territory.

> My view is that the current message is indeed woefully uninformative. 
> However, it is more important to be reporting context identification 
> than engine instances. So sure, add the engine instance description but 
> also add something specific to the ce as well. Ideally (for me) the GuC 
> id and maybe something else that uniquely identifies the context in KMD 
> land for when not using GuC?

Not sure we need to go that far at this level, but even if we do it 
could be a follow up to add new data to both backends. Not sure yet I 
care enough to drive this. My patch was simply a reaction to noticing 
there is zero information currently logged while debugging some DG2 hangs.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-21 13:37         ` Tvrtko Ursulin
@ 2021-12-21 22:14           ` John Harrison
  2021-12-22 16:21             ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: John Harrison @ 2021-12-21 22:14 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost; +Cc: Intel-gfx, dri-devel

On 12/21/2021 05:37, Tvrtko Ursulin wrote:
> On 20/12/2021 18:34, John Harrison wrote:
>> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>>> On 17/12/2021 16:22, Matthew Brost wrote:
>>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>>
>>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>
>>>>>> Log engine resets done by the GuC firmware in the similar way it 
>>>>>> is done
>>>>>> by the execlists backend.
>>>>>>
>>>>>> This way we have notion of where the hangs are before the GuC gains
>>>>>> support for proper error capture.
>>>>>
>>>>> Ping - any interest to log this info?
>>>>>
>>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>>> 12:0:00000000".
>>>>>
>>>>
>>>> Yea, this could be helpful. One suggestion below.
>>>>
>>>>> Also, will GuC be reporting the reason for the engine reset at any 
>>>>> point?
>>>>>
>>>>
>>>> We are working on the error state capture, presumably the registers 
>>>> will
>>>> give a clue what caused the hang.
>>>>
>>>> As for the GuC providing a reason, that isn't defined in the interface
>>>> but that is decent idea to provide a hint in G2H what the issue 
>>>> was. Let
>>>> me run that by the i915 GuC developers / GuC firmware team and see 
>>>> what
>>>> they think.
>>>>
>> The GuC does not do any hang analysis. So as far as GuC is concerned, 
>> the reason is pretty much always going to be pre-emption timeout. 
>> There are a few ways the pre-emption itself could be triggered but 
>> basically, if GuC resets an active context then it is because it did 
>> not pre-empt quickly enough when requested.
>>
>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>>> ---
>>>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 
>>>>>> +++++++++++-
>>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> index 97311119da6f..51512123dc1a 100644
>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> @@ -11,6 +11,7 @@
>>>>>>    #include "gt/intel_context.h"
>>>>>>    #include "gt/intel_engine_pm.h"
>>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>>> +#include "gt/intel_engine_user.h"
>>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>>    #include "gt/intel_gt.h"
>>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>>> intel_guc *guc,
>>>>>>    {
>>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>>> -    struct intel_engine_cs *engine = 
>>>>>> __context_to_physical_engine(ce);
>>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>>        intel_wakeref_t wakeref;
>>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC 
>>>>>> engine reset\n",
>>>>>> + intel_engine_class_repr(engine->class),
>>>>>> +               engine->mask);
>>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>>> +    } else {
>>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>>> engine->name);
>>>>
>>>> Probably include the guc_id of the context too then?
>>>
>>> Is the guc id stable and useful on its own - who would be the user?
>> The GuC id is the only thing that matters when trying to correlate 
>> KMD activity with a GuC log. So while it might not be of any use or 
>> interest to an end user, it is extremely important and useful to a 
>> kernel developer attempting to debug an issue. And that includes bug 
>> reports from end users that are hard to repro given that the standard 
>> error capture will include the GuC log.
>
> On the topic of GuC log - is there a tool in IGT (or will be) which 
> will parse the bit saved in the error capture or how is that supposed 
> to be used?
Nope.

However, Alan is currently working on supporting the GuC error capture 
mechanism. Prior to sending the reset notification to the KMD, the GuC 
will save a whole bunch of register state to a memory buffer and send a 
notification to the KMD that this is available. When we then get the 
actual reset notification, we need to match the two together and include 
a parsed, human readable version of the GuC's capture state buffer in 
the sysfs error log output.

The GuC log should not be involved in this process. And note that any 
register dumps in the GuC log are limited in scope and only enabled at 
higher verbosity levels. Whereas, the official state capture is based on 
a register list provided by the KMD and is available irrespective of 
debug CONFIG settings, verbosity levels, etc.

>
>> Also, note that GuC really resets contexts rather than engines. What 
>> it reports back to i915 on a reset is simply the GuC id of the 
>> context. It is up to i915 to work back from that to determine engine 
>> instances/classes if required. And in the case of a virtual context, 
>> it is impossible to extract the actual instance number. So your above 
>> print about resetting all instances within the virtual engine mask is 
>> incorrect/misleading. The reset would have been applied to one and 
>> only one of those engines. If you really need to know exactly which 
>> engine was poked, you need to look inside the GuC log.
>
> I think I understood that part. :) It wasn't my intent to imply in the 
> message multiple engines have been reset, but in the case of veng, log 
> the class and mask and the fact there was an engine reset (singular). 
> Clearer message can probably be written.
>
>> However, the follow up point is to ask why you need to report the 
>> exact class/instance? The end user doesn't care about which specific 
>> engine got reset. They only care that their context was reset. Even a 
>> KMD developer doesn't really care unless the concern is about a 
>> hardware bug rather than a software bug.
>
> I was simply aligning both backends to log as similar information as 
> possible. Information is there, just not logged.
>
> Concerning the wider topic, my thinking is end user is mainly 
> interested to know there are any engine resets happening (to tie with 
> the experience of UI/video glitching or whatever). Going for deeper 
> analysis than that is probably beyond the scope of the kernel log and 
> indeed error capture territory.
I would still say that the important information is which context was 
killed not which engine. Sure, knowing the engine is better than nothing 
but if we can report something more useful then why not?

>
>> My view is that the current message is indeed woefully uninformative. 
>> However, it is more important to be reporting context identification 
>> than engine instances. So sure, add the engine instance description 
>> but also add something specific to the ce as well. Ideally (for me) 
>> the GuC id and maybe something else that uniquely identifies the 
>> context in KMD land for when not using GuC?
>
> Not sure we need to go that far at this level, but even if we do it 
> could be a follow up to add new data to both backends. Not sure yet I 
> care enough to drive this. My patch was simply a reaction to noticing 
> there is zero information currently logged while debugging some DG2 
> hangs.
In terms of just reporting that a reset occurred, we already have the 
'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. The ecode 
is a somewhat bizarre value but it does act as a 'something went wrong, 
your system is not happy' type message. Going beyond that, I think 
context identification is the next most useful thing to add.

But if you aren't even getting the 'GPU HANG' message then it sounds 
like something is broken with what we already have. So we should fix 
that as a first priority. If that message isn't appearing then it means 
there was no error capture so adding extra info to the capture won't help!

John.


>
> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-21 22:14           ` John Harrison
@ 2021-12-22 16:21             ` Tvrtko Ursulin
  2021-12-22 21:58               ` John Harrison
  0 siblings, 1 reply; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-22 16:21 UTC (permalink / raw)
  To: John Harrison, Matthew Brost; +Cc: Intel-gfx, dri-devel


On 21/12/2021 22:14, John Harrison wrote:
> On 12/21/2021 05:37, Tvrtko Ursulin wrote:
>> On 20/12/2021 18:34, John Harrison wrote:
>>> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>>>> On 17/12/2021 16:22, Matthew Brost wrote:
>>>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>>>
>>>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>
>>>>>>> Log engine resets done by the GuC firmware in the similar way it 
>>>>>>> is done
>>>>>>> by the execlists backend.
>>>>>>>
>>>>>>> This way we have notion of where the hangs are before the GuC gains
>>>>>>> support for proper error capture.
>>>>>>
>>>>>> Ping - any interest to log this info?
>>>>>>
>>>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>>>> 12:0:00000000".
>>>>>>
>>>>>
>>>>> Yea, this could be helpful. One suggestion below.
>>>>>
>>>>>> Also, will GuC be reporting the reason for the engine reset at any 
>>>>>> point?
>>>>>>
>>>>>
>>>>> We are working on the error state capture, presumably the registers 
>>>>> will
>>>>> give a clue what caused the hang.
>>>>>
>>>>> As for the GuC providing a reason, that isn't defined in the interface
>>>>> but that is decent idea to provide a hint in G2H what the issue 
>>>>> was. Let
>>>>> me run that by the i915 GuC developers / GuC firmware team and see 
>>>>> what
>>>>> they think.
>>>>>
>>> The GuC does not do any hang analysis. So as far as GuC is concerned, 
>>> the reason is pretty much always going to be pre-emption timeout. 
>>> There are a few ways the pre-emption itself could be triggered but 
>>> basically, if GuC resets an active context then it is because it did 
>>> not pre-empt quickly enough when requested.
>>>
>>>
>>>>>> Regards,
>>>>>>
>>>>>> Tvrtko
>>>>>>
>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>>>> ---
>>>>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 
>>>>>>> +++++++++++-
>>>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> index 97311119da6f..51512123dc1a 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> @@ -11,6 +11,7 @@
>>>>>>>    #include "gt/intel_context.h"
>>>>>>>    #include "gt/intel_engine_pm.h"
>>>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>>>> +#include "gt/intel_engine_user.h"
>>>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>>>    #include "gt/intel_gt.h"
>>>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>>>> intel_guc *guc,
>>>>>>>    {
>>>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>>>> -    struct intel_engine_cs *engine = 
>>>>>>> __context_to_physical_engine(ce);
>>>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>>>        intel_wakeref_t wakeref;
>>>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC 
>>>>>>> engine reset\n",
>>>>>>> + intel_engine_class_repr(engine->class),
>>>>>>> +               engine->mask);
>>>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>>>> +    } else {
>>>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>>>> engine->name);
>>>>>
>>>>> Probably include the guc_id of the context too then?
>>>>
>>>> Is the guc id stable and useful on its own - who would be the user?
>>> The GuC id is the only thing that matters when trying to correlate 
>>> KMD activity with a GuC log. So while it might not be of any use or 
>>> interest to an end user, it is extremely important and useful to a 
>>> kernel developer attempting to debug an issue. And that includes bug 
>>> reports from end users that are hard to repro given that the standard 
>>> error capture will include the GuC log.
>>
>> On the topic of GuC log - is there a tool in IGT (or will be) which 
>> will parse the bit saved in the error capture or how is that supposed 
>> to be used?
> Nope.
> 
> However, Alan is currently working on supporting the GuC error capture 
> mechanism. Prior to sending the reset notification to the KMD, the GuC 
> will save a whole bunch of register state to a memory buffer and send a 
> notification to the KMD that this is available. When we then get the 
> actual reset notification, we need to match the two together and include 
> a parsed, human readable version of the GuC's capture state buffer in 
> the sysfs error log output.
> 
> The GuC log should not be involved in this process. And note that any 
> register dumps in the GuC log are limited in scope and only enabled at 
> higher verbosity levels. Whereas, the official state capture is based on 
> a register list provided by the KMD and is available irrespective of 
> debug CONFIG settings, verbosity levels, etc.

Hm, why should the GuC log not be involved now? I thought earlier you said:

"""
And that includes bug reports from end users that are hard to repro 
given that the standard error capture will include the GuC log.
"""

Hence I thought there would be a tool in IGT which would parse the part 
saved inside the error capture.

>>> Also, note that GuC really resets contexts rather than engines. What 
>>> it reports back to i915 on a reset is simply the GuC id of the 
>>> context. It is up to i915 to work back from that to determine engine 
>>> instances/classes if required. And in the case of a virtual context, 
>>> it is impossible to extract the actual instance number. So your above 
>>> print about resetting all instances within the virtual engine mask is 
>>> incorrect/misleading. The reset would have been applied to one and 
>>> only one of those engines. If you really need to know exactly which 
>>> engine was poked, you need to look inside the GuC log.
>>
>> I think I understood that part. :) It wasn't my intent to imply in the 
>> message multiple engines have been reset, but in the case of veng, log 
>> the class and mask and the fact there was an engine reset (singular). 
>> Clearer message can probably be written.
>>
>>> However, the follow up point is to ask why you need to report the 
>>> exact class/instance? The end user doesn't care about which specific 
>>> engine got reset. They only care that their context was reset. Even a 
>>> KMD developer doesn't really care unless the concern is about a 
>>> hardware bug rather than a software bug.
>>
>> I was simply aligning both backends to log as similar information as 
>> possible. Information is there, just not logged.
>>
>> Concerning the wider topic, my thinking is end user is mainly 
>> interested to know there are any engine resets happening (to tie with 
>> the experience of UI/video glitching or whatever). Going for deeper 
>> analysis than that is probably beyond the scope of the kernel log and 
>> indeed error capture territory.
> I would still say that the important information is which context was 
> killed not which engine. Sure, knowing the engine is better than nothing 
> but if we can report something more useful then why not?

Make it so. :)

>>> My view is that the current message is indeed woefully uninformative. 
>>> However, it is more important to be reporting context identification 
>>> than engine instances. So sure, add the engine instance description 
>>> but also add something specific to the ce as well. Ideally (for me) 
>>> the GuC id and maybe something else that uniquely identifies the 
>>> context in KMD land for when not using GuC?
>>
>> Not sure we need to go that far at this level, but even if we do it 
>> could be a follow up to add new data to both backends. Not sure yet I 
>> care enough to drive this. My patch was simply a reaction to noticing 
>> there is zero information currently logged while debugging some DG2 
>> hangs.
> In terms of just reporting that a reset occurred, we already have the 
> 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. The ecode 
> is a somewhat bizarre value but it does act as a 'something went wrong, 
> your system is not happy' type message. Going beyond that, I think 
> context identification is the next most useful thing to add.
> 
> But if you aren't even getting the 'GPU HANG' message then it sounds 
> like something is broken with what we already have. So we should fix 
> that as a first priority. If that message isn't appearing then it means 
> there was no error capture so adding extra info to the capture won't help!

The issue I have is that the "GPU HANG ecode" messages are always "all 
zeros". I thought that was because GuC error capture was not there, but 
maybe it's something else.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-22 16:21             ` Tvrtko Ursulin
@ 2021-12-22 21:58               ` John Harrison
  2021-12-23 10:23                 ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: John Harrison @ 2021-12-22 21:58 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost; +Cc: Intel-gfx, dri-devel

On 12/22/2021 08:21, Tvrtko Ursulin wrote:
> On 21/12/2021 22:14, John Harrison wrote:
>> On 12/21/2021 05:37, Tvrtko Ursulin wrote:
>>> On 20/12/2021 18:34, John Harrison wrote:
>>>> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>>>>> On 17/12/2021 16:22, Matthew Brost wrote:
>>>>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>>>>
>>>>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>
>>>>>>>> Log engine resets done by the GuC firmware in the similar way 
>>>>>>>> it is done
>>>>>>>> by the execlists backend.
>>>>>>>>
>>>>>>>> This way we have notion of where the hangs are before the GuC 
>>>>>>>> gains
>>>>>>>> support for proper error capture.
>>>>>>>
>>>>>>> Ping - any interest to log this info?
>>>>>>>
>>>>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>>>>> 12:0:00000000".
>>>>>>>
>>>>>>
>>>>>> Yea, this could be helpful. One suggestion below.
>>>>>>
>>>>>>> Also, will GuC be reporting the reason for the engine reset at 
>>>>>>> any point?
>>>>>>>
>>>>>>
>>>>>> We are working on the error state capture, presumably the 
>>>>>> registers will
>>>>>> give a clue what caused the hang.
>>>>>>
>>>>>> As for the GuC providing a reason, that isn't defined in the 
>>>>>> interface
>>>>>> but that is decent idea to provide a hint in G2H what the issue 
>>>>>> was. Let
>>>>>> me run that by the i915 GuC developers / GuC firmware team and 
>>>>>> see what
>>>>>> they think.
>>>>>>
>>>> The GuC does not do any hang analysis. So as far as GuC is 
>>>> concerned, the reason is pretty much always going to be pre-emption 
>>>> timeout. There are a few ways the pre-emption itself could be 
>>>> triggered but basically, if GuC resets an active context then it is 
>>>> because it did not pre-empt quickly enough when requested.
>>>>
>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Tvrtko
>>>>>>>
>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>> ---
>>>>>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 
>>>>>>>> +++++++++++-
>>>>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> index 97311119da6f..51512123dc1a 100644
>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> @@ -11,6 +11,7 @@
>>>>>>>>    #include "gt/intel_context.h"
>>>>>>>>    #include "gt/intel_engine_pm.h"
>>>>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>>>>> +#include "gt/intel_engine_user.h"
>>>>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>>>>    #include "gt/intel_gt.h"
>>>>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>>>>> intel_guc *guc,
>>>>>>>>    {
>>>>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>>>>> -    struct intel_engine_cs *engine = 
>>>>>>>> __context_to_physical_engine(ce);
>>>>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>>>>        intel_wakeref_t wakeref;
>>>>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC 
>>>>>>>> engine reset\n",
>>>>>>>> + intel_engine_class_repr(engine->class),
>>>>>>>> +               engine->mask);
>>>>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>>>>> +    } else {
>>>>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>>>>> engine->name);
>>>>>>
>>>>>> Probably include the guc_id of the context too then?
>>>>>
>>>>> Is the guc id stable and useful on its own - who would be the user?
>>>> The GuC id is the only thing that matters when trying to correlate 
>>>> KMD activity with a GuC log. So while it might not be of any use or 
>>>> interest to an end user, it is extremely important and useful to a 
>>>> kernel developer attempting to debug an issue. And that includes 
>>>> bug reports from end users that are hard to repro given that the 
>>>> standard error capture will include the GuC log.
>>>
>>> On the topic of GuC log - is there a tool in IGT (or will be) which 
>>> will parse the bit saved in the error capture or how is that 
>>> supposed to be used?
>> Nope.
>>
>> However, Alan is currently working on supporting the GuC error 
>> capture mechanism. Prior to sending the reset notification to the 
>> KMD, the GuC will save a whole bunch of register state to a memory 
>> buffer and send a notification to the KMD that this is available. 
>> When we then get the actual reset notification, we need to match the 
>> two together and include a parsed, human readable version of the 
>> GuC's capture state buffer in the sysfs error log output.
>>
>> The GuC log should not be involved in this process. And note that any 
>> register dumps in the GuC log are limited in scope and only enabled 
>> at higher verbosity levels. Whereas, the official state capture is 
>> based on a register list provided by the KMD and is available 
>> irrespective of debug CONFIG settings, verbosity levels, etc.
>
> Hm why should GuC log not be involved now? I thought earlier you said:
>
> """
> And that includes bug reports from end users that are hard to repro 
> given that the standard error capture will include the GuC log.
> """
>
> Hence I thought there would be a tool in IGT which would parse the 
> part saved inside the error capture.
Different things.

The GuC log is not involved in capturing hardware register state and 
reporting that as part of the sysfs error capture that users can read 
out. The GuC needs to do the state capture for us if it is doing the 
reset, but it is provided via a dedicated state capture API. There 
should be no requirement to set GuC log sizes/verbosity levels or to 
decode and parse the GuC log just to get the engine register state at 
the time of the hang.

On the other hand, the GuC log is useful for debugging certain issues 
and it is included automatically in the sysfs error capture along with 
all the other hardware and software state that we save.

There is intended to be a publicly released tool to decode the GuC log 
into a human readable format. So end users will be able to see what 
engine scheduling decisions were taken prior to the hang, for example. 
Unfortunately, that is not yet ready for release for a number of 
reasons. I don't know whether that would be released as part of the IGT 
suite or in some other manner.

>
>>>> Also, note that GuC really resets contexts rather than engines. 
>>>> What it reports back to i915 on a reset is simply the GuC id of the 
>>>> context. It is up to i915 to work back from that to determine 
>>>> engine instances/classes if required. And in the case of a virtual 
>>>> context, it is impossible to extract the actual instance number. So 
>>>> your above print about resetting all instances within the virtual 
>>>> engine mask is incorrect/misleading. The reset would have been 
>>>> applied to one and only one of those engines. If you really need to 
>>>> know exactly which engine was poked, you need to look inside the 
>>>> GuC log.
>>>
>>> I think I understood that part. :) It wasn't my intent to imply in 
>>> the message multiple engines have been reset, but in the case of 
>>> veng, log the class and mask and the fact there was an engine reset 
>>> (singular). Clearer message can probably be written.
>>>
>>>> However, the follow up point is to ask why you need to report the 
>>>> exact class/instance? The end user doesn't care about which 
>>>> specific engine got reset. They only care that their context was 
>>>> reset. Even a KMD developer doesn't really care unless the concern 
>>>> is about a hardware bug rather than a software bug.
>>>
>>> I was simply aligning both backends to log as similar information as 
>>> possible. Information is there, just not logged.
>>>
>>> Concerning the wider topic, my thinking is end user is mainly 
>>> interested to know there are any engine resets happening (to tie 
>>> with the experience of UI/video glitching or whatever). Going for 
>>> deeper analysis than that is probably beyond the scope of the kernel 
>>> log and indeed error capture territory.
>> I would still say that the important information is which context was 
>> killed not which engine. Sure, knowing the engine is better than 
>> nothing but if we can report something more useful then why not?
>
> Make it so. :)
>
>>>> My view is that the current message is indeed woefully 
>>>> uninformative. However, it is more important to be reporting 
>>>> context identification than engine instances. So sure, add the 
>>>> engine instance description but also add something specific to the 
>>>> ce as well. Ideally (for me) the GuC id and maybe something else 
>>>> that uniquely identifies the context in KMD land for when not using 
>>>> GuC?
>>>
>>> Not sure we need to go that far at this level, but even if we do it 
>>> could be a follow up to add new data to both backends. Not sure yet 
>>> I care enough to drive this. My patch was simply a reaction to 
>>> noticing there is zero information currently logged while debugging 
>>> some DG2 hangs.
>> In terms of just reporting that a reset occurred, we already have the 
>> 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. The 
>> ecode is a somewhat bizarre value but it does act as a 'something 
>> went wrong, your system is not happy' type message. Going beyond 
>> that, I think context identification is the next most useful thing to 
>> add.
>>
>> But if you aren't even getting the 'GPU HANG' message then it sounds 
>> like something is broken with what we already have. So we should fix 
>> that as a first priority. If that message isn't appearing then it 
>> means there was no error capture so adding extra info to the capture 
>> won't help!
>
> The issue I have is that "GPU HANG ecode" messages are always "all 
> zeros". I thought that was because GuC error capture was not there, 
> but maybe it's something else.
Hmm. All zeros including the platform and engine class part or just the 
instdone part?

The instdone value is engine register state and will have been cleared 
before i915 can read it if the reset was handled by GuC. That comes 
under the heading of state we need the new error capture API for. 
However, the context should be correctly identified as should the 
platform/engine class.

Currently, the recommended w/a is to set i915.reset=1 when debugging a 
hang issue. That will disable GuC based resets and fall back to old 
school i915 based (but full GT not engine) resets. And that means that 
i915 will be able to read the engine state prior to the reset happening 
and thus produce a valid error capture / GPU HANG message.
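For anyone trying this, the parameter can be set on the kernel command line or via modprobe configuration before the driver loads; the snippet below uses the standard module-parameter mechanisms, but the hypothetical file name is just an example:

```
# Kernel command line (e.g. in the GRUB config):
#   i915.reset=1

# Or as a modprobe option, applied at the next module load:
echo "options i915 reset=1" | sudo tee /etc/modprobe.d/i915-debug.conf

# Verify the active value after boot:
cat /sys/module/i915/parameters/reset
```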

John.

>
> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-22 21:58               ` John Harrison
@ 2021-12-23 10:23                 ` Tvrtko Ursulin
  2021-12-23 17:35                   ` John Harrison
  0 siblings, 1 reply; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-23 10:23 UTC (permalink / raw)
  To: John Harrison, Matthew Brost; +Cc: Intel-gfx, dri-devel


On 22/12/2021 21:58, John Harrison wrote:
> On 12/22/2021 08:21, Tvrtko Ursulin wrote:
>> On 21/12/2021 22:14, John Harrison wrote:
>>> On 12/21/2021 05:37, Tvrtko Ursulin wrote:
>>>> On 20/12/2021 18:34, John Harrison wrote:
>>>>> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>>>>>> On 17/12/2021 16:22, Matthew Brost wrote:
>>>>>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>>>>>
>>>>>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>>
>>>>>>>>> Log engine resets done by the GuC firmware in the similar way 
>>>>>>>>> it is done
>>>>>>>>> by the execlists backend.
>>>>>>>>>
>>>>>>>>> This way we have notion of where the hangs are before the GuC 
>>>>>>>>> gains
>>>>>>>>> support for proper error capture.
>>>>>>>>
>>>>>>>> Ping - any interest to log this info?
>>>>>>>>
>>>>>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>>>>>> 12:0:00000000".
>>>>>>>>
>>>>>>>
>>>>>>> Yea, this could be helpful. One suggestion below.
>>>>>>>
>>>>>>>> Also, will GuC be reporting the reason for the engine reset at 
>>>>>>>> any point?
>>>>>>>>
>>>>>>>
>>>>>>> We are working on the error state capture, presumably the 
>>>>>>> registers will
>>>>>>> give a clue what caused the hang.
>>>>>>>
>>>>>>> As for the GuC providing a reason, that isn't defined in the 
>>>>>>> interface
>>>>>>> but that is decent idea to provide a hint in G2H what the issue 
>>>>>>> was. Let
>>>>>>> me run that by the i915 GuC developers / GuC firmware team and 
>>>>>>> see what
>>>>>>> they think.
>>>>>>>
>>>>> The GuC does not do any hang analysis. So as far as GuC is 
>>>>> concerned, the reason is pretty much always going to be pre-emption 
>>>>> timeout. There are a few ways the pre-emption itself could be 
>>>>> triggered but basically, if GuC resets an active context then it is 
>>>>> because it did not pre-empt quickly enough when requested.
>>>>>
>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Tvrtko
>>>>>>>>
>>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>> ---
>>>>>>>>>    drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 
>>>>>>>>> +++++++++++-
>>>>>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> index 97311119da6f..51512123dc1a 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> @@ -11,6 +11,7 @@
>>>>>>>>>    #include "gt/intel_context.h"
>>>>>>>>>    #include "gt/intel_engine_pm.h"
>>>>>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>>>>>> +#include "gt/intel_engine_user.h"
>>>>>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>>>>>    #include "gt/intel_gt.h"
>>>>>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>>>>>> intel_guc *guc,
>>>>>>>>>    {
>>>>>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>>>>>> -    struct intel_engine_cs *engine = 
>>>>>>>>> __context_to_physical_engine(ce);
>>>>>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>>>>>        intel_wakeref_t wakeref;
>>>>>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC 
>>>>>>>>> engine reset\n",
>>>>>>>>> + intel_engine_class_repr(engine->class),
>>>>>>>>> +               engine->mask);
>>>>>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>>>>>> +    } else {
>>>>>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>>>>>> engine->name);
>>>>>>>
>>>>>>> Probably include the guc_id of the context too then?
>>>>>>
>>>>>> Is the guc id stable and useful on its own - who would be the user?
>>>>> The GuC id is the only thing that matters when trying to correlate 
>>>>> KMD activity with a GuC log. So while it might not be of any use or 
>>>>> interest to an end user, it is extremely important and useful to a 
>>>>> kernel developer attempting to debug an issue. And that includes 
>>>>> bug reports from end users that are hard to repro given that the 
>>>>> standard error capture will include the GuC log.
>>>>
>>>> On the topic of GuC log - is there a tool in IGT (or will be) which 
>>>> will parse the bit saved in the error capture or how is that 
>>>> supposed to be used?
>>> Nope.
>>>
>>> However, Alan is currently working on supporting the GuC error 
>>> capture mechanism. Prior to sending the reset notification to the 
>>> KMD, the GuC will save a whole bunch of register state to a memory 
>>> buffer and send a notification to the KMD that this is available. 
>>> When we then get the actual reset notification, we need to match the 
>>> two together and include a parsed, human readable version of the 
>>> GuC's capture state buffer in the sysfs error log output.
>>>
>>> The GuC log should not be involved in this process. And note that any 
>>> register dumps in the GuC log are limited in scope and only enabled 
>>> at higher verbosity levels. Whereas, the official state capture is 
>>> based on a register list provided by the KMD and is available 
>>> irrespective of debug CONFIG settings, verbosity levels, etc.
>>
>> Hm why should GuC log not be involved now? I thought earlier you said:
>>
>> """
>> And that includes bug reports from end users that are hard to repro 
>> given that the standard error capture will include the GuC log.
>> """
>>
>> Hence I thought there would be a tool in IGT which would parse the 
>> part saved inside the error capture.
> Different things.
> 
> The GuC log is not involved in capturing hardware register state and 
> reporting that as part of the sysfs error capture that users can read 
> out. The GuC needs to do the state capture for us if it is doing the 
> reset, but it is provided via a dedicated state capture API. There 
> should be no requirement to set GuC log sizes/verbosity levels or to 
> decode and parse the GuC log just to get the engine register state at 
> the time of the hang.
> 
> On the other hand, the GuC log is useful for debugging certain issues 
> and it is included automatically in the sysfs error capture along with 
> all the other hardware and software state that we save.
> 
> There is intended to be a publicly released tool to decode the GuC log 
> into a human readable format. So end users will be able to see what 
> engine scheduling decisions were taken prior to the hang, for example. 
> Unfortunately, that is not yet ready for release for a number of 
> reasons. I don't know whether that would be released as part of the IGT 
> suite or in some other manner.

Okay, it would be handy if it was part of IGT, maybe even 
intel_error_decode being able to use it to show everything interesting 
to kernel developers in one go. Or at least the log parsing tool being 
able to consume raw error capture.

>>>>> Also, note that GuC really resets contexts rather than engines. 
>>>>> What it reports back to i915 on a reset is simply the GuC id of the 
>>>>> context. It is up to i915 to work back from that to determine 
>>>>> engine instances/classes if required. And in the case of a virtual 
>>>>> context, it is impossible to extract the actual instance number. So 
>>>>> your above print about resetting all instances within the virtual 
>>>>> engine mask is incorrect/misleading. The reset would have been 
>>>>> applied to one and only one of those engines. If you really need to 
>>>>> know exactly which engine was poked, you need to look inside the 
>>>>> GuC log.
>>>>
>>>> I think I understood that part. :) It wasn't my intent to imply in 
>>>> the message multiple engines have been reset, but in the case of 
>>>> veng, log the class and mask and the fact there was an engine reset 
>>>> (singular). Clearer message can probably be written.
>>>>
>>>>> However, the follow up point is to ask why you need to report the 
>>>>> exact class/instance? The end user doesn't care about which 
>>>>> specific engine got reset. They only care that their context was 
>>>>> reset. Even a KMD developer doesn't really care unless the concern 
>>>>> is about a hardware bug rather than a software bug.
>>>>
>>>> I was simply aligning both backends to log as similar information as 
>>>> possible. Information is there, just not logged.
>>>>
>>>> Concerning the wider topic, my thinking is end user is mainly 
>>>> interested to know there are any engine resets happening (to tie 
>>>> with the experience of UI/video glitching or whatever). Going for 
>>>> deeper analysis than that is probably beyond the scope of the kernel 
>>>> log and indeed error capture territory.
>>> I would still say that the important information is which context was 
>>> killed not which engine. Sure, knowing the engine is better than 
>>> nothing but if we can report something more useful then why not?
>>
>> Make it so. :)
>>
>>>>> My view is that the current message is indeed woefully 
>>>>> uninformative. However, it is more important to be reporting 
>>>>> context identification than engine instances. So sure, add the 
>>>>> engine instance description but also add something specific to the 
>>>>> ce as well. Ideally (for me) the GuC id and maybe something else 
>>>>> that uniquely identifies the context in KMD land for when not using 
>>>>> GuC?
>>>>
>>>> Not sure we need to go that far at this level, but even if we do it 
>>>> could be a follow up to add new data to both backends. Not sure yet 
>>>> I care enough to drive this. My patch was simply a reaction to 
>>>> noticing there is zero information currently logged while debugging 
>>>> some DG2 hangs.
>>> In terms of just reporting that a reset occurred, we already have the 
>>> 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. The 
>>> ecode is a somewhat bizarre value but it does act as a 'something 
>>> went wrong, your system is not happy' type message. Going beyond 
>>> that, I think context identification is the next most useful thing to 
>>> add.
>>>
>>> But if you aren't even getting the 'GPU HANG' message then it sounds 
>>> like something is broken with what we already have. So we should fix 
>>> that as a first priority. If that message isn't appearing then it 
>>> means there was no error capture so adding extra info to the capture 
>>> won't help!
>>
>> The issue I have is that "GPU HANG ecode" messages are always "all 
>> zeros". I thought that was because GuC error capture was not there, 
>> but maybe it's something else.
> Hmm. All zeros including the platform and engine class part or just the 
> instdone part?

Class and instdone - all we were seeing was "[drm] GPU HANG: ecode
12:0:00000000".

Even on the CI run for this patch I see in the logs:

<5>[  157.243472] i915 0000:00:02.0: [drm] rcs0 GuC engine reset
<6>[  157.254568] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000

So there seem to be circumstances where the GPU HANG line somehow fails 
to figure out the engine class.

> The instdone value is engine register state and will have been cleared 
> before i915 can read it if the reset was handled by GuC. That comes 
> under the heading of state we need the new error capture API for. 
> However, the context should be correctly identified as should the 
> platform/engine class.
> 
> Currently, the recommended w/a is to set i915.reset=1 when debugging a 
> hang issue. That will disable GuC based resets and fall back to old 
> school i915 based (but full GT not engine) resets. And that means that 
> i915 will be able to read the engine state prior to the reset happening 
> and thus produce a valid error capture / GPU HANG message.

Ah.. that's the piece of the puzzle I was missing. Perhaps it should 
even be the default until error capture works.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-23 10:23                 ` Tvrtko Ursulin
@ 2021-12-23 17:35                   ` John Harrison
  2021-12-24 11:57                     ` Tvrtko Ursulin
  0 siblings, 1 reply; 20+ messages in thread
From: John Harrison @ 2021-12-23 17:35 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost; +Cc: Intel-gfx, dri-devel

On 12/23/2021 02:23, Tvrtko Ursulin wrote:
> On 22/12/2021 21:58, John Harrison wrote:
>> On 12/22/2021 08:21, Tvrtko Ursulin wrote:
>>> On 21/12/2021 22:14, John Harrison wrote:
>>>> On 12/21/2021 05:37, Tvrtko Ursulin wrote:
>>>>> On 20/12/2021 18:34, John Harrison wrote:
>>>>>> On 12/20/2021 07:00, Tvrtko Ursulin wrote:
>>>>>>> On 17/12/2021 16:22, Matthew Brost wrote:
>>>>>>>> On Fri, Dec 17, 2021 at 12:15:53PM +0000, Tvrtko Ursulin wrote:
>>>>>>>>>
>>>>>>>>> On 14/12/2021 15:07, Tvrtko Ursulin wrote:
>>>>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>>>
>>>>>>>>>> Log engine resets done by the GuC firmware in the similar way 
>>>>>>>>>> it is done
>>>>>>>>>> by the execlists backend.
>>>>>>>>>>
>>>>>>>>>> This way we have notion of where the hangs are before the GuC 
>>>>>>>>>> gains
>>>>>>>>>> support for proper error capture.
>>>>>>>>>
>>>>>>>>> Ping - any interest to log this info?
>>>>>>>>>
>>>>>>>>> All there currently is a non-descriptive "[drm] GPU HANG: ecode
>>>>>>>>> 12:0:00000000".
>>>>>>>>>
>>>>>>>>
>>>>>>>> Yea, this could be helpful. One suggestion below.
>>>>>>>>
>>>>>>>>> Also, will GuC be reporting the reason for the engine reset at 
>>>>>>>>> any point?
>>>>>>>>>
>>>>>>>>
>>>>>>>> We are working on the error state capture, presumably the 
>>>>>>>> registers will
>>>>>>>> give a clue what caused the hang.
>>>>>>>>
>>>>>>>> As for the GuC providing a reason, that isn't defined in the 
>>>>>>>> interface
>>>>>>>> but that is decent idea to provide a hint in G2H what the issue 
>>>>>>>> was. Let
>>>>>>>> me run that by the i915 GuC developers / GuC firmware team and 
>>>>>>>> see what
>>>>>>>> they think.
>>>>>>>>
>>>>>> The GuC does not do any hang analysis. So as far as GuC is 
>>>>>> concerned, the reason is pretty much always going to be 
>>>>>> pre-emption timeout. There are a few ways the pre-emption itself 
>>>>>> could be triggered but basically, if GuC resets an active context 
>>>>>> then it is because it did not pre-empt quickly enough when 
>>>>>> requested.
>>>>>>
>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Tvrtko
>>>>>>>>>
>>>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>>> Cc: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>> ---
>>>>>>>>>> drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 12 
>>>>>>>>>> +++++++++++-
>>>>>>>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>>>>>>>
>>>>>>>>>> diff --git 
>>>>>>>>>> a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> index 97311119da6f..51512123dc1a 100644
>>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> @@ -11,6 +11,7 @@
>>>>>>>>>>    #include "gt/intel_context.h"
>>>>>>>>>>    #include "gt/intel_engine_pm.h"
>>>>>>>>>>    #include "gt/intel_engine_heartbeat.h"
>>>>>>>>>> +#include "gt/intel_engine_user.h"
>>>>>>>>>>    #include "gt/intel_gpu_commands.h"
>>>>>>>>>>    #include "gt/intel_gt.h"
>>>>>>>>>>    #include "gt/intel_gt_clock_utils.h"
>>>>>>>>>> @@ -3934,9 +3935,18 @@ static void capture_error_state(struct 
>>>>>>>>>> intel_guc *guc,
>>>>>>>>>>    {
>>>>>>>>>>        struct intel_gt *gt = guc_to_gt(guc);
>>>>>>>>>>        struct drm_i915_private *i915 = gt->i915;
>>>>>>>>>> -    struct intel_engine_cs *engine = 
>>>>>>>>>> __context_to_physical_engine(ce);
>>>>>>>>>> +    struct intel_engine_cs *engine = ce->engine;
>>>>>>>>>>        intel_wakeref_t wakeref;
>>>>>>>>>> +    if (intel_engine_is_virtual(engine)) {
>>>>>>>>>> +        drm_notice(&i915->drm, "%s class, engines 0x%x; GuC 
>>>>>>>>>> engine reset\n",
>>>>>>>>>> + intel_engine_class_repr(engine->class),
>>>>>>>>>> +               engine->mask);
>>>>>>>>>> +        engine = guc_virtual_get_sibling(engine, 0);
>>>>>>>>>> +    } else {
>>>>>>>>>> +        drm_notice(&i915->drm, "%s GuC engine reset\n", 
>>>>>>>>>> engine->name);
>>>>>>>>
>>>>>>>> Probably include the guc_id of the context too then?
>>>>>>>
>>>>>>> Is the guc id stable and useful on its own - who would be the user?
>>>>>> The GuC id is the only thing that matters when trying to 
>>>>>> correlate KMD activity with a GuC log. So while it might not be 
>>>>>> of any use or interest to an end user, it is extremely important 
>>>>>> and useful to a kernel developer attempting to debug an issue. 
>>>>>> And that includes bug reports from end users that are hard to 
>>>>>> repro given that the standard error capture will include the GuC 
>>>>>> log.
>>>>>
>>>>> On the topic of GuC log - is there a tool in IGT (or will be) 
>>>>> which will parse the bit saved in the error capture or how is that 
>>>>> supposed to be used?
>>>> Nope.
>>>>
>>>> However, Alan is currently working on supporting the GuC error 
>>>> capture mechanism. Prior to sending the reset notification to the 
>>>> KMD, the GuC will save a whole bunch of register state to a memory 
>>>> buffer and send a notification to the KMD that this is available. 
>>>> When we then get the actual reset notification, we need to match 
>>>> the two together and include a parsed, human readable version of 
>>>> the GuC's capture state buffer in the sysfs error log output.
>>>>
>>>> The GuC log should not be involved in this process. And note that 
>>>> any register dumps in the GuC log are limited in scope and only 
>>>> enabled at higher verbosity levels. Whereas, the official state 
>>>> capture is based on a register list provided by the KMD and is 
>>>> available irrespective of debug CONFIG settings, verbosity levels, 
>>>> etc.
>>>
>>> Hm why should GuC log not be involved now? I thought earlier you said:
>>>
>>> """
>>> And that includes bug reports from end users that are hard to repro 
>>> given that the standard error capture will include the GuC log.
>>> """
>>>
>>> Hence I thought there would be a tool in IGT which would parse the 
>>> part saved inside the error capture.
>> Different things.
>>
>> The GuC log is not involved in capturing hardware register state and 
>> reporting that as part of the sysfs error capture that users can 
>> read out. The GuC needs to do the state capture for us if it is doing 
>> the reset, but it is provided via a dedicated state capture API. 
>> There should be no requirement to set GuC log sizes/verbosity levels 
>> or to decode and parse the GuC log just to get the engine register 
>> state at the time of the hang.
>>
>> On the other hand, the GuC log is useful for debugging certain issues 
>> and it is included automatically in the sysfs error capture along 
>> with all the other hardware and software state that we save.
>>
>> There is intended to be a publicly released tool to decode the GuC 
>> log into a human readable format. So end users will be able to see 
>> what engine scheduling decisions were taken prior to the hang, for 
>> example. Unfortunately, that is not yet ready for release for a 
>> number of reasons. I don't know whether that would be released as 
>> part of the IGT suite or in some other manner.
>
> Okay, it would be handy if it was part of IGT, maybe even 
> intel_error_decode being able to use it to show everything interesting 
> to kernel developers in one go. Or at least the log parsing tool being 
> able to consume raw error capture.
I have some wrapper scripts which can do things like take a raw error 
capture, run intel_error_decode, extract the GuC log portion, convert it 
to the binary format the decoder tool expects, extract the GuC firmware 
version from the capture to give to the decoder tool and finally run the 
decoder tool. The intention is that all of the helper scripts will also 
be part of the log decoder release.

If you want to try it all out now, see the GuC log decoder wiki page 
(internal developers only).

>
>>>>>> Also, note that GuC really resets contexts rather than engines. 
>>>>>> What it reports back to i915 on a reset is simply the GuC id of 
>>>>>> the context. It is up to i915 to work back from that to determine 
>>>>>> engine instances/classes if required. And in the case of a 
>>>>>> virtual context, it is impossible to extract the actual instance 
>>>>>> number. So your above print about resetting all instances within 
>>>>>> the virtual engine mask is incorrect/misleading. The reset would 
>>>>>> have been applied to one and only one of those engines. If you 
>>>>>> really need to know exactly which engine was poked, you need to 
>>>>>> look inside the GuC log.
>>>>>
>>>>> I think I understood that part. :) It wasn't my intent to imply in 
>>>>> the message multiple engines have been reset, but in the case of 
>>>>> veng, log the class and mask and the fact there was an engine 
>>>>> reset (singular). Clearer message can probably be written.
>>>>>
>>>>>> However, the follow up point is to ask why you need to report the 
>>>>>> exact class/instance? The end user doesn't care about which 
>>>>>> specific engine got reset. They only care that their context was 
>>>>>> reset. Even a KMD developer doesn't really care unless the 
>>>>>> concern is about a hardware bug rather than a software bug.
>>>>>
>>>>> I was simply aligning both backends to log as similar information 
>>>>> as possible. Information is there, just not logged.
>>>>>
>>>>> Concerning the wider topic, my thinking is the end user is mainly 
>>>>> interested in knowing whether any engine resets are happening (to 
>>>>> tie in with the experience of UI/video glitching or whatever). 
>>>>> Going for deeper analysis than that is probably beyond the scope 
>>>>> of the kernel log and is indeed error capture territory.
>>>> I would still say that the important information is which context 
>>>> was killed, not which engine. Sure, knowing the engine is better 
>>>> than nothing, but if we can report something more useful then why not?
>>>
>>> Make it so. :)
>>>
>>>>>> My view is that the current message is indeed woefully 
>>>>>> uninformative. However, it is more important to be reporting 
>>>>>> context identification than engine instances. So sure, add the 
>>>>>> engine instance description but also add something specific to 
>>>>>> the ce as well. Ideally (for me) the GuC id and maybe something 
>>>>>> else that uniquely identifies the context in KMD land for when 
>>>>>> not using GuC?
>>>>>
>>>>> Not sure we need to go that far at this level, but even if we do 
>>>>> it could be a follow up to add new data to both backends. Not sure 
>>>>> yet I care enough to drive this. My patch was simply a reaction to 
>>>>> noticing there is zero information currently logged while 
>>>>> debugging some DG2 hangs.
>>>> In terms of just reporting that a reset occurred, we already have 
>>>> the 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. 
>>>> The ecode is a somewhat bizarre value but it does act as a 
>>>> 'something went wrong, your system is not happy' type message. 
>>>> Going beyond that, I think context identification is the next most 
>>>> useful thing to add.
>>>>
>>>> But if you aren't even getting the 'GPU HANG' message then it 
>>>> sounds like something is broken with what we already have. So we 
>>>> should fix that as a first priority. If that message isn't 
>>>> appearing then it means there was no error capture so adding extra 
>>>> info to the capture won't help!
>>>
>>> The issue I have is that "GPU HANG ecode" messages are always "all 
>>> zeros". I thought that was because GuC error capture was not there, 
>>> but maybe it's something else.
>> Hmm. All zeros including the platform and engine class part or just 
>> the instdone part?
>
> Class and instdone - all we were seeing was "[drm] GPU HANG: ecode
> 12:0:00000000".
>
> Even on the CI run for this patch I see in the logs:
>
> <5>[  157.243472] i915 0000:00:02.0: [drm] rcs0 GuC engine reset
> <6>[  157.254568] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000
>
> So there seem to be circumstances where the GPU HANG line somehow fails 
> to figure out the engine class.
Class zero is render. So it is correct.

>
>> The instdone value is engine register state and will have been 
>> cleared before i915 can read it if the reset was handled by GuC. That 
>> comes under the heading of state we need the new error capture API 
>> for. However, the context should be correctly identified as should 
>> the platform/engine class.
>>
>> Currently, the recommended w/a is to set i915.reset=1 when debugging 
>> a hang issue. That will disable GuC based resets and fall back to old 
>> school i915 based (but full GT not engine) resets. And that means 
>> that i915 will be able to read the engine state prior to the reset 
>> happening and thus produce a valid error capture / GPU HANG message.
>
> Ah.. that's the piece of the puzzle I was missing. Perhaps it should 
> even be the default until error capture works.
The decision was taken that per-engine resets are of real use to end users 
but valid register state in an error capture is only of use to i915 
developers. Therefore, we can take the hit of reduced debuggability. Plus, 
you do get a lot of that information in the GuC log (as debug prints, 
essentially) if you have the verbosity set suitably high. So it is not 
impossible to get the information out even with GuC based engine resets. 
But the reset=1 fallback is certainly the easiest debug option.
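For reference, the reset=1 workaround can be made persistent with a modprobe fragment (a sketch; the file name is arbitrary, only the `options` line matters):

```shell
# /etc/modprobe.d/i915-debug.conf  (any file name under modprobe.d works)
# Force i915-driven full-GT resets instead of GuC engine resets, so the
# driver can read engine register state before the reset wipes it.
options i915 reset=1
```

The same effect can be had for a single boot by adding `i915.reset=1` to the kernel command line.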

John.


>
> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915/guc: Log engine resets
  2021-12-23 17:35                   ` John Harrison
@ 2021-12-24 11:57                     ` Tvrtko Ursulin
  0 siblings, 0 replies; 20+ messages in thread
From: Tvrtko Ursulin @ 2021-12-24 11:57 UTC (permalink / raw)
  To: John Harrison, Matthew Brost; +Cc: Intel-gfx, dri-devel


On 23/12/2021 17:35, John Harrison wrote:

[snip]

>>> On the other hand, the GuC log is useful for debugging certain issues 
>>> and it is included automatically in the sysfs error capture along 
>>> with all the other hardware and software state that we save.
>>>
>>> There is intended to be a publicly released tool to decode the GuC 
>>> log into a human readable format. So end users will be able to see 
>>> what engine scheduling decisions were taken prior to the hang, for 
>>> example. Unfortunately, that is not yet ready for release for a 
>>> number of reasons. I don't know whether that would be released as 
>>> part of the IGT suite or in some other manner.
>>
>> Okay, it would be handy if it was part of IGT, maybe even 
>> intel_error_decode being able to use it to show everything interesting 
>> to kernel developers in one go. Or at least the log parsing tool being 
>> able to consume raw error capture.
> I have some wrapper scripts which can do things like take a raw error 
> capture, run intel_error_decode, extract the GuC log portion, convert it 
> to the binary format the decoder tool expects, extract the GuC firmware 
> version from the capture to give to the decoder tool and finally run the 
> decoder tool. The intention is that all of the helper scripts will also 
> be part of the log decoder release.
> 
> If you want to try it all out now, see the GuC log decoder wiki page 
> (internal developers only).

Thanks, I'll see after the holiday break where we are with a certain 
project, in terms of whether we are still hitting hangs.

[snip]

>>>>>>> My view is that the current message is indeed woefully 
>>>>>>> uninformative. However, it is more important to be reporting 
>>>>>>> context identification than engine instances. So sure, add the 
>>>>>>> engine instance description but also add something specific to 
>>>>>>> the ce as well. Ideally (for me) the GuC id and maybe something 
>>>>>>> else that uniquely identifies the context in KMD land for when 
>>>>>>> not using GuC?
>>>>>>
>>>>>> Not sure we need to go that far at this level, but even if we do 
>>>>>> it could be a follow up to add new data to both backends. Not sure 
>>>>>> yet I care enough to drive this. My patch was simply a reaction to 
>>>>>> noticing there is zero information currently logged while 
>>>>>> debugging some DG2 hangs.
>>>>> In terms of just reporting that a reset occurred, we already have 
>>>>> the 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. 
>>>>> The ecode is a somewhat bizarre value but it does act as a 
>>>>> 'something went wrong, your system is not happy' type message. 
>>>>> Going beyond that, I think context identification is the next most 
>>>>> useful thing to add.
>>>>>
>>>>> But if you aren't even getting the 'GPU HANG' message then it 
>>>>> sounds like something is broken with what we already have. So we 
>>>>> should fix that as a first priority. If that message isn't 
>>>>> appearing then it means there was no error capture so adding extra 
>>>>> info to the capture won't help!
>>>>
>>>> The issue I have is that "GPU HANG ecode" messages are always "all 
>>>> zeros". I thought that was because GuC error capture was not there, 
>>>> but maybe it's something else.
>>> Hmm. All zeros including the platform and engine class part or just 
>>> the instdone part?
>>
>> Class and instdone - all we were seeing was "[drm] GPU HANG: ecode
>> 12:0:00000000".
>>
>> Even on the CI run for this patch I see in the logs:
>>
>> <5>[  157.243472] i915 0000:00:02.0: [drm] rcs0 GuC engine reset
>> <6>[  157.254568] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000
>>
>> So there seem to be circumstances where the GPU HANG line somehow fails 
>> to figure out the engine class.
> Class zero is render. So it is correct.

It's a bitmask of hung classes, so it is not quite correct; something is 
missing:

		for (cs = gt->engine; cs; cs = cs->next) {
			if (cs->hung) {
				hung_classes |= BIT(cs->engine->uabi_class);

>>> The instdone value is engine register state and will have been 
>>> cleared before i915 can read it if the reset was handled by GuC. That 
>>> comes under the heading of state we need the new error capture API 
>>> for. However, the context should be correctly identified as should 
>>> the platform/engine class.
>>>
>>> Currently, the recommended w/a is to set i915.reset=1 when debugging 
>>> a hang issue. That will disable GuC based resets and fall back to old 
>>> school i915 based (but full GT not engine) resets. And that means 
>>> that i915 will be able to read the engine state prior to the reset 
>>> happening and thus produce a valid error capture / GPU HANG message.
>>
>> Ah.. that's the piece of the puzzle I was missing. Perhaps it should 
>> even be the default until error capture works.
> The decision was taken that per-engine resets are of real use to end users 
> but valid register state in an error capture is only of use to i915 
> developers. Therefore, we can take the hit of reduced debuggability. Plus, 
> you do get a lot of that information in the GuC log (as debug prints, 
> essentially) if you have the verbosity set suitably high. So it is not 
> impossible to get the information out even with GuC based engine resets. 
> But the reset=1 fallback is certainly the easiest debug option.

It's tricky: error capture is useful for developers, but precisely when debugging issues reported by end users. And DG1 is available on the shelves to buy. You say the data is available in GuC logs, but there is no upstream tool to read it. Luckily DG1 is behind the force-probe flag, so we get away with it for now.

Regards,

Tvrtko


end of thread, other threads:[~2021-12-24 11:57 UTC | newest]

Thread overview: 20+ messages
2021-12-14 15:07 [PATCH] drm/i915/guc: Log engine resets Tvrtko Ursulin
2021-12-14 15:07 ` [Intel-gfx] " Tvrtko Ursulin
2021-12-14 16:33 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2021-12-14 22:25 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2021-12-17 12:15 ` [Intel-gfx] [PATCH] " Tvrtko Ursulin
2021-12-17 12:15   ` Tvrtko Ursulin
2021-12-17 16:22   ` Matthew Brost
2021-12-17 16:22     ` Matthew Brost
2021-12-20 15:00     ` Tvrtko Ursulin
2021-12-20 15:00       ` Tvrtko Ursulin
2021-12-20 17:55       ` Matthew Brost
2021-12-20 17:55         ` Matthew Brost
2021-12-20 18:34       ` John Harrison
2021-12-21 13:37         ` Tvrtko Ursulin
2021-12-21 22:14           ` John Harrison
2021-12-22 16:21             ` Tvrtko Ursulin
2021-12-22 21:58               ` John Harrison
2021-12-23 10:23                 ` Tvrtko Ursulin
2021-12-23 17:35                   ` John Harrison
2021-12-24 11:57                     ` Tvrtko Ursulin
