* [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-01 23:50 ` John.C.Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John.C.Harrison @ 2022-11-01 23:50 UTC (permalink / raw)
  To: Intel-GFX; +Cc: John Harrison, DRI-Devel

From: John Harrison <John.C.Harrison@Intel.com>

At the end of each test, IGT does a drop caches call via sysfs with
special flags set. One of the possible paths waits for idle with an
infinite timeout. That causes problems for debugging issues when CI
catches a "can't go idle" test failure. Best case, the CI system times
out (after 90s), attempts a bunch of state dump actions and then
reboots the system to recover it. Worst case, the CI system can't do
anything at all and then times out (after 1000s) and simply reboots.
Sometimes a serial port log of dmesg might be available, sometimes not.

So rather than making life hard for ourselves, change the timeout to
be 10s rather than infinite. Also, trigger the standard
wedge/reset/recover sequence so that testing can continue with a
working system (if possible).

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index ae987e92251dd..9d916fbbfc27c 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
 		  DROP_RESET_ACTIVE | \
 		  DROP_RESET_SEQNO | \
 		  DROP_RCU)
+
+#define DROP_IDLE_TIMEOUT	(HZ * 10)
+
 static int
 i915_drop_caches_get(void *data, u64 *val)
 {
@@ -661,7 +664,9 @@ gt_drop_caches(struct intel_gt *gt, u64 val)
 		intel_gt_retire_requests(gt);
 
 	if (val & (DROP_IDLE | DROP_ACTIVE)) {
-		ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
+		ret = intel_gt_wait_for_idle(gt, DROP_IDLE_TIMEOUT);
+		if (ret == -ETIME)
+			intel_gt_set_wedged(gt);
 		if (ret)
 			return ret;
 	}
-- 
2.37.3
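
For context, the drop-caches call the commit message describes is a write
to the i915_drop_caches entry under debugfs. A minimal sketch of how a test
harness might drive it (illustrative only -- the flag mask below is
hypothetical, the real DROP_* bit values live in i915_debugfs.c):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/sys/kernel/debug/dri/0/i915_drop_caches";
        const char *mask = "0x5f";      /* hypothetical DROP_* combination */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* With an infinite kernel-side wait, this write() is where a
         * "can't go idle" failure used to hang forever. */
        if (write(fd, mask, strlen(mask)) < 0)
                perror("write");
        close(fd);
        return 0;
}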


* [Intel-gfx] ✗ Fi.CI.DOCS: warning for drm/i915: Don't wait forever in drop_caches
  2022-11-01 23:50 ` [Intel-gfx] " John.C.Harrison
@ 2022-11-02  0:10 ` Patchwork
  -1 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2022-11-02  0:10 UTC (permalink / raw)
  To: john.c.harrison; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Don't wait forever in drop_caches
URL   : https://patchwork.freedesktop.org/series/110395/
State : warning

== Summary ==

Error: make htmldocs had i915 warnings
./drivers/gpu/drm/i915/i915_perf_types.h:319: warning: Function parameter or member 'lock' not described in 'i915_perf_stream'
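
Warnings of this kind are resolved by documenting the named member in the
struct's kerneldoc comment in i915_perf_types.h; roughly like the below
(sketch only, the @lock wording is made up):

/**
 * struct i915_perf_stream - state for a single open stream FD
 * ...
 * @lock: Lock associated with operations on the stream
 */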




* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Don't wait forever in drop_caches
  2022-11-01 23:50 ` [Intel-gfx] " John.C.Harrison
@ 2022-11-02  0:29 ` Patchwork
  -1 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2022-11-02  0:29 UTC (permalink / raw)
  To: john.c.harrison; +Cc: intel-gfx


== Series Details ==

Series: drm/i915: Don't wait forever in drop_caches
URL   : https://patchwork.freedesktop.org/series/110395/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12329 -> Patchwork_110395v1
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/index.html

Participating hosts (40 -> 26)
------------------------------

  Missing    (14): bat-dg2-8 bat-adlm-1 fi-icl-u2 bat-dg2-9 bat-adlp-6 bat-adlp-4 fi-hsw-4770 bat-adln-1 fi-pnv-d510 bat-rplp-1 bat-rpls-1 bat-rpls-2 bat-dg2-11 bat-jsl-1 

Known issues
------------

  Here are the changes found in Patchwork_110395v1 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@execlists:
    - fi-bsw-nick:        [PASS][1] -> [INCOMPLETE][2] ([i915#6972])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/fi-bsw-nick/igt@i915_selftest@live@execlists.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/fi-bsw-nick/igt@i915_selftest@live@execlists.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic-transitions-varying-size:
    - fi-bsw-kefka:       [PASS][3] -> [FAIL][4] ([i915#6298])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic-transitions-varying-size.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic-transitions-varying-size.html

  * igt@runner@aborted:
    - fi-bsw-nick:        NOTRUN -> [FAIL][5] ([fdo#109271] / [i915#4312])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/fi-bsw-nick/igt@runner@aborted.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#6298]: https://gitlab.freedesktop.org/drm/intel/issues/6298
  [i915#6972]: https://gitlab.freedesktop.org/drm/intel/issues/6972


Build changes
-------------

  * Linux: CI_DRM_12329 -> Patchwork_110395v1

  CI-20190529: 20190529
  CI_DRM_12329: aeb0d740b4011006d27dc0ac4d5c2ae7c6da4066 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7037: 6a25c53624502fc85cec3cf0a0bf244a2346e30f @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_110395v1: aeb0d740b4011006d27dc0ac4d5c2ae7c6da4066 @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

f43794f44a06 drm/i915: Don't wait forever in drop_caches

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/index.html


* [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915: Don't wait forever in drop_caches
  2022-11-01 23:50 ` [Intel-gfx] " John.C.Harrison
@ 2022-11-02  9:13 ` Patchwork
  -1 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2022-11-02  9:13 UTC (permalink / raw)
  To: john.c.harrison; +Cc: intel-gfx


== Series Details ==

Series: drm/i915: Don't wait forever in drop_caches
URL   : https://patchwork.freedesktop.org/series/110395/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_12329_full -> Patchwork_110395v1_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_110395v1_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_110395v1_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (9 -> 9)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_110395v1_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_schedule@deep@vcs0:
    - shard-skl:          NOTRUN -> [INCOMPLETE][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl4/igt@gem_exec_schedule@deep@vcs0.html

  
Known issues
------------

  Here are the changes found in Patchwork_110395v1_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_balancer@parallel-contexts:
    - shard-iclb:         [PASS][2] -> [SKIP][3] ([i915#4525]) +1 similar issue
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb2/igt@gem_exec_balancer@parallel-contexts.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb7/igt@gem_exec_balancer@parallel-contexts.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          [PASS][4] -> [FAIL][5] ([i915#2842]) +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-glk5/igt@gem_exec_fair@basic-throttle@rcs0.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-glk3/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_pxp@create-protected-buffer:
    - shard-tglb:         NOTRUN -> [SKIP][6] ([i915#4270])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@gem_pxp@create-protected-buffer.html

  * igt@gem_userptr_blits@dmabuf-unsync:
    - shard-tglb:         NOTRUN -> [SKIP][7] ([i915#3297])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@gem_userptr_blits@dmabuf-unsync.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-apl:          [PASS][8] -> [DMESG-WARN][9] ([i915#5566] / [i915#716])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl1/igt@gen9_exec_parse@allowed-single.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl6/igt@gen9_exec_parse@allowed-single.html

  * igt@gen9_exec_parse@batch-without-end:
    - shard-tglb:         NOTRUN -> [SKIP][10] ([i915#2527] / [i915#2856])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@gen9_exec_parse@batch-without-end.html

  * igt@i915_module_load@resize-bar:
    - shard-tglb:         NOTRUN -> [SKIP][11] ([i915#6412])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@i915_module_load@resize-bar.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-iclb:         [PASS][12] -> [SKIP][13] ([i915#4281])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb8/igt@i915_pm_dc@dc9-dpms.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb3/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rps@engine-order:
    - shard-apl:          [PASS][14] -> [FAIL][15] ([i915#6537])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl1/igt@i915_pm_rps@engine-order.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl6/igt@i915_pm_rps@engine-order.html

  * igt@i915_query@hwconfig_table:
    - shard-tglb:         NOTRUN -> [SKIP][16] ([i915#6245])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@i915_query@hwconfig_table.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][17] ([i915#5286]) +1 similar issue
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-90:
    - shard-tglb:         NOTRUN -> [SKIP][18] ([fdo#111615])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_big_fb@yf-tiled-64bpp-rotate-90.html

  * igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc:
    - shard-skl:          NOTRUN -> [SKIP][19] ([fdo#109271] / [i915#3886]) +2 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl10/igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-a-crc-primary-basic-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][20] ([fdo#109271] / [i915#3886]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl1/igt@kms_ccs@pipe-a-crc-primary-basic-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][21] ([i915#3689]) +2 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_ccs@pipe-c-crc-sprite-planes-basic-y_tiled_ccs.html

  * igt@kms_chamelium@hdmi-crc-multiple:
    - shard-skl:          NOTRUN -> [SKIP][22] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl10/igt@kms_chamelium@hdmi-crc-multiple.html

  * igt@kms_chamelium@hdmi-hpd-storm:
    - shard-apl:          NOTRUN -> [SKIP][23] ([fdo#109271] / [fdo#111827])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl1/igt@kms_chamelium@hdmi-hpd-storm.html

  * igt@kms_chamelium@vga-frame-dump:
    - shard-tglb:         NOTRUN -> [SKIP][24] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_chamelium@vga-frame-dump.html

  * igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic:
    - shard-skl:          [PASS][25] -> [FAIL][26] ([i915#2346])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-busy-crc-atomic.html

  * igt@kms_dp_tiled_display@basic-test-pattern:
    - shard-tglb:         NOTRUN -> [SKIP][27] ([i915#426])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_dp_tiled_display@basic-test-pattern.html

  * igt@kms_flip@2x-nonexisting-fb:
    - shard-apl:          NOTRUN -> [SKIP][28] ([fdo#109271]) +20 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl1/igt@kms_flip@2x-nonexisting-fb.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-edp1:
    - shard-skl:          NOTRUN -> [INCOMPLETE][29] ([i915#6614])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl10/igt@kms_flip@flip-vs-suspend-interruptible@a-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode:
    - shard-iclb:         NOTRUN -> [SKIP][30] ([i915#2672]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb3/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-upscaling@pipe-a-valid-mode:
    - shard-iclb:         NOTRUN -> [SKIP][31] ([i915#2587] / [i915#2672])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb6/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode:
    - shard-iclb:         [PASS][32] -> [SKIP][33] ([i915#3555])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode:
    - shard-iclb:         NOTRUN -> [SKIP][34] ([i915#2672] / [i915#3555])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb7/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-skl:          NOTRUN -> [SKIP][35] ([fdo#109271]) +69 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl10/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-move:
    - shard-tglb:         NOTRUN -> [SKIP][36] ([fdo#109280] / [fdo#111825]) +2 similar issues
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcpsr-modesetfrombusy:
    - shard-tglb:         NOTRUN -> [SKIP][37] ([i915#6497]) +2 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_frontbuffer_tracking@fbcpsr-modesetfrombusy.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-c-edp-1:
    - shard-tglb:         NOTRUN -> [SKIP][38] ([i915#5235]) +3 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-c-edp-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-a-edp-1:
    - shard-iclb:         [PASS][39] -> [SKIP][40] ([i915#5235]) +5 similar issues
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-a-edp-1.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-a-edp-1.html

  * igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-sf:
    - shard-tglb:         NOTRUN -> [SKIP][41] ([i915#2920])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_psr2_sf@cursor-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
    - shard-apl:          NOTRUN -> [SKIP][42] ([fdo#109271] / [i915#658])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl1/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html

  * igt@kms_psr2_su@frontbuffer-xrgb8888:
    - shard-iclb:         NOTRUN -> [SKIP][43] ([fdo#109642] / [fdo#111068] / [i915#658])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb6/igt@kms_psr2_su@frontbuffer-xrgb8888.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-tglb:         NOTRUN -> [SKIP][44] ([i915#7037])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@psr2_cursor_mmap_gtt:
    - shard-iclb:         [PASS][45] -> [SKIP][46] ([fdo#109441]) +2 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_gtt.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb7/igt@kms_psr@psr2_cursor_mmap_gtt.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([i915#3555]) +1 similar issue
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-tglb:         NOTRUN -> [SKIP][48] ([fdo#109309])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb1/igt@kms_tv_load_detect@load-detect.html

  * igt@perf_pmu@interrupts:
    - shard-skl:          [PASS][49] -> [FAIL][50] ([i915#7318])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl7/igt@perf_pmu@interrupts.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl6/igt@perf_pmu@interrupts.html

  * igt@syncobj_timeline@wait-all-for-submit-delayed-submit:
    - shard-skl:          [PASS][51] -> [DMESG-WARN][52] ([i915#1982])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl6/igt@syncobj_timeline@wait-all-for-submit-delayed-submit.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl9/igt@syncobj_timeline@wait-all-for-submit-delayed-submit.html

  * igt@sysfs_clients@split-25:
    - shard-skl:          NOTRUN -> [SKIP][53] ([fdo#109271] / [i915#2994])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl10/igt@sysfs_clients@split-25.html

  
#### Possible fixes ####

  * igt@gem_exec_balancer@parallel:
    - shard-iclb:         [SKIP][54] ([i915#4525]) -> [PASS][55] +1 similar issue
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@gem_exec_balancer@parallel.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@gem_exec_balancer@parallel.html

  * igt@gem_huc_copy@huc-copy:
    - shard-tglb:         [SKIP][56] ([i915#2190]) -> [PASS][57]
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-tglb6/igt@gem_huc_copy@huc-copy.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-tglb3/igt@gem_huc_copy@huc-copy.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-iclb:         [FAIL][58] ([i915#3989] / [i915#454]) -> [PASS][59]
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rc6_residency@rc6-idle@vcs0:
    - shard-skl:          [WARN][60] ([i915#1804]) -> [PASS][61]
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl6/igt@i915_pm_rc6_residency@rc6-idle@vcs0.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl6/igt@i915_pm_rc6_residency@rc6-idle@vcs0.html

  * igt@kms_cursor_crc@cursor-onscreen-256x85@pipe-b-hdmi-a-1:
    - shard-glk:          [FAIL][62] -> [PASS][63]
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-glk5/igt@kms_cursor_crc@cursor-onscreen-256x85@pipe-b-hdmi-a-1.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-glk3/igt@kms_cursor_crc@cursor-onscreen-256x85@pipe-b-hdmi-a-1.html

  * igt@kms_cursor_crc@cursor-suspend@pipe-c-dp-1:
    - shard-apl:          [DMESG-WARN][64] ([i915#180]) -> [PASS][65]
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl3/igt@kms_cursor_crc@cursor-suspend@pipe-c-dp-1.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl1/igt@kms_cursor_crc@cursor-suspend@pipe-c-dp-1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1:
    - shard-skl:          [FAIL][66] ([i915#79]) -> [PASS][67]
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl6/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render:
    - shard-skl:          [DMESG-WARN][68] ([i915#1982]) -> [PASS][69]
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-c-edp-1:
    - shard-iclb:         [SKIP][70] ([i915#5235]) -> [PASS][71] +2 similar issues
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb2/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-c-edp-1.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb6/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-c-edp-1.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         [SKIP][72] ([fdo#109441]) -> [PASS][73] +1 similar issue
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@kms_psr@psr2_cursor_render.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@kms_psr@psr2_cursor_render.html

  
#### Warnings ####

  * igt@gem_exec_balancer@parallel-ordering:
    - shard-iclb:         [SKIP][74] ([i915#4525]) -> [FAIL][75] ([i915#6117])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb7/igt@gem_exec_balancer@parallel-ordering.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb1/igt@gem_exec_balancer@parallel-ordering.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-skl:          [FAIL][76] ([i915#454]) -> [FAIL][77] ([i915#3989] / [i915#454]) +1 similar issue
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl4/igt@i915_pm_dc@dc6-dpms.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl1/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_pm_rc6_residency@rc6-idle@vcs0:
    - shard-iclb:         [WARN][78] ([i915#2684]) -> [FAIL][79] ([i915#2684])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb5/igt@i915_pm_rc6_residency@rc6-idle@vcs0.html
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb8/igt@i915_pm_rc6_residency@rc6-idle@vcs0.html

  * igt@i915_pm_sseu@full-enable:
    - shard-skl:          [FAIL][80] ([i915#7084]) -> [FAIL][81] ([i915#3524])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-skl6/igt@i915_pm_sseu@full-enable.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-skl6/igt@i915_pm_sseu@full-enable.html

  * igt@kms_psr2_sf@cursor-plane-move-continuous-sf:
    - shard-iclb:         [SKIP][82] ([i915#2920]) -> [SKIP][83] ([i915#658])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb2/igt@kms_psr2_sf@cursor-plane-move-continuous-sf.html
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb6/igt@kms_psr2_sf@cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-fully-sf:
    - shard-iclb:         [SKIP][84] ([i915#658]) -> [SKIP][85] ([i915#2920]) +1 similar issue
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-fully-sf.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area:
    - shard-iclb:         [SKIP][86] ([fdo#111068] / [i915#658]) -> [SKIP][87] ([i915#2920]) +1 similar issue
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-iclb3/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area.html

  * igt@runner@aborted:
    - shard-apl:          ([FAIL][88], [FAIL][89], [FAIL][90]) ([i915#3002] / [i915#4312]) -> ([FAIL][91], [FAIL][92], [FAIL][93]) ([fdo#109271] / [i915#3002] / [i915#4312])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl3/igt@runner@aborted.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl7/igt@runner@aborted.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12329/shard-apl8/igt@runner@aborted.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl3/igt@runner@aborted.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl6/igt@runner@aborted.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/shard-apl3/igt@runner@aborted.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109309]: https://bugs.freedesktop.org/show_bug.cgi?id=109309
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1804]: https://gitlab.freedesktop.org/drm/intel/issues/1804
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2684]: https://gitlab.freedesktop.org/drm/intel/issues/2684
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#2920]: https://gitlab.freedesktop.org/drm/intel/issues/2920
  [i915#2994]: https://gitlab.freedesktop.org/drm/intel/issues/2994
  [i915#3002]: https://gitlab.freedesktop.org/drm/intel/issues/3002
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3524]: https://gitlab.freedesktop.org/drm/intel/issues/3524
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3989]: https://gitlab.freedesktop.org/drm/intel/issues/3989
  [i915#426]: https://gitlab.freedesktop.org/drm/intel/issues/426
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4281]: https://gitlab.freedesktop.org/drm/intel/issues/4281
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5566]: https://gitlab.freedesktop.org/drm/intel/issues/5566
  [i915#6117]: https://gitlab.freedesktop.org/drm/intel/issues/6117
  [i915#6245]: https://gitlab.freedesktop.org/drm/intel/issues/6245
  [i915#6412]: https://gitlab.freedesktop.org/drm/intel/issues/6412
  [i915#6497]: https://gitlab.freedesktop.org/drm/intel/issues/6497
  [i915#6537]: https://gitlab.freedesktop.org/drm/intel/issues/6537
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6614]: https://gitlab.freedesktop.org/drm/intel/issues/6614
  [i915#7037]: https://gitlab.freedesktop.org/drm/intel/issues/7037
  [i915#7084]: https://gitlab.freedesktop.org/drm/intel/issues/7084
  [i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
  [i915#7318]: https://gitlab.freedesktop.org/drm/intel/issues/7318
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79


Build changes
-------------

  * Linux: CI_DRM_12329 -> Patchwork_110395v1

  CI-20190529: 20190529
  CI_DRM_12329: aeb0d740b4011006d27dc0ac4d5c2ae7c6da4066 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7037: 6a25c53624502fc85cec3cf0a0bf244a2346e30f @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_110395v1: aeb0d740b4011006d27dc0ac4d5c2ae7c6da4066 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_110395v1/index.html


* Re: [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-01 23:50 ` [Intel-gfx] " John.C.Harrison
@ 2022-11-02 12:12   ` Jani Nikula
  -1 siblings, 0 replies; 31+ messages in thread
From: Jani Nikula @ 2022-11-02 12:12 UTC (permalink / raw)
  To: John.C.Harrison, Intel-GFX; +Cc: DRI-Devel, John Harrison

On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
>
> At the end of each test, IGT does a drop caches call via sysfs with

sysfs?

> special flags set. One of the possible paths waits for idle with an
> infinite timeout. That causes problems for debugging issues when CI
> catches a "can't go idle" test failure. Best case, the CI system times
> out (after 90s), attempts a bunch of state dump actions and then
> reboots the system to recover it. Worst case, the CI system can't do
> anything at all and then times out (after 1000s) and simply reboots.
> Sometimes a serial port log of dmesg might be available, sometimes not.
>
> So rather than making life hard for ourselves, change the timeout to
> be 10s rather than infinite. Also, trigger the standard
> wedge/reset/recover sequence so that testing can continue with a
> working system (if possible).
>
> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index ae987e92251dd..9d916fbbfc27c 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>  		  DROP_RESET_ACTIVE | \
>  		  DROP_RESET_SEQNO | \
>  		  DROP_RCU)
> +
> +#define DROP_IDLE_TIMEOUT	(HZ * 10)

I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
here.

I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
gt/intel_gt.c.

I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.

I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
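
Side by side, the scatter looks roughly like this (a sketch; the values
shown are illustrative, only the names and locations are as listed above):

/* Sketch only -- values illustrative: */
#define I915_IDLE_ENGINES_TIMEOUT    (200)     /* i915_drv.h, in ms */
#define I915_GEM_IDLE_TIMEOUT        (HZ / 5)  /* i915_gem.h, jiffies */
#define I915_GT_SUSPEND_IDLE_TIMEOUT (HZ / 4)  /* intel_gt_pm.c, jiffies */
#define DROP_IDLE_TIMEOUT            (HZ * 10) /* proposed, jiffies */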

My head spins.


BR,
Jani.


> +
>  static int
>  i915_drop_caches_get(void *data, u64 *val)
>  {
> @@ -661,7 +664,9 @@ gt_drop_caches(struct intel_gt *gt, u64 val)
>  		intel_gt_retire_requests(gt);
>  
>  	if (val & (DROP_IDLE | DROP_ACTIVE)) {
> -		ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
> +		ret = intel_gt_wait_for_idle(gt, DROP_IDLE_TIMEOUT);
> +		if (ret == -ETIME)
> +			intel_gt_set_wedged(gt);
>  		if (ret)
>  			return ret;
>  	}

-- 
Jani Nikula, Intel Open Source Graphics Center


* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-02 12:12   ` [Intel-gfx] " Jani Nikula
@ 2022-11-02 14:20   ` Tvrtko Ursulin
  2022-11-03  1:33     ` John Harrison
  -1 siblings, 1 reply; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-02 14:20 UTC (permalink / raw)
  To: Jani Nikula, John.C.Harrison, Intel-GFX; +Cc: DRI-Devel


On 02/11/2022 12:12, Jani Nikula wrote:
> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>> From: John Harrison <John.C.Harrison@Intel.com>
>>
>> At the end of each test, IGT does a drop caches call via sysfs with
> 
> sysfs?
> 
>> special flags set. One of the possible paths waits for idle with an
>> infinite timeout. That causes problems for debugging issues when CI
>> catches a "can't go idle" test failure. Best case, the CI system times
>> out (after 90s), attempts a bunch of state dump actions and then
>> reboots the system to recover it. Worst case, the CI system can't do
>> anything at all and then times out (after 1000s) and simply reboots.
>> Sometimes a serial port log of dmesg might be available, sometimes not.
>>
>> So rather than making life hard for ourselves, change the timeout to
>> be 10s rather than infinite. Also, trigger the standard
>> wedge/reset/recover sequence so that testing can continue with a
>> working system (if possible).
>>
>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>> ---
>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
>> index ae987e92251dd..9d916fbbfc27c 100644
>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>   		  DROP_RESET_ACTIVE | \
>>   		  DROP_RESET_SEQNO | \
>>   		  DROP_RCU)
>> +
>> +#define DROP_IDLE_TIMEOUT	(HZ * 10)
> 
> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
> here.

So move here, dropping i915 prefix, next to the newly proposed one?

> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
> gt/intel_gt.c.

Move there and rename to GT_IDLE_TIMEOUT?

> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.

No action needed, maybe drop i915 prefix if wanted.

> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.

Add _MS suffix if wanted.

> My head spins.

I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies to 
DROP_ACTIVE and not only DROP_IDLE.

Things get refactored, code moves around, bits get left behind, who 
knows. No reason to get too worked up. :) As long as people are taking a 
wider view when touching the code base, and are not afraid to send 
cleanups, things should be good.

For the actual functional change at hand - it would be nice if code 
paths in question could handle SIGINT and then we could punt the 
decision on how long someone wants to wait purely to userspace. But it's 
probably hard and it's only debugfs so whatever.
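
As a sketch of what punting that policy to userspace could look like
(illustrative only, not existing IGT code -- the helper name is made up):
install a SIGALRM handler without SA_RESTART, so a blocking debugfs write
is interrupted with EINTR after a timeout of the runner's choosing.

#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; /* just interrupt the syscall */ }

int drop_caches_with_timeout(const char *path, const char *flags,
                             unsigned int seconds)
{
        struct sigaction sa = { .sa_handler = on_alarm }; /* no SA_RESTART */
        int fd, ret;

        sigaction(SIGALRM, &sa, NULL);
        fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;
        alarm(seconds);                 /* SIGALRM fires after the timeout */
        ret = write(fd, flags, strlen(flags)); /* -1/EINTR if we timed out */
        alarm(0);
        close(fd);
        return ret;
}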

Whether or not 10s is enough, CI will hopefully tell us. I'd probably err 
on the side of safety and make it longer, but at most half of the test 
runner timeout.

I am not convinced that wedging is correct though. Conceptually it could 
be just that the timeout is too short. What does wedging really give us, on 
top of limiting the wait, when the latter AFAIU is the key factor which 
would prevent the need to reboot the machine?

Regards,

Tvrtko


* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-02 14:20   ` Tvrtko Ursulin
@ 2022-11-03  1:33     ` John Harrison
  2022-11-03  9:18       ` Tvrtko Ursulin
  2022-11-03 10:45       ` Jani Nikula
  0 siblings, 2 replies; 31+ messages in thread
From: John Harrison @ 2022-11-03  1:33 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: DRI-Devel

On 11/2/2022 07:20, Tvrtko Ursulin wrote:
> On 02/11/2022 12:12, Jani Nikula wrote:
>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>
>>> At the end of each test, IGT does a drop caches call via sysfs with
>>
>> sysfs?
Sorry, that was meant to say debugfs. I've also been working on some 
sysfs IGT issues and evidently got my wires crossed!

>>
>>> special flags set. One of the possible paths waits for idle with an
>>> infinite timeout. That causes problems for debugging issues when CI
>>> catches a "can't go idle" test failure. Best case, the CI system times
>>> out (after 90s), attempts a bunch of state dump actions and then
>>> reboots the system to recover it. Worst case, the CI system can't do
>>> anything at all and then times out (after 1000s) and simply reboots.
>>> Sometimes a serial port log of dmesg might be available, sometimes not.
>>>
>>> So rather than making life hard for ourselves, change the timeout to
>>> be 10s rather than infinite. Also, trigger the standard
>>> wedge/reset/recover sequence so that testing can continue with a
>>> working system (if possible).
>>>
>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>> index ae987e92251dd..9d916fbbfc27c 100644
>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>             DROP_RESET_ACTIVE | \
>>>             DROP_RESET_SEQNO | \
>>>             DROP_RCU)
>>> +
>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>
>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
>> here.
>
> So move here, dropping i915 prefix, next to the newly proposed one?
Sure, can do that.

>
>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>> gt/intel_gt.c.
>
> Move there and rename to GT_IDLE_TIMEOUT?
>
>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.
>
> No action needed, maybe drop i915 prefix if wanted.
>
These two are totally unrelated and in code not being touched by this 
change. I would rather not conflate changing random other things with 
fixing this specific issue.

>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>
> Add _MS suffix if wanted.
>
>> My head spins.
>
> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies 
> to DROP_ACTIVE and not only DROP_IDLE.
My original intention for the name was that it is the 'drop caches timeout 
for intel_gt_wait_for_idle'. Which is quite the mouthful and hence 
abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later that name 
can be conflated with the DROP_IDLE flag. Will rename.
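
(Something like the below, say -- the new name is purely illustrative:)

#define DROP_CACHES_WAIT_TIMEOUT        (HZ * 10)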


>
> Things get refactored, code moves around, bits get left behind, who 
> knows. No reason to get too worked up. :) As long as people are taking 
> a wider view when touching the code base, and are not afraid to send 
> cleanups, things should be good.
On the other hand, if every patch gets blocked in code review because 
someone points out some completely unrelated piece of code could be a 
bit better, then nothing ever gets fixed. If you spot something that you 
think should be improved, isn't the general idea that you should post a 
patch yourself to improve it?


>
> For the actual functional change at hand - it would be nice if code 
> paths in question could handle SIGINT and then we could punt the 
> decision on how long someone wants to wait purely to userspace. But 
> it's probably hard and it's only debugfs so whatever.
>
The code paths in question will already abort on a signal, won't they? 
Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(), 
which is where the uc_wait_for_idle eventually ends up, have an 
'if(signal_pending) return -EINTR;' check. Beyond that, it sounds like 
what you are asking for is a change in the IGT libraries and/or CI 
framework to start sending signals after some specific timeout. That 
seems like a significantly more complex change (in terms of the number 
of entities affected and number of groups involved) and unnecessary.
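
The shape being referred to is roughly the following (a paraphrase, not a 
verbatim quote of either function; gt_is_busy() is a hypothetical stand-in 
for the real idleness checks, not a driver API):

static int wait_for_idle_interruptible(struct intel_gt *gt, long timeout)
{
        while (gt_is_busy(gt)) {        /* hypothetical stand-in check */
                if (signal_pending(current))
                        return -EINTR;  /* SIGINT et al abort the wait */
                if (timeout <= 0)
                        return -ETIME;  /* bounded by the caller's timeout */
                timeout = schedule_timeout_interruptible(timeout);
        }
        return 0;
}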

> Whether or not 10s is enough, CI will hopefully tell us. I'd probably 
> err on the side of safety and make it longer, but at most half of 
> the test runner timeout.
This is supposed to be test clean up. This is not about how long a 
particular test takes to complete but about how long it takes to declare 
the system broken after the test has already finished. I would argue 
that even 10s is massively longer than required.

>
> I am not convinced that wedging is correct though. Conceptually it could 
> be just that the timeout is too short. What does wedging really give 
> us, on top of limiting the wait, when the latter AFAIU is the key factor 
> which would prevent the need to reboot the machine?
>
It gives us a system that knows what state it is in. If we can't idle 
the GT then something is very badly wrong. Wedging indicates that. It 
also ensure that a full GT reset will be attempted before the next test 
is run. Helping to prevent a failure on test X from propagating into 
failures of unrelated tests X+1, X+2, ... And if the GT reset does not 
work because the system is really that badly broken then future tests 
will not run rather than report erroneous failures.

This is not about getting a more stable system for end users by sweeping 
issues under the carpet and pretending all is well. End users don't run 
IGTs or explicitly call dodgy debugfs entry points. The sole motivation 
here is to get more accurate results from CI. That is, correctly 
identifying which test has hit a problem, getting valid debug analysis 
for that test (logs and such) and allowing further testing to complete 
correctly in the case where the system can be recovered.

John.

> Regards,
>
> Tvrtko



* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03  1:33     ` John Harrison
@ 2022-11-03  9:18       ` Tvrtko Ursulin
  2022-11-03  9:38         ` Tvrtko Ursulin
  2022-11-03 19:37         ` John Harrison
  2022-11-03 10:45       ` Jani Nikula
  1 sibling, 2 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-03  9:18 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: DRI-Devel


On 03/11/2022 01:33, John Harrison wrote:
> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>> On 02/11/2022 12:12, Jani Nikula wrote:
>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>
>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>
>>> sysfs?
> Sorry, that was meant to say debugfs. I've also been working on some 
> sysfs IGT issues and evidently got my wires crossed!
> 
>>>
>>>> special flags set. One of the possible paths waits for idle with an
>>>> infinite timeout. That causes problems for debugging issues when CI
>>>> catches a "can't go idle" test failure. Best case, the CI system times
>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>> Sometimes a serial port log of dmesg might be available, sometimes not.
>>>>
>>>> So rather than making life hard for ourselves, change the timeout to
>>>> be 10s rather than infinite. Also, trigger the standard
>>>> wedge/reset/recover sequence so that testing can continue with a
>>>> working system (if possible).
>>>>
>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>> ---
>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>             DROP_RESET_ACTIVE | \
>>>>             DROP_RESET_SEQNO | \
>>>>             DROP_RCU)
>>>> +
>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>
>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
>>> here.
>>
>> So move here, dropping i915 prefix, next to the newly proposed one?
> Sure, can do that.
> 
>>
>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>> gt/intel_gt.c.
>>
>> Move there and rename to GT_IDLE_TIMEOUT?
>>
>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.
>>
>> No action needed, maybe drop i915 prefix if wanted.
>>
> These two are totally unrelated and in code not being touched by this 
> change. I would rather not conflate changing random other things with 
> fixing this specific issue.
> 
>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>
>> Add _MS suffix if wanted.
>>
>>> My head spins.
>>
>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies 
>> to DROP_ACTIVE and not only DROP_IDLE.
> My original intention for the name was that it is the 'drop caches timeout 
> for intel_gt_wait_for_idle'. Which is quite the mouthful and hence 
> abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later that name 
> can be conflated with the DROP_IDLE flag. Will rename.
> 
> 
>>
>> Things get refactored, code moves around, bits get left behind, who 
>> knows. No reason to get too worked up. :) As long as people are taking 
>> a wider view when touching the code base, and are not afraid to send 
>> cleanups, things should be good.
> On the other hand, if every patch gets blocked in code review because 
> someone points out some completely unrelated piece of code could be a 
> bit better, then nothing ever gets fixed. If you spot something that you 
> think should be improved, isn't the general idea that you should post a 
> patch yourself to improve it?

There are two maintainers per branch and an order of magnitude or two more 
developers, so it'd be nice if cleanups would just be incoming on a 
self-initiative basis. ;)

>> For the actual functional change at hand - it would be nice if code 
>> paths in question could handle SIGINT and then we could punt the 
>> decision on how long someone wants to wait purely to userspace. But 
>> it's probably hard and it's only debugfs so whatever.
>>
> The code paths in question will already abort on a signal, won't they? 
> Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(), 
> which is where the uc_wait_for_idle eventually ends up, have an 
> 'if(signal_pending) return -EINTR;' check. Beyond that, it sounds like 
> what you are asking for is a change in the IGT libraries and/or CI 
> framework to start sending signals after some specific timeout. That 
> seems like a significantly more complex change (in terms of the number 
> of entities affected and number of groups involved) and unnecessary.

If you say so, I haven't looked at them all. But if the code path in 
question already aborts on signals, then I am not sure what the patch is 
fixing. I assumed you are trying to avoid the write getting stuck in D 
state forever, which then prevents driver unload and everything, requiring 
the test runner to eventually reboot. If you say SIGINT works, then you 
can already recover from userspace, no?

>> Whether or not 10s is enough, CI will hopefully tell us. I'd probably 
>> err on the side of safety and make it longer, but at most half of 
>> the test runner timeout.
> This is supposed to be test clean up. This is not about how long a 
> particular test takes to complete but about how long it takes to declare 
> the system broken after the test has already finished. I would argue 
> that even 10s is massively longer than required.
> 
>>
>> I am not convinced that wedging is correct though. Conceptually it could 
>> be just that the timeout is too short. What does wedging really give 
>> us, on top of limiting the wait, when the latter AFAIU is the key factor 
>> which would prevent the need to reboot the machine?
>>
> It gives us a system that knows what state it is in. If we can't idle 
> the GT then something is very badly wrong. Wedging indicates that. It 
> also ensure that a full GT reset will be attempted before the next test 
> is run. Helping to prevent a failure on test X from propagating into 
> failures of unrelated tests X+1, X+2, ... And if the GT reset does not 
> work because the system is really that badly broken then future tests 
> will not run rather than report erroneous failures.
> 
> This is not about getting a more stable system for end users by sweeping 
> issues under the carpet and pretending all is well. End users don't run 
> IGTs or explicitly call dodgy debugfs entry points. The sole motivation 
> here is to get more accurate results from CI. That is, correctly 
> identifying which test has hit a problem, getting valid debug analysis 
> for that test (logs and such) and allowing further testing to complete 
> correctly in the case where the system can be recovered.

I don't really oppose shortening the timeout in principle, I just want 
a clear statement on whether this is something IGT / the test runner could 
already do or not. It can apply a timeout, it can also send SIGINT, and it 
could even trigger a reset from outside. Sure, these are debugfs hacks, so 
the general "kernel should not implement policy" rule need not be strictly 
followed, but let's be clear about what the options are.

Regards,

Tvrtko


* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03  9:18       ` Tvrtko Ursulin
@ 2022-11-03  9:38         ` Tvrtko Ursulin
  2022-11-03 19:16           ` John Harrison
  2022-11-03 19:37         ` John Harrison
  1 sibling, 1 reply; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-03  9:38 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: DRI-Devel


On 03/11/2022 09:18, Tvrtko Ursulin wrote:
> 
> On 03/11/2022 01:33, John Harrison wrote:
>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>
>>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>>
>>>> sysfs?
>> Sorry, that was meant to say debugfs. I've also been working on some 
>> sysfs IGT issues and evidently got my wires crossed!
>>
>>>>
>>>>> special flags set. One of the possible paths waits for idle with an
>>>>> infinite timeout. That causes problems for debugging issues when CI
>>>>> catches a "can't go idle" test failure. Best case, the CI system times
>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>>> Sometimes a serial port log of dmesg might be available, sometimes 
>>>>> not.
>>>>>
>>>>> So rather than making life hard for ourselves, change the timeout to
>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>> working system (if possible).
>>>>>
>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>> ---
>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>             DROP_RESET_ACTIVE | \
>>>>>             DROP_RESET_SEQNO | \
>>>>>             DROP_RCU)
>>>>> +
>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>
>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
>>>> here.
>>>
>>> So move here, dropping i915 prefix, next to the newly proposed one?
>> Sure, can do that.
>>
>>>
>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>> gt/intel_gt.c.
>>>
>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>
>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.
>>>
>>> No action needed, maybe drop i915 prefix if wanted.
>>>
>> These two are totally unrelated and in code not being touched by this 
>> change. I would rather not conflate changing random other things with 
>> fixing this specific issue.
>>
>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>
>>> Add _MS suffix if wanted.
>>>
>>>> My head spins.
>>>
>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies 
>>> to DROP_ACTIVE and not only DROP_IDLE.
>> My original intention for the name was that it is the 'drop caches 
>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful and 
>> hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later that 
>> name can be conflated with the DROP_IDLE flag. Will rename.
>>
>>
>>>
>>> Things get refactored, code moves around, bits get left behind, who 
>>> knows. No reason to get too worked up. :) As long as people are 
>>> taking a wider view when touching the code base, and are not afraid 
>>> to send cleanups, things should be good.
>> On the other hand, if every patch gets blocked in code review because 
>> someone points out some completely unrelated piece of code could be a 
>> bit better then nothing ever gets fixed. If you spot something that 
>> you think should be improved, isn't the general idea that you should 
>> post a patch yourself to improve it?
> 
> There's two maintainers per branch and an order of magnitude or two more 
> developers so it'd be nice if cleanups would just be incoming on 
> self-initiative basis. ;)
> 
>>> For the actual functional change at hand - it would be nice if code 
>>> paths in question could handle SIGINT and then we could punt the 
>>> decision on how long someone wants to wait purely to userspace. But 
>>> it's probably hard and it's only debugfs so whatever.
>>>
>> The code paths in question will already abort on a signal won't they? 
>> Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(), 
>> which is where the uc_wait_for_idle eventually ends up, have an 
>> 'if(signal_pending) return -EINTR;' check. Beyond that, it sounds like 
>> what you are asking for is a change in the IGT libraries and/or CI 
>> framework to start sending signals after some specific timeout. That 
>> seems like a significantly more complex change (in terms of the number 
>> of entities affected and number of groups involved) and unnecessary.
> 
> If you say so, I haven't looked at them all. But if the code path in 
> question already aborts on signals then I am not sure what the patch is 
> fixing. I assumed you are trying to avoid the write stuck in D forever, 
> which then prevents driver unload and everything, requiring the test 
> runner to eventually reboot. If you say SIGINT works then you can 
> already recover from userspace, no?
> 
>>> Whether or not 10s is enough CI will hopefully tell us. I'd probably 
>>> err on the side of safety and make it longer, but at most half from 
>>> the test runner timeout.
>> This is supposed to be test clean up. This is not about how long a 
>> particular test takes to complete but about how long it takes to 
>> declare the system broken after the test has already finished. I would 
>> argue that even 10s is massively longer than required.
>>
>>>
>>> I am not convinced that wedging is correct though. Conceptually could 
>>> be just that the timeout is too short. What does wedging really give 
>>> us, on top of limiting the wait, when latter AFAIU is the key factor 
>>> which would prevent the need to reboot the machine?
>>>
>> It gives us a system that knows what state it is in. If we can't idle 
>> the GT then something is very badly wrong. Wedging indicates that. It 
>> also ensures that a full GT reset will be attempted before the next 
>> test is run. Helping to prevent a failure on test X from propagating 
>> into failures of unrelated tests X+1, X+2, ... And if the GT reset 
>> does not work because the system is really that badly broken then 
>> future tests will not run rather than report erroneous failures.
>>
>> This is not about getting a more stable system for end users by 
>> sweeping issues under the carpet and pretending all is well. End users 
>> don't run IGTs or explicitly call dodgy debugfs entry points. The sole 
>> motivation here is to get more accurate results from CI. That is, 
>> correctly identifying which test has hit a problem, getting valid 
>> debug analysis for that test (logs and such) and allowing further 
>> testing to complete correctly in the case where the system can be 
>> recovered.
> 
> I don't really oppose shortening of the timeout in principle, just want 
> a clear statement if this is something IGT / test runner could already 
> do or not. It can apply a timeout, it can also send SIGINT, and it could 
> even trigger a reset from outside. Sure it is debugfs hacks so general 
> "kernel should not implement policy" need not be strictly followed, but 
> lets have it clear what are the options.

One conceptual problem with applying this policy is that the code is:

	if (val & (DROP_IDLE | DROP_ACTIVE)) {
		ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
		if (ret)
			return ret;
	}

	if (val & DROP_IDLE) {
		ret = intel_gt_pm_wait_for_idle(gt);
		if (ret)
			return ret;
	}

So if someone passes in DROP_IDLE, why would only the first branch have 
a short timeout and wedge? Yeah, some bug happens to be there at the 
moment, but put a bug in a different place and you hang on the second 
branch and then need another patch. Versus perhaps making it all respect 
SIGINT and handling it from outside.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03  1:33     ` John Harrison
  2022-11-03  9:18       ` Tvrtko Ursulin
@ 2022-11-03 10:45       ` Jani Nikula
  2022-11-03 19:39         ` John Harrison
  1 sibling, 1 reply; 31+ messages in thread
From: Jani Nikula @ 2022-11-03 10:45 UTC (permalink / raw)
  To: John Harrison, Tvrtko Ursulin, Intel-GFX; +Cc: DRI-Devel

On Wed, 02 Nov 2022, John Harrison <john.c.harrison@intel.com> wrote:
> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>> On 02/11/2022 12:12, Jani Nikula wrote:
>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>
>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>
>>> sysfs?
> Sorry, that was meant to say debugfs. I've also been working on some 
> sysfs IGT issues and evidently got my wires crossed!
>
>>>
>>>> special flags set. One of the possible paths waits for idle with an
>>>> infinite timeout. That causes problems for debugging issues when CI
>>>> catches a "can't go idle" test failure. Best case, the CI system times
>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>> Sometimes a serial port log of dmesg might be available, sometimes not.
>>>>
>>>> So rather than making life hard for ourselves, change the timeout to
>>>> be 10s rather than infinite. Also, trigger the standard
>>>> wedge/reset/recover sequence so that testing can continue with a
>>>> working system (if possible).
>>>>
>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>> ---
>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>             DROP_RESET_ACTIVE | \
>>>>             DROP_RESET_SEQNO | \
>>>>             DROP_RCU)
>>>> +
>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>
>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
>>> here.
>>
>> So move here, dropping i915 prefix, next to the newly proposed one?
> Sure, can do that.
>
>>
>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>> gt/intel_gt.c.
>>
>> Move there and rename to GT_IDLE_TIMEOUT?
>>
>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.
>>
>> No action needed, maybe drop i915 prefix if wanted.
>>
> These two are totally unrelated and in code not being touched by this 
> change. I would rather not conflate changing random other things with 
> fixing this specific issue.
>
>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>
>> Add _MS suffix if wanted.
>>
>>> My head spins.
>>
>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies 
>> to DROP_ACTIVE and not only DROP_IDLE.
> My original intention for the name was that it is the 'drop caches timeout 
> for intel_gt_wait_for_idle'. Which is quite the mouthful and hence 
> abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later that name 
> can be conflated with the DROP_IDLE flag. Will rename.
>
>
>>
>> Things get refactored, code moves around, bits get left behind, who 
>> knows. No reason to get too worked up. :) As long as people are taking 
>> a wider view when touching the code base, and are not afraid to send 
>> cleanups, things should be good.
> On the other hand, if every patch gets blocked in code review because 
> someone points out some completely unrelated piece of code could be a 
> bit better then nothing ever gets fixed. If you spot something that you 
> think should be improved, isn't the general idea that you should post a 
> patch yourself to improve it?

The general idea is that every change should improve the driver. If you
need to modify something that's a mess, you fix the mess instead of
adding to the mess. You can't put the onus of cleaning up after you on 
someone else.

Sure, the patch at hand is negligible, but hey, so are the fixes.

BR,
Jani.


>
>
>>
>> For the actual functional change at hand - it would be nice if code 
>> paths in question could handle SIGINT and then we could punt the 
>> decision on how long someone wants to wait purely to userspace. But 
>> it's probably hard and it's only debugfs so whatever.
>>
> The code paths in question will already abort on a signal won't they? 
> Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(), 
> which is where the uc_wait_for_idle eventually ends up, have an 
> 'if(signal_pending) return -EINTR;' check. Beyond that, it sounds like 
> what you are asking for is a change in the IGT libraries and/or CI 
> framework to start sending signals after some specific timeout. That 
> seems like a significantly more complex change (in terms of the number 
> of entities affected and number of groups involved) and unnecessary.
>
>> Whether or not 10s is enough CI will hopefully tell us. I'd probably 
>> err on the side of safety and make it longer, but at most half from 
>> the test runner timeout.
> This is supposed to be test clean up. This is not about how long a 
> particular test takes to complete but about how long it takes to declare 
> the system broken after the test has already finished. I would argue 
> that even 10s is massively longer than required.
>
>>
>> I am not convinced that wedging is correct though. Conceptually could 
>> be just that the timeout is too short. What does wedging really give 
>> us, on top of limiting the wait, when latter AFAIU is the key factor 
>> which would prevent the need to reboot the machine?
>>
> It gives us a system that knows what state it is in. If we can't idle 
> the GT then something is very badly wrong. Wedging indicates that. It 
> also ensures that a full GT reset will be attempted before the next test 
> is run. Helping to prevent a failure on test X from propagating into 
> failures of unrelated tests X+1, X+2, ... And if the GT reset does not 
> work because the system is really that badly broken then future tests 
> will not run rather than report erroneous failures.
>
> This is not about getting a more stable system for end users by sweeping 
> issues under the carpet and pretending all is well. End users don't run 
> IGTs or explicitly call dodgy debugfs entry points. The sole motivation 
> here is to get more accurate results from CI. That is, correctly 
> identifying which test has hit a problem, getting valid debug analysis 
> for that test (logs and such) and allowing further testing to complete 
> correctly in the case where the system can be recovered.
>
> John.
>
>> Regards,
>>
>> Tvrtko
>

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03  9:38         ` Tvrtko Ursulin
@ 2022-11-03 19:16           ` John Harrison
  2022-11-04 10:01             ` Tvrtko Ursulin
  0 siblings, 1 reply; 31+ messages in thread
From: John Harrison @ 2022-11-03 19:16 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: DRI-Devel

On 11/3/2022 02:38, Tvrtko Ursulin wrote:
> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>> On 03/11/2022 01:33, John Harrison wrote:
>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>
>>>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>>>
>>>>> sysfs?
>>> Sorry, that was meant to say debugfs. I've also been working on some 
>>> sysfs IGT issues and evidently got my wires crossed!
>>>
>>>>>
>>>>>> special flags set. One of the possible paths waits for idle with an
>>>>>> infinite timeout. That causes problems for debugging issues when CI
>>>>>> catches a "can't go idle" test failure. Best case, the CI system 
>>>>>> times
>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>> sometimes not.
>>>>>>
>>>>>> So rather than making life hard for ourselves, change the timeout to
>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>> working system (if possible).
>>>>>>
>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>> @@ -641,6 +641,9 @@ 
>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>             DROP_RESET_SEQNO | \
>>>>>>             DROP_RCU)
>>>>>> +
>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>
>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only 
>>>>> used
>>>>> here.
>>>>
>>>> So move here, dropping i915 prefix, next to the newly proposed one?
>>> Sure, can do that.
>>>
>>>>
>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>> gt/intel_gt.c.
>>>>
>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>
>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>> intel_gt_pm.c.
>>>>
>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>
>>> These two are totally unrelated and in code not being touched by 
>>> this change. I would rather not conflate changing random other 
>>> things with fixing this specific issue.
>>>
>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>
>>>> Add _MS suffix if wanted.
>>>>
>>>>> My head spins.
>>>>
>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>> My original intention for the name was that it is the 'drop caches 
>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful and 
>>> hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later 
>>> that name can be conflated with the DROP_IDLE flag. Will rename.
>>>
>>>
>>>>
>>>> Things get refactored, code moves around, bits get left behind, who 
>>>> knows. No reason to get too worked up. :) As long as people are 
>>>> taking a wider view when touching the code base, and are not afraid 
>>>> to send cleanups, things should be good.
>>> On the other hand, if every patch gets blocked in code review 
>>> because someone points out some completely unrelated piece of code 
>>> could be a bit better then nothing ever gets fixed. If you spot 
>>> something that you think should be improved, isn't the general idea 
>>> that you should post a patch yourself to improve it?
>>
>> There's two maintainers per branch and an order of magnitude or two 
>> more developers so it'd be nice if cleanups would just be incoming on 
>> self-initiative basis. ;)
>>
>>>> For the actual functional change at hand - it would be nice if code 
>>>> paths in question could handle SIGINT and then we could punt the 
>>>> decision on how long someone wants to wait purely to userspace. But 
>>>> it's probably hard and it's only debugfs so whatever.
>>>>
>>> The code paths in question will already abort on a signal won't 
>>> they? Both intel_gt_wait_for_idle() and 
>>> intel_guc_wait_for_pending_msg(), which is where the 
>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>> asking for is a change in the IGT libraries and/or CI framework to 
>>> start sending signals after some specific timeout. That seems like a 
>>> significantly more complex change (in terms of the number of 
>>> entities affected and number of groups involved) and unnecessary.
>>
>> If you say so, I haven't looked at them all. But if the code path in 
>> question already aborts on signals then I am not sure what the 
>> patch is fixing. I assumed you are trying to avoid the write stuck in D 
>> forever, which then prevents driver unload and everything, requiring 
>> the test runner to eventually reboot. If you say SIGINT works then 
>> you can already recover from userspace, no?
>>
>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>> probably err on the side of safety and make it longer, but at most 
>>>> half from the test runner timeout.
>>> This is supposed to be test clean up. This is not about how long a 
>>> particular test takes to complete but about how long it takes to 
>>> declare the system broken after the test has already finished. I 
>>> would argue that even 10s is massively longer than required.
>>>
>>>>
>>>> I am not convinced that wedging is correct though. Conceptually 
>>>> could be just that the timeout is too short. What does wedging 
>>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>>> the key factor which would prevent the need to reboot the machine?
>>>>
>>> It gives us a system that knows what state it is in. If we can't 
>>> idle the GT then something is very badly wrong. Wedging indicates 
>>> that. It also ensures that a full GT reset will be attempted before 
>>> the next test is run. Helping to prevent a failure on test X from 
>>> propagating into failures of unrelated tests X+1, X+2, ... And if 
>>> the GT reset does not work because the system is really that badly 
>>> broken then future tests will not run rather than report erroneous 
>>> failures.
>>>
>>> This is not about getting a more stable system for end users by 
>>> sweeping issues under the carpet and pretending all is well. End 
>>> users don't run IGTs or explicitly call dodgy debugfs entry points. 
>>> The sole motivation here is to get more accurate results from CI. 
>>> That is, correctly identifying which test has hit a problem, getting 
>>> valid debug analysis for that test (logs and such) and allowing 
>>> further testing to complete correctly in the case where the system 
>>> can be recovered.
>>
>> I don't really oppose shortening of the timeout in principle, just 
>> want a clear statement if this is something IGT / test runner could 
>> already do or not. It can apply a timeout, it can also send SIGINT, 
>> and it could even trigger a reset from outside. Sure it is debugfs 
>> hacks so general "kernel should not implement policy" need not be 
>> strictly followed, but let's be clear about what the options are.
>
> One conceptual problem with applying this policy is that the code is:
>
>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>         if (ret)
>             return ret;
>     }
>
>     if (val & DROP_IDLE) {
>         ret = intel_gt_pm_wait_for_idle(gt);
>         if (ret)
>             return ret;
>     }
>
> So if someone passes in DROP_IDLE and then why would only the first 
> branch have a short timeout and wedge. Yeah some bug happens to be 
> there at the moment, but put a bug in a different place and you hang 
> on the second branch and then need another patch. Versus perhaps 
> making it all respect SIGINT and handle from outside.
>
The pm_wait_for_idle can only be called after gt_wait_for_idle has 
completed successfully. There is no route to skip the GT idle or to do 
the PM idle even if the GT idle fails. So the chances of the PM idle 
failing are greatly reduced. There would have to be something outside of 
a GT keeping the GPU awake and there isn't a whole lot of hardware left 
at that point!

Regarding signals, the PM idle code ends up at 
wait_var_event_killable(). I assume that is interruptible via at least a 
KILL signal if not any signal. Although it's not entirely clear trying 
to follow through the implementation of this code. Also, I have no idea 
if there is a safe way to add a timeout to that code (or why it wasn't 
already written with a timeout included). Someone more familiar with the 
wakeref internals would need to comment.
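
For what it's worth, the bounded-and-interruptible shape I would expect 
such a wait to take is the standard one - purely illustrative, since 
'idle_wq' and 'pm_is_idle()' below are made-up names rather than the 
real wakeref internals:

	long ret = wait_event_interruptible_timeout(gt->idle_wq,
						    pm_is_idle(gt),
						    DROP_IDLE_TIMEOUT);
	if (ret == 0)		/* wait expired */
		return -ETIME;
	if (ret < 0)		/* signal arrived; -ERESTARTSYS */
		return ret;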

However, I strongly disagree that we should not fix the driver just 
because it is possible to work around the issue by re-writing the CI 
framework. Feel free to bring a redesign plan to the IGT WG and whatever 
equivalent CI meetings in parallel. But we absolutely should not have 
infinite waits in the kernel if there is a trivial way to not have 
infinite waits.

Also, sending a signal does not result in the wedge happening. I 
specifically did not want to change that code path because I was 
assuming there was a valid reason for it. If you have been interrupted 
then you are in the territory of maybe it would have succeeded if you 
just left it for a moment longer. Whereas, hitting the timeout says that 
someone very deliberately said this is too long to wait and therefore 
the system must be broken.

Plus, infinite wait is not a valid code path in the first place so any 
change in behaviour is not really a change in behaviour. Code can't be 
relying on a kernel call to never return for its correct operation!

And if you don't wedge then you don't recover. Each subsequent test 
would just hit the infinite timeout, get killed by the CI framework's 
shiny new kill signal and be marked as yet another unrelated bug that 
will be logged separately. Whereas, using a sensible timeout and then 
wedging will at least attempt to recover the situation. And if it can be 
recovered, future tests will pass. If it can't then future testing will 
be aborted.

John.


> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03  9:18       ` Tvrtko Ursulin
  2022-11-03  9:38         ` Tvrtko Ursulin
@ 2022-11-03 19:37         ` John Harrison
  1 sibling, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-03 19:37 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: DRI-Devel

On 11/3/2022 02:18, Tvrtko Ursulin wrote:
> On 03/11/2022 01:33, John Harrison wrote:
>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>
>>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>>
>>>> sysfs?
>> Sorry, that was meant to say debugfs. I've also been working on some 
>> sysfs IGT issues and evidently got my wires crossed!
>>
>>>>
>>>>> special flags set. One of the possible paths waits for idle with an
>>>>> infinite timeout. That causes problems for debugging issues when CI
>>>>> catches a "can't go idle" test failure. Best case, the CI system 
>>>>> times
>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>>> Sometimes a serial port log of dmesg might be available, sometimes 
>>>>> not.
>>>>>
>>>>> So rather than making life hard for ourselves, change the timeout to
>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>> working system (if possible).
>>>>>
>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>> ---
>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>             DROP_RESET_ACTIVE | \
>>>>>             DROP_RESET_SEQNO | \
>>>>>             DROP_RCU)
>>>>> +
>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>
>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only 
>>>> used
>>>> here.
>>>
>>> So move here, dropping i915 prefix, next to the newly proposed one?
>> Sure, can do that.
>>
>>>
>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>> gt/intel_gt.c.
>>>
>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>
>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>> intel_gt_pm.c.
>>>
>>> No action needed, maybe drop i915 prefix if wanted.
>>>
>> These two are totally unrelated and in code not being touched by this 
>> change. I would rather not conflate changing random other things with 
>> fixing this specific issue.
>>
>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>
>>> Add _MS suffix if wanted.
>>>
>>>> My head spins.
>>>
>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies 
>>> to DROP_ACTIVE and not only DROP_IDLE.
>> My original intention for the name was that it is the 'drop caches 
>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful and 
>> hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later 
>> that name can be conflated with the DROP_IDLE flag. Will rename.
>>
>>
>>>
>>> Things get refactored, code moves around, bits get left behind, who 
>>> knows. No reason to get too worked up. :) As long as people are 
>>> taking a wider view when touching the code base, and are not afraid 
>>> to send cleanups, things should be good.
>> On the other hand, if every patch gets blocked in code review because 
>> someone points out some completely unrelated piece of code could be a 
>> bit better then nothing ever gets fixed. If you spot something that 
>> you think should be improved, isn't the general idea that you should 
>> post a patch yourself to improve it?
>
> There's two maintainers per branch and an order of magnitude or two 
> more developers so it'd be nice if cleanups would just be incoming on 
> self-initiative basis. ;)
It's not just maintainers that look at the code and spot problems. Where 
do you think this patch set came from? It was not on my list of tasks to 
work on. No-one had logged this as a super urgent bug that needs to be 
fixed. I noticed a problem when trying to debug another issue and saw a 
way to improve i915's debuggability. So I tried to fix it on a 
'self-initiative basis'. And already that trivial fix has ballooned into 
I don't know how many hours of work that have not been spent on doing 
the things I'm actually supposed to be working on.

Likewise with a bunch of other patches I have recently posted. They are 
just things I happened to spot and spontaneously decided to fix.

And if you don't have time to fix something yourself, you can always 
just log it as a piece of work that needs to be done and add it to the 
backlog of tasks. It will then get assigned to whoever actually has the 
time to do it according to how important it really is.

John.


>
>>> For the actual functional change at hand - it would be nice if code 
>>> paths in question could handle SIGINT and then we could punt the 
>>> decision on how long someone wants to wait purely to userspace. But 
>>> it's probably hard and it's only debugfs so whatever.
>>>
>> The code paths in question will already abort on a signal won't they? 
>> Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(), 
>> which is where the uc_wait_for_idle eventually ends up, have an 
>> 'if(signal_pending) return -EINTR;' check. Beyond that, it sounds 
>> like what you are asking for is a change in the IGT libraries and/or 
>> CI framework to start sending signals after some specific timeout. 
>> That seems like a significantly more complex change (in terms of the 
>> number of entities affected and number of groups involved) and 
>> unnecessary.
>
> If you say so, I haven't looked at them all. But if the code path in 
> question already aborts on signals then I am not sure what the 
> patch is fixing. I assumed you are trying to avoid the write stuck in D 
> forever, which then prevents driver unload and everything, requiring 
> the test runner to eventually reboot. If you say SIGINT works then you 
> can already recover from userspace, no?
>
>>> Whether or not 10s is enough CI will hopefully tell us. I'd probably 
>>> err on the side of safety and make it longer, but at most half from 
>>> the test runner timeout.
>> This is supposed to be test clean up. This is not about how long a 
>> particular test takes to complete but about how long it takes to 
>> declare the system broken after the test has already finished. I 
>> would argue that even 10s is massively longer than required.
>>
>>>
>>> I am not convinced that wedging is correct though. Conceptually 
>>> could be just that the timeout is too short. What does wedging 
>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>> the key factor which would prevent the need to reboot the machine?
>>>
>> It gives us a system that knows what state it is in. If we can't idle 
>> the GT then something is very badly wrong. Wedging indicates that. It 
>> also ensures that a full GT reset will be attempted before the next 
>> test is run. Helping to prevent a failure on test X from propagating 
>> into failures of unrelated tests X+1, X+2, ... And if the GT reset 
>> does not work because the system is really that badly broken then 
>> future tests will not run rather than report erroneous failures.
>>
>> This is not about getting a more stable system for end users by 
>> sweeping issues under the carpet and pretending all is well. End 
>> users don't run IGTs or explicitly call dodgy debugfs entry points. 
>> The sole motivation here is to get more accurate results from CI. 
>> That is, correctly identifying which test has hit a problem, getting 
>> valid debug analysis for that test (logs and such) and allowing 
>> further testing to complete correctly in the case where the system 
>> can be recovered.
>
> I don't really oppose shortening of the timeout in principle, just 
> want a clear statement if this is something IGT / test runner could 
> already do or not. It can apply a timeout, it can also send SIGINT, 
> and it could even trigger a reset from outside. Sure it is debugfs 
> hacks so general "kernel should not implement policy" need not be 
> strictly followed, but let's be clear about what the options are.
>
> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03 10:45       ` Jani Nikula
@ 2022-11-03 19:39         ` John Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-03 19:39 UTC (permalink / raw)
  To: Jani Nikula, Tvrtko Ursulin, Intel-GFX; +Cc: DRI-Devel



On 11/3/2022 03:45, Jani Nikula wrote:
> On Wed, 02 Nov 2022, John Harrison <john.c.harrison@intel.com> wrote:
>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>
>>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>> sysfs?
>> Sorry, that was meant to say debugfs. I've also been working on some
>> sysfs IGT issues and evidently got my wires crossed!
>>
>>>>> special flags set. One of the possible paths waits for idle with an
>>>>> infinite timeout. That causes problems for debugging issues when CI
>>>>> catches a "can't go idle" test failure. Best case, the CI system times
>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>>> Sometimes a serial port log of dmesg might be available, sometimes not.
>>>>>
>>>>> So rather than making life hard for ourselves, change the timeout to
>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>> working system (if possible).
>>>>>
>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>> ---
>>>>>    drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>    1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>> @@ -641,6 +641,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>              DROP_RESET_ACTIVE | \
>>>>>              DROP_RESET_SEQNO | \
>>>>>              DROP_RCU)
>>>>> +
>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only used
>>>> here.
>>> So move here, dropping i915 prefix, next to the newly proposed one?
>> Sure, can do that.
>>
>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>> gt/intel_gt.c.
>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>
>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in intel_gt_pm.c.
>>> No action needed, maybe drop i915 prefix if wanted.
>>>
>> These two are totally unrelated and in code not being touched by this
>> change. I would rather not conflate changing random other things with
>> fixing this specific issue.
>>
>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>> Add _MS suffix if wanted.
>>>
>>>> My head spins.
>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT applies
>>> to DROP_ACTIVE and not only DROP_IDLE.
>> My original intention for the name was that it is the 'drop caches timeout
>> for intel_gt_wait_for_idle'. Which is quite the mouthful and hence
>> abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later that name
>> can be conflated with the DROP_IDLE flag. Will rename.
>>
>>
>>> Things get refactored, code moves around, bits get left behind, who
>>> knows. No reason to get too worked up. :) As long as people are taking
>>> a wider view when touching the code base, and are not afraid to send
>>> cleanups, things should be good.
>> On the other hand, if every patch gets blocked in code review because
>> someone points out some completely unrelated piece of code could be a
>> bit better then nothing ever gets fixed. If you spot something that you
>> think should be improved, isn't the general idea that you should post a
>> patch yourself to improve it?
> The general idea is that every change should improve the driver. If you
> need to modify something that's a mess, you fix the mess instead of
> adding to the mess. You can't put the onus of cleaning up after you on
> someone else.
Please point out in what way this patch is 'adding to the mess' or 
requiring someone else to do additional 'cleaning up after'. As stated 
above, I have fixed up the issues pointed out which are related to the 
drop caches code. I don't agree that shoe-horning completely unrelated 
changes into random patches is a good thing. That makes it hard to work 
out what the patch is actually trying to do, makes bisection more 
confusing, etc. Sure, maybe the unrelated change is trivial. But this 
change was supposed to be trivial too and already it has exploded into 
many hours of time spent not working on the things I am actually 
supposed to be working on.

> Sure, the patch at hand is negligible, but hey, so are the fixes.
So feel free to post a trivial patch to fix them. And if 'the patch at 
hand is negligible' then why is it generating so much discussion and 
argument over how the problem should be fixed, irrespective of adding 
in yet more unrelated changes?

John.

> BR,
> Jani.
>
>
>>
>>> For the actual functional change at hand - it would be nice if code
>>> paths in question could handle SIGINT and then we could punt the
>>> decision on how long someone wants to wait purely to userspace. But
>>> it's probably hard and it's only debugfs so whatever.
>>>
>> The code paths in question will already abort on a signal won't they?
>> Both intel_gt_wait_for_idle() and intel_guc_wait_for_pending_msg(),
>> which is where the uc_wait_for_idle eventually ends up, have an
>> 'if(signal_pending) return -EINTR;' check. Beyond that, it sounds like
>> what you are asking for is a change in the IGT libraries and/or CI
>> framework to start sending signals after some specific timeout. That
>> seems like a significantly more complex change (in terms of the number
>> of entities affected and number of groups involved) and unnecessary.
>>
>>> Whether or not 10s is enough CI will hopefully tell us. I'd probably
>>> err on the side of safety and make it longer, but at most half from
>>> the test runner timeout.
>> This is supposed to be test clean up. This is not about how long a
>> particular test takes to complete but about how long it takes to declare
>> the system broken after the test has already finished. I would argue
>> that even 10s is massively longer than required.
>>
>>> I am not convinced that wedging is correct though. Conceptually could
>>> be just that the timeout is too short. What does wedging really give
>>> us, on top of limiting the wait, when latter AFAIU is the key factor
>>> which would prevent the need to reboot the machine?
>>>
>> It gives us a system that knows what state it is in. If we can't idle
>> the GT then something is very badly wrong. Wedging indicates that. It
>> also ensures that a full GT reset will be attempted before the next test
>> is run. Helping to prevent a failure on test X from propagating into
>> failures of unrelated tests X+1, X+2, ... And if the GT reset does not
>> work because the system is really that badly broken then future tests
>> will not run rather than report erroneous failures.
>>
>> This is not about getting a more stable system for end users by sweeping
>> issues under the carpet and pretending all is well. End users don't run
>> IGTs or explicitly call dodgy debugfs entry points. The sole motivation
>> here is to get more accurate results from CI. That is, correctly
>> identifying which test has hit a problem, getting valid debug analysis
>> for that test (logs and such) and allowing further testing to complete
>> correctly in the case where the system can be recovered.
>>
>> John.
>>
>>> Regards,
>>>
>>> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-03 19:16           ` John Harrison
@ 2022-11-04 10:01             ` Tvrtko Ursulin
  2022-11-04 17:45                 ` John Harrison
  0 siblings, 1 reply; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-04 10:01 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: DRI-Devel


On 03/11/2022 19:16, John Harrison wrote:
> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>> On 03/11/2022 01:33, John Harrison wrote:
>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>
>>>>>>> At the end of each test, IGT does a drop caches call via sysfs with
>>>>>>
>>>>>> sysfs?
>>>> Sorry, that was meant to say debugfs. I've also been working on some 
>>>> sysfs IGT issues and evidently got my wires crossed!
>>>>
>>>>>>
>>>>>>> special flags set. One of the possible paths waits for idle with an
>>>>>>> infinite timeout. That causes problems for debugging issues when CI
>>>>>>> catches a "can't go idle" test failure. Best case, the CI system 
>>>>>>> times
>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>> reboots the system to recover it. Worst case, the CI system can't do
>>>>>>> anything at all and then times out (after 1000s) and simply reboots.
>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>> sometimes not.
>>>>>>>
>>>>>>> So rather than making life hard for ourselves, change the timeout to
>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>> working system (if possible).
>>>>>>>
>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>             DROP_RCU)
>>>>>>> +
>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>
>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also only 
>>>>>> used
>>>>>> here.
>>>>>
>>>>> So move here, dropping i915 prefix, next to the newly proposed one?
>>>> Sure, can do that.
>>>>
>>>>>
>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>> gt/intel_gt.c.
>>>>>
>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>
>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>> intel_gt_pm.c.
>>>>>
>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>
>>>> These two are totally unrelated and in code not being touched by 
>>>> this change. I would rather not conflate changing random other 
>>>> things with fixing this specific issue.
>>>>
>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>
>>>>> Add _MS suffix if wanted.
>>>>>
>>>>>> My head spins.
>>>>>
>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>> My original intention for the name was that it is the 'drop caches 
>>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful and 
>>>> hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised later 
>>>> that name can be conflated with the DROP_IDLE flag. Will rename.
>>>>
>>>>
>>>>>
>>>>> Things get refactored, code moves around, bits get left behind, who 
>>>>> knows. No reason to get too worked up. :) As long as people are 
>>>>> taking a wider view when touching the code base, and are not afraid 
>>>>> to send cleanups, things should be good.
>>>> On the other hand, if every patch gets blocked in code review 
>>>> because someone points out some completely unrelated piece of code 
>>>> could be a bit better then nothing ever gets fixed. If you spot 
>>>> something that you think should be improved, isn't the general idea 
>>>> that you should post a patch yourself to improve it?
>>>
>>> There's two maintainers per branch and an order of magnitude or two 
>>> more developers so it'd be nice if cleanups would just be incoming on 
>>> self-initiative basis. ;)
>>>
>>>>> For the actual functional change at hand - it would be nice if code 
>>>>> paths in question could handle SIGINT and then we could punt the 
>>>>> decision on how long someone wants to wait purely to userspace. But 
>>>>> it's probably hard and it's only debugfs so whatever.
>>>>>
>>>> The code paths in question will already abort on a signal won't 
>>>> they? Both intel_gt_wait_for_idle() and 
>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>> asking for is a change in the IGT libraries and/or CI framework to 
>>>> start sending signals after some specific timeout. That seems like a 
>>>> significantly more complex change (in terms of the number of 
>>>> entities affected and number of groups involved) and unnecessary.
>>>
>>> If you say so, I haven't looked at them all. But if the code path in 
>>> question already aborts on signals then I am not sure what the 
>>> patch is fixing. I assumed you are trying to avoid the write stuck in D 
>>> forever, which then prevents driver unload and everything, requiring 
>>> the test runner to eventually reboot. If you say SIGINT works then 
>>> you can already recover from userspace, no?
>>>
>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>> probably err on the side of safety and make it longer, but at most 
>>>>> half from the test runner timeout.
>>>> This is supposed to be test clean up. This is not about how long a 
>>>> particular test takes to complete but about how long it takes to 
>>>> declare the system broken after the test has already finished. I 
>>>> would argue that even 10s is massively longer than required.
>>>>
>>>>>
>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>> could be just that the timeout is too short. What does wedging 
>>>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>>>> the key factor which would prevent the need to reboot the machine?
>>>>>
>>>> It gives us a system that knows what state it is in. If we can't 
>>>> idle the GT then something is very badly wrong. Wedging indicates 
>>>> that. It also ensures that a full GT reset will be attempted before 
>>>> the next test is run. Helping to prevent a failure on test X from 
>>>> propagating into failures of unrelated tests X+1, X+2, ... And if 
>>>> the GT reset does not work because the system is really that badly 
>>>> broken then future tests will not run rather than report erroneous 
>>>> failures.
>>>>
>>>> This is not about getting a more stable system for end users by 
>>>> sweeping issues under the carpet and pretending all is well. End 
>>>> users don't run IGTs or explicitly call dodgy debugfs entry points. 
>>>> The sole motivation here is to get more accurate results from CI. 
>>>> That is, correctly identifying which test has hit a problem, getting 
>>>> valid debug analysis for that test (logs and such) and allowing 
>>>> further testing to complete correctly in the case where the system 
>>>> can be recovered.
>>>
>>> I don't really oppose shortening of the timeout in principle, just 
>>> want a clear statement if this is something IGT / test runner could 
>>> already do or not. It can apply a timeout, it can also send SIGINT, 
>>> and it could even trigger a reset from outside. Sure it is debugfs 
>>> hacks so general "kernel should not implement policy" need not be 
>>> strictly followed, but let's be clear about what the options are.
>>
>> One conceptual problem with applying this policy is that the code is:
>>
>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>         if (ret)
>>             return ret;
>>     }
>>
>>     if (val & DROP_IDLE) {
>>         ret = intel_gt_pm_wait_for_idle(gt);
>>         if (ret)
>>             return ret;
>>     }
>>
>> So if someone passes in DROP_IDLE, why would only the first 
>> branch have a short timeout and wedge? Yeah, some bug happens to be 
>> there at the moment, but put a bug in a different place and you hang 
>> on the second branch and then need another patch. Versus perhaps 
>> making it all respect SIGINT and handling it from outside.
>>
> The pm_wait_for_idle can only be called after gt_wait_for_idle has 
> completed successfully. There is no route to skip the GT idle or to do 
> the PM idle even if the GT idle fails. So the chances of the PM idle 
> failing are greatly reduced. There would have to be something outside of 
> a GT keeping the GPU awake and there isn't a whole lot of hardware left 
> at that point!

Well "greatly reduced" is beside my point. Point is today bug is here 
and we add a timeout, tomorrow bug is there and then the same dance. It 
can be just a sw bug which forgets to release the pm ref in some 
circumstances, doesn't really matter.
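
The class of bug I mean can be as simple as a put missed on an error 
path, which keeps the GT from ever going idle and which no timeout 
placement in drop_caches can anticipate - a sketch, not a real i915 path:

	intel_wakeref_t wakeref;
	int err;

	wakeref = intel_runtime_pm_get(gt->uncore->rpm);
	err = do_something(gt);		/* made-up callee */
	if (err)
		return err;	/* bug: wakeref leaked, GT never goes idle */
	intel_runtime_pm_put(gt->uncore->rpm, wakeref);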

> Regarding signals, the PM idle code ends up at 
> wait_var_event_killable(). I assume that is interruptible via at least a 
> KILL signal if not any signal. Although it's not entirely clear trying 
> to follow through the implementation of this code. Also, I have no idea 
> if there is a safe way to add a timeout to that code (or why it wasn't 
> already written with a timeout included). Someone more familiar with the 
> wakeref internals would need to comment.
> 
> However, I strongly disagree that we should not fix the driver just 
> because it is possible to workaround the issue by re-writing the CI 
> framework. Feel free to bring a redesign plan to the IGT WG and whatever 
> equivalent CI meetings in parallel. But we absolutely should not have 
> infinite waits in the kernel if there is a trivial way to not have 
> infinite waits.

I thought I was clear that I am not really opposed to the timeout.

About the rest of the paragraph I don't really care - the point is moot 
because it's debugfs, so we can do whatever, as long as it is not 
burdensome to i915, which this isn't. If either weren't the case then we 
certainly wouldn't be adding any workarounds in the kernel that could be 
achieved in IGT.

> Also, sending a signal does not result in the wedge happening. I 
> specifically did not want to change that code path because I was 
> assuming there was a valid reason for it. If you have been interrupted 
> then you are in the territory of maybe it would have succeeded if you 
> just left it for a moment longer. Whereas, hitting the timeout says that 
> someone very deliberately said this is too long to wait and therefore 
> the system must be broken.

I wanted to know specifically about wedging - why can't you wedge/reset 
from IGT if DROP_IDLE times out in quiescent or wherever, if that's what 
you say is the right thing? That's a policy decision, so why would i915 
wedge if an arbitrary timeout expired? i915 is not controlling how much 
work there is outstanding at the point IGT decides to call DROP_IDLE.

> Plus, infinite wait is not a valid code path in the first place so any 
> change in behaviour is not really a change in behaviour. Code can't be 
> relying on a kernel call to never return for its correct operation!

Why wouldn't an infinite wait be valid? Then you better change the other 
one as well. ;P

Regards,

Tvrtko

> And if you don't wedge then you don't recover. Each subsequent test 
> would just hit the infinite timeout, get killed by the CI framework's 
> shiny new kill signal and be marked as yet another unrelated bug that 
> will be logged separately. Whereas, using a sensible timeout and then 
> wedging will at least attempt to recover the situation. And if it can be 
> recovered, future tests will pass. If it can't then future testing will 
> be aborted.
> 
> John.
> 
> 
>> Regards,
>>
>> Tvrtko
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-04 10:01             ` Tvrtko Ursulin
@ 2022-11-04 17:45                 ` John Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-04 17:45 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel

On 11/4/2022 03:01, Tvrtko Ursulin wrote:
> On 03/11/2022 19:16, John Harrison wrote:
>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>
>>>>>>>> At the end of each test, IGT does a drop caches call via sysfs 
>>>>>>>> with
>>>>>>>
>>>>>>> sysfs?
>>>>> Sorry, that was meant to say debugfs. I've also been working on 
>>>>> some sysfs IGT issues and evidently got my wires crossed!
>>>>>
>>>>>>>
>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>> with an
>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>> when CI
>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>> system times
>>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>> can't do
>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>> reboots.
>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>> sometimes not.
>>>>>>>>
>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>> timeout to
>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>>> working system (if possible).
>>>>>>>>
>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>> ---
>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>             DROP_RCU)
>>>>>>>> +
>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>
>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also 
>>>>>>> only used
>>>>>>> here.
>>>>>>
>>>>>> So move here, dropping i915 prefix, next to the newly proposed one?
>>>>> Sure, can do that.
>>>>>
>>>>>>
>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>>> gt/intel_gt.c.
>>>>>>
>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>
>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>> intel_gt_pm.c.
>>>>>>
>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>
>>>>> These two are totally unrelated and in code not being touched by 
>>>>> this change. I would rather not conflate changing random other 
>>>>> things with fixing this specific issue.
>>>>>
>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>
>>>>>> Add _MS suffix if wanted.
>>>>>>
>>>>>>> My head spins.
>>>>>>
>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>> My original intention for the name was that it is the 'drop caches 
>>>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful 
>>>>> and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised 
>>>>> later that name can be conflated with the DROP_IDLE flag. Will 
>>>>> rename.
>>>>>
>>>>>
>>>>>>
>>>>>> Things get refactored, code moves around, bits get left behind, 
>>>>>> who knows. No reason to get too worked up. :) As long as people 
>>>>>> are taking a wider view when touching the code base, and are not 
>>>>>> afraid to send cleanups, things should be good.
>>>>> On the other hand, if every patch gets blocked in code review 
>>>>> because someone points out some completely unrelated piece of code 
>>>>> could be a bit better then nothing ever gets fixed. If you spot 
>>>>> something that you think should be improved, isn't the general 
>>>>> idea that you should post a patch yourself to improve it?
>>>>
>>>> There's two maintainers per branch and an order of magnitude or two 
>>>> more developers so it'd be nice if cleanups would just be incoming 
>>>> on self-initiative basis. ;)
>>>>
>>>>>> For the actual functional change at hand - it would be nice if 
>>>>>> code paths in question could handle SIGINT and then we could punt 
>>>>>> the decision on how long someone wants to wait purely to 
>>>>>> userspace. But it's probably hard and it's only debugfs so whatever.
>>>>>>
>>>>> The code paths in question will already abort on a signal won't 
>>>>> they? Both intel_gt_wait_for_idle() and 
>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>>> asking for is a change in the IGT libraries and/or CI framework to 
>>>>> start sending signals after some specific timeout. That seems like 
>>>>> a significantly more complex change (in terms of the number of 
>>>>> entities affected and number of groups involved) and unnecessary.
>>>>
>>>> If you say so, I haven't looked at them all. But if the code path 
>>>> in question already aborts on signals then I am not sure what is 
>>>> the patch fixing? I assumed you are trying to avoid the write stuck 
>>>> in D forever, which then prevents driver unload and everything, 
>>>> requiring the test runner to eventually reboot. If you say SIGINT 
>>>> works then you can already recover from userspace, no?
>>>>
>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>> most half from the test runner timeout.
>>>>> This is supposed to be test clean up. This is not about how long a 
>>>>> particular test takes to complete but about how long it takes to 
>>>>> declare the system broken after the test has already finished. I 
>>>>> would argue that even 10s is massively longer than required.
>>>>>
>>>>>>
>>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>>> could be just that the timeout is too short. What does wedging 
>>>>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>>>>> the key factor which would prevent the need to reboot the machine?
>>>>>>
>>>>> It gives us a system that knows what state it is in. If we can't 
>>>>> idle the GT then something is very badly wrong. Wedging indicates 
>>>>> that. It also ensures that a full GT reset will be attempted before 
>>>>> the next test is run. Helping to prevent a failure on test X from 
>>>>> propagating into failures of unrelated tests X+1, X+2, ... And if 
>>>>> the GT reset does not work because the system is really that badly 
>>>>> broken then future tests will not run rather than report erroneous 
>>>>> failures.
>>>>>
>>>>> This is not about getting a more stable system for end users by 
>>>>> sweeping issues under the carpet and pretending all is well. End 
>>>>> users don't run IGTs or explicitly call dodgy debugfs entry 
>>>>> points. The sole motivation here is to get more accurate results 
>>>>> from CI. That is, correctly identifying which test has hit a 
>>>>> problem, getting valid debug analysis for that test (logs and 
>>>>> such) and allowing further testing to complete correctly in the 
>>>>> case where the system can be recovered.
>>>>
>>>> I don't really oppose shortening of the timeout in principle, just 
>>>> want a clear statement if this is something IGT / test runner could 
>>>> already do or not. It can apply a timeout, it can also send SIGINT, 
>>>> and it could even trigger a reset from outside. Sure it is debugfs 
>>>> hacks so general "kernel should not implement policy" need not be 
>>>> strictly followed, but lets have it clear what are the options.
>>>
>>> One conceptual problem with applying this policy is that the code is:
>>>
>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>         if (ret)
>>>             return ret;
>>>     }
>>>
>>>     if (val & DROP_IDLE) {
>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>         if (ret)
>>>             return ret;
>>>     }
>>>
>>> So if someone passes in DROP_IDLE, why would only the first branch 
>>> have a short timeout and wedge? Yeah, some bug happens to be 
>>> there at the moment, but put a bug in a different place and you hang 
>>> on the second branch and then need another patch. Versus perhaps 
>>> making it all respect SIGINT and handle from outside.
>>>
>> The pm_wait_for_idle can only be called after gt_wait_for_idle has 
>> completed successfully. There is no route to skip the GT idle or to 
>> do the PM idle even if the GT idle fails. So the chances of the PM 
>> idle failing are greatly reduced. There would have to be something 
>> outside of a GT keeping the GPU awake and there isn't a whole lot of 
>> hardware left at that point!
>
> Well "greatly reduced" is beside my point. Point is today bug is here 
> and we add a timeout, tomorrow bug is there and then the same dance. 
> It can be just a sw bug which forgets to release the pm ref in some 
> circumstances, doesn't really matter.
>
Huh?

Greatly reduced is the whole point. Today there is a bug and it causes a 
kernel hang, which requires the CI framework to reboot the system in an 
extremely unfriendly way that makes it very hard to work out what 
happened. Logs are likely not available. We don't even necessarily know 
which test was being run at the time. Etc. So we replace the infinite 
timeout with a meaningful timeout. CI now correctly marks the single 
test as failing, captures all the correct logs, creates a useful bug 
report and continues on testing more stuff.

Sure, there is still the chance of hitting an infinite timeout. But that 
one is significantly more complicated to remove. And the chances of 
hitting that one are significantly smaller than the chances of hitting 
the first one.

So you are arguing that because I can't fix the last 0.1% of possible 
failures, I am not allowed to fix the first 99.9% of the failures?


>> Regarding signals, the PM idle code ends up at 
>> wait_var_event_killable(). I assume that is interruptible via at 
>> least a KILL signal if not any signal. Although it's not entirely 
>> clear trying to follow through the implementation of this code. Also, 
>> I have no idea if there is a safe way to add a timeout to that code 
>> (or why it wasn't already written with a timeout included). Someone 
>> more familiar with the wakeref internals would need to comment.
>>
>> However, I strongly disagree that we should not fix the driver just 
>> because it is possible to workaround the issue by re-writing the CI 
>> framework. Feel free to bring a redesign plan to the IGT WG and 
>> whatever equivalent CI meetings in parallel. But we absolutely should 
>> not have infinite waits in the kernel if there is a trivial way to 
>> not have infinite waits.
>
> I thought I was clear that I am not really opposed to the timeout.
>
> The rest of the paragraph I don't really care - point is moot because 
> it's debugfs so we can do whatever, as long as it is not burdensome to 
> i915, which this isn't. If either wasn't the case then we certainly 
> wouldn't be adding any workarounds in the kernel if it can be achieved 
> in IGT.
>
>> Also, sending a signal does not result in the wedge happening. I 
>> specifically did not want to change that code path because I was 
>> assuming there was a valid reason for it. If you have been 
>> interrupted then you are in the territory of maybe it would have 
>> succeeded if you just left it for a moment longer. Whereas, hitting 
>> the timeout says that someone very deliberately said this is too long 
>> to wait and therefore the system must be broken.
>
> I wanted to know specifically about wedging - why can't you 
> wedge/reset from IGT if DROP_IDLE times out in quiescent or wherever, 
> if that's what you say is the right thing? 
Huh?

DROP_IDLE has two waits. One that I am trying to change from infinite to 
finite + wedge. And one that would take considerable effort to change, 
would be quite invasive to a lot more of the driver, and can only be hit 
if the first wait actually completed successfully - and is therefore of 
less importance anyway. Both of those timeouts appear to respect signal 
interrupts.
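
For reference, this is roughly the shape both waits reduce to - a purely 
illustrative sketch of the standard kernel interruptible-wait pattern, 
not the actual i915 implementation:

    #include <linux/wait.h>
    #include <linux/errno.h>

    /* Illustrative only: wake on the condition, a signal, or timeout. */
    static long example_wait_for_idle(wait_queue_head_t *wq, bool *idle,
                                      unsigned long timeout /* jiffies */)
    {
            /* Returns -ERESTARTSYS on a signal, 0 on timeout, else the
             * jiffies remaining once the condition became true. */
            long ret = wait_event_interruptible_timeout(*wq,
                                                        READ_ONCE(*idle),
                                                        timeout);

            return ret < 0 ? ret : (ret == 0 ? -ETIME : 0);
    }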

> That's a policy decision, so why would i915 wedge just because an 
> arbitrary timeout expired? i915 does not control how much work is 
> outstanding at the point IGT decides to call DROP_IDLE.

Because this is a debug test interface that is used solely by IGT after 
it has finished its testing. This is not about wedging the device at 
some arbitrary point because an AI compute workload takes three hours to 
complete. This is about a very specific test framework cleaning up after 
testing is completed and making sure the test did not fry the system.

And even if an IGT test were calling DROP_IDLE in the middle of a test 
for some reason, it should not be deliberately pushing 10+ seconds of 
work through and then calling a debug-only interface to flush it out. If 
a test wants to verify that the system can cope with submitting a 
minute's worth of rendering and then waiting for it to complete, then 
the test should be using official channels for that wait.
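
To be explicit, by official channels I mean something like the gem_wait 
ioctl with a finite timeout - a minimal userspace sketch (error handling 
elided, 'handle' assumed to be a valid buffer object):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Wait up to 60s for outstanding rendering on a BO to complete,
     * via the supported uAPI rather than a debug-only flush. */
    static int wait_for_bo_idle(int drm_fd, uint32_t handle)
    {
            struct drm_i915_gem_wait wait = {
                    .bo_handle = handle,
                    .timeout_ns = 60ll * 1000 * 1000 * 1000,
            };

            return ioctl(drm_fd, DRM_IOCTL_I915_GEM_WAIT, &wait);
    }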

>
>> Plus, infinite wait is not a valid code path in the first place so 
>> any change in behaviour is not really a change in behaviour. Code 
>> can't be relying on a kernel call to never return for its correct 
>> operation!
>
> Why wouldn't an infinite wait be valid? Then you had better change the 
> other one as well. ;P
In what universe is it ever valid to wait forever for a test to complete?

See above, the PM code would require much more invasive changes. This 
was low-hanging fruit. It was supposed to be a two-minute change to a 
very self-contained section of code that would provide significant 
benefit when debugging a small class of very hard to debug problems.

John.


>
> Regards,
>
> Tvrtko
>
>> And if you don't wedge then you don't recover. Each subsequent test 
>> would just hit the infinite timeout, get killed by the CI framework's 
>> shiny new kill signal and be marked as yet another unrelated bug that 
>> will be logged separately. Whereas, using a sensible timeout and then 
>> wedging will at least attempt to recover the situation. And if it can 
>> be recovered, future tests will pass. If it can't then future testing 
>> will be aborted.
>>
>> John.
>>
>>
>>> Regards,
>>>
>>> Tvrtko
>>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-04 17:45                 ` John Harrison
@ 2022-11-07 14:09                   ` Tvrtko Ursulin
  0 siblings, 0 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-07 14:09 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel


On 04/11/2022 17:45, John Harrison wrote:
> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>> On 03/11/2022 19:16, John Harrison wrote:
>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>
>>>>>>>>> At the end of each test, IGT does a drop caches call via sysfs 
>>>>>>>>> with
>>>>>>>>
>>>>>>>> sysfs?
>>>>>> Sorry, that was meant to say debugfs. I've also been working on 
>>>>>> some sysfs IGT issues and evidently got my wires crossed!
>>>>>>
>>>>>>>>
>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>> with an
>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>> when CI
>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>> system times
>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>> can't do
>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>> reboots.
>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>> sometimes not.
>>>>>>>>>
>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>> timeout to
>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>>>> working system (if possible).
>>>>>>>>>
>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>> ---
>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>             DROP_RCU)
>>>>>>>>> +
>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>
>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also 
>>>>>>>> only used
>>>>>>>> here.
>>>>>>>
>>>>>>> So move here, dropping i915 prefix, next to the newly proposed one?
>>>>>> Sure, can do that.
>>>>>>
>>>>>>>
>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>>>> gt/intel_gt.c.
>>>>>>>
>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>
>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>> intel_gt_pm.c.
>>>>>>>
>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>
>>>>>> These two are totally unrelated and in code not being touched by 
>>>>>> this change. I would rather not conflate changing random other 
>>>>>> things with fixing this specific issue.
>>>>>>
>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>
>>>>>>> Add _MS suffix if wanted.
>>>>>>>
>>>>>>>> My head spins.
>>>>>>>
>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>> My original intention for the name was that it is the 'drop caches 
>>>>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful 
>>>>>> and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised 
>>>>>> later that name can be conflated with the DROP_IDLE flag. Will 
>>>>>> rename.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Things get refactored, code moves around, bits get left behind, 
>>>>>>> who knows. No reason to get too worked up. :) As long as people 
>>>>>>> are taking a wider view when touching the code base, and are not 
>>>>>>> afraid to send cleanups, things should be good.
>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>> because someone points out some completely unrelated piece of code 
>>>>>> could be a bit better then nothing ever gets fixed. If you spot 
>>>>>> something that you think should be improved, isn't the general 
>>>>>> idea that you should post a patch yourself to improve it?
>>>>>
>>>>> There's two maintainers per branch and an order of magnitude or two 
>>>>> more developers so it'd be nice if cleanups would just be incoming 
>>>>> on self-initiative basis. ;)
>>>>>
>>>>>>> For the actual functional change at hand - it would be nice if 
>>>>>>> code paths in question could handle SIGINT and then we could punt 
>>>>>>> the decision on how long someone wants to wait purely to 
>>>>>>> userspace. But it's probably hard and it's only debugfs so whatever.
>>>>>>>
>>>>>> The code paths in question will already abort on a signal won't 
>>>>>> they? Both intel_gt_wait_for_idle() and 
>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>>>> asking for is a change in the IGT libraries and/or CI framework to 
>>>>>> start sending signals after some specific timeout. That seems like 
>>>>>> a significantly more complex change (in terms of the number of 
>>>>>> entities affected and number of groups involved) and unnecessary.
>>>>>
>>>>> If you say so, I haven't looked at them all. But if the code path 
>>>>> in question already aborts on signals then I am not sure what is 
>>>>> the patch fixing? I assumed you are trying to avoid the write stuck 
>>>>> in D forever, which then prevents driver unload and everything, 
>>>>> requiring the test runner to eventually reboot. If you say SIGINT 
>>>>> works then you can already recover from userspace, no?
>>>>>
>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>> most half from the test runner timeout.
>>>>>> This is supposed to be test clean up. This is not about how long a 
>>>>>> particular test takes to complete but about how long it takes to 
>>>>>> declare the system broken after the test has already finished. I 
>>>>>> would argue that even 10s is massively longer than required.
>>>>>>
>>>>>>>
>>>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>>>> could be just that the timeout is too short. What does wedging 
>>>>>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>>>>>> the key factor which would prevent the need to reboot the machine?
>>>>>>>
>>>>>> It gives us a system that knows what state it is in. If we can't 
>>>>>> idle the GT then something is very badly wrong. Wedging indicates 
>>>>>> that. It also ensures that a full GT reset will be attempted before 
>>>>>> the next test is run. Helping to prevent a failure on test X from 
>>>>>> propagating into failures of unrelated tests X+1, X+2, ... And if 
>>>>>> the GT reset does not work because the system is really that badly 
>>>>>> broken then future tests will not run rather than report erroneous 
>>>>>> failures.
>>>>>>
>>>>>> This is not about getting a more stable system for end users by 
>>>>>> sweeping issues under the carpet and pretending all is well. End 
>>>>>> users don't run IGTs or explicitly call dodgy debugfs entry 
>>>>>> points. The sole motivation here is to get more accurate results 
>>>>>> from CI. That is, correctly identifying which test has hit a 
>>>>>> problem, getting valid debug analysis for that test (logs and 
>>>>>> such) and allowing further testing to complete correctly in the 
>>>>>> case where the system can be recovered.
>>>>>
>>>>> I don't really oppose shortening of the timeout in principle, just 
>>>>> want a clear statement if this is something IGT / test runner could 
>>>>> already do or not. It can apply a timeout, it can also send SIGINT, 
>>>>> and it could even trigger a reset from outside. Sure it is debugfs 
>>>>> hacks so general "kernel should not implement policy" need not be 
>>>>> strictly followed, but lets have it clear what are the options.
>>>>
>>>> One conceptual problem with applying this policy is that the code is:
>>>>
>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>         if (ret)
>>>>             return ret;
>>>>     }
>>>>
>>>>     if (val & DROP_IDLE) {
>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>         if (ret)
>>>>             return ret;
>>>>     }
>>>>
>>>> So if someone passes in DROP_IDLE, why would only the first branch 
>>>> have a short timeout and wedge? Yeah, some bug happens to be 
>>>> there at the moment, but put a bug in a different place and you hang 
>>>> on the second branch and then need another patch. Versus perhaps 
>>>> making it all respect SIGINT and handle from outside.
>>>>
>>> The pm_wait_for_idle can only be called after gt_wait_for_idle has 
>>> completed successfully. There is no route to skip the GT idle or to 
>>> do the PM idle even if the GT idle fails. So the chances of the PM 
>>> idle failing are greatly reduced. There would have to be something 
>>> outside of a GT keeping the GPU awake and there isn't a whole lot of 
>>> hardware left at that point!
>>
>> Well "greatly reduced" is beside my point. Point is today bug is here 
>> and we add a timeout, tomorrow bug is there and then the same dance. 
>> It can be just a sw bug which forgets to release the pm ref in some 
>> circumstances, doesn't really matter.
>>
> Huh?
> 
> Greatly reduced is the whole point. Today there is a bug and it causes a 
> kernel hang which requires the CI framework to reboot the system in an 
> extremely unfriendly way which makes it very hard to work out what 
> happened. Logs are likely not available. We don't even necessarily know 
> which test was being run at the time. Etc. So we replace the infinite 
> timeout with a meaningful timeout. CI now correctly marks the single 
> test as failing, captures all the correct logs, creates a useful bug 
> report and continues on testing more stuff.

So what is preventing CI from collecting logs if IGT is forever stuck in 
an interruptible wait? Surely it can collect the logs at that point if 
the kernel is healthy enough. If it isn't, then I don't see how wedging 
the GPU will make the kernel any healthier.

Is i915 preventing better log collection, or could the test runner be 
improved?

> Sure, there is still the chance of hitting an infinite timeout. But that 
> one is significantly more complicated to remove. And the chances of 
> hitting that one are significantly smaller than the chances of hitting 
> the first one.

This statement relies on intimate knowledge of implementation details 
and a bit too much of a white-box testing approach, but that's okay, 
let's move past this one.

> So you are arguing that because I can't fix the last 0.1% of possible 
> failures, I am not allowed to fix the first 99.9% of the failures?

I am clearly not arguing for that. But we are also not talking about 
"fixing failures" here, just about how to make CI cope better with a 
class of i915 bugs.

>>> Regarding signals, the PM idle code ends up at 
>>> wait_var_event_killable(). I assume that is interruptible via at 
>>> least a KILL signal if not any signal. Although it's not entirely 
>>> clear trying to follow through the implementation of this code. Also, 
>>> I have no idea if there is a safe way to add a timeout to that code 
>>> (or why it wasn't already written with a timeout included). Someone 
>>> more familiar with the wakeref internals would need to comment.
>>>
>>> However, I strongly disagree that we should not fix the driver just 
>>> because it is possible to workaround the issue by re-writing the CI 
>>> framework. Feel free to bring a redesign plan to the IGT WG and 
>>> whatever equivalent CI meetings in parallel. But we absolutely should 
>>> not have infinite waits in the kernel if there is a trivial way to 
>>> not have infinite waits.
>>
>> I thought I was clear that I am not really opposed to the timeout.
>>
>> The rest of the paragraph I don't really care - point is moot because 
>> it's debugfs so we can do whatever, as long as it is not burdensome to 
>> i915, which this isn't. If either wasn't the case then we certainly 
>> wouldn't be adding any workarounds in the kernel if it can be achieved 
>> in IGT.
>>
>>> Also, sending a signal does not result in the wedge happening. I 
>>> specifically did not want to change that code path because I was 
>>> assuming there was a valid reason for it. If you have been 
>>> interrupted then you are in the territory of maybe it would have 
>>> succeeded if you just left it for a moment longer. Whereas, hitting 
>>> the timeout says that someone very deliberately said this is too long 
>>> to wait and therefore the system must be broken.
>>
>> I wanted to know specifically about wedging - why can't you 
>> wedge/reset from IGT if DROP_IDLE times out in quiescent or wherever, 
>> if that's what you say is the right thing? 
> Huh?
> 
> DROP_IDLE has two waits. One that I am trying to change from infinite to 
> finite + wedge. One that would take considerable effort to change and 
> would be quite invasive to a lot more of the driver and which can only 
> be hit if the first timeout actually completed successfully and is 
> therefore of less importance anyway. Both of those timeouts appear to 
> respect signal interrupts.
> 
>> That's a policy decision, so why would i915 wedge just because an 
>> arbitrary timeout expired? i915 does not control how much work is 
>> outstanding at the point IGT decides to call DROP_IDLE.
> 
> Because this is a debug test interface that is used solely by IGT after 
> it has finished its testing. This is not about wedging the device at 
> some random arbitrary point because an AI compute workload takes three 
> hours to complete. This is about a very specific test framework cleaning 
> up after testing is completed and making sure the test did not fry the 
> system.
> 
> And even if an IGT test was calling DROP_IDLE in the middle of a test 
> for some reason, it should not be deliberately pushing 10+ seconds of 
> work through and then calling a debug-only interface to flush it out. If 
> a test wants to verify that the system can cope with submitting a 
> minute's worth of rendering and then waiting for it to complete, then the 
> test should be using official channels for that wait.
> 
>>
>>> Plus, infinite wait is not a valid code path in the first place so 
>>> any change in behaviour is not really a change in behaviour. Code 
>>> can't be relying on a kernel call to never return for its correct 
>>> operation!
>>
>> Why wouldn't an infinite wait be valid? Then you had better change the 
>> other one as well. ;P
> In what universe is it ever valid to wait forever for a test to complete?

Well, above you claimed both paths respect SIGINT. If that is so, then 
the wait is only as infinite as IGT wants it to be.

> See above, the PM code would require much more invasive changes. This 
> was low hanging fruit. It was supposed to be a two minute change to a 
> very self contained section of code that would provide significant 
> benefit to debugging a small class of very hard to debug problems.

Sure, but I'd still like to know why you can't do what you want from the 
IGT framework.

Have the timeout reduction in i915 - again, that's fine, assuming 10 
seconds is enough to not break something by accident.

With that change you have already broken the "infinite wait". It makes 
the debugfs write return -ETIME in a time much shorter than the test 
runner timeout(s). My question is: what is it that you cannot do from 
IGT at that point? You want to wedge then? Send DROP_RESET_ACTIVE to do 
it for you. If that doesn't work, add a new flag which wedges 
explicitly.
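
To make it concrete, this is roughly what the runner could do once the 
write returns -ETIME - a sketch only, with the drop_caches bit values 
written out by hand (the real definitions live in i915_debugfs.c, and 
IGT already has helpers for all of this):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define DROP_IDLE          0x40  /* assumed values, see above */
    #define DROP_RESET_ACTIVE  0x80

    /* 'path' is the usual <debugfs>/dri/<card>/i915_gem_drop_caches. */
    static void teardown_drop_caches(const char *path)
    {
            char buf[16];
            int len, fd = open(path, O_WRONLY);

            if (fd < 0)
                    return;

            /* Ask the driver to idle; with the patch this write fails
             * with -ETIME after 10s instead of blocking forever. */
            len = snprintf(buf, sizeof(buf), "0x%x", DROP_IDLE);
            if (write(fd, buf, len) < 0 && errno == ETIME) {
                    /* Timed out: apply the wedge/reset policy here,
                     * e.g. via DROP_RESET_ACTIVE as suggested above. */
                    len = snprintf(buf, sizeof(buf), "0x%x",
                                   DROP_RESET_ACTIVE);
                    write(fd, buf, len);
            }
            close(fd);
    }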

We are again descending into a huge philosophical discussion, when all I 
wanted to start with was to hear how exactly things go bad.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-07 14:09                   ` Tvrtko Ursulin
  0 siblings, 0 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-07 14:09 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: DRI-Devel


On 04/11/2022 17:45, John Harrison wrote:
> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>> On 03/11/2022 19:16, John Harrison wrote:
>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>
>>>>>>>>> At the end of each test, IGT does a drop caches call via sysfs 
>>>>>>>>> with
>>>>>>>>
>>>>>>>> sysfs?
>>>>>> Sorry, that was meant to say debugfs. I've also been working on 
>>>>>> some sysfs IGT issues and evidently got my wires crossed!
>>>>>>
>>>>>>>>
>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>> with an
>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>> when CI
>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>> system times
>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>> can't do
>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>> reboots.
>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>> sometimes not.
>>>>>>>>>
>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>> timeout to
>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>>>> working system (if possible).
>>>>>>>>>
>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>> ---
>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>             DROP_RCU)
>>>>>>>>> +
>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>
>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also 
>>>>>>>> only used
>>>>>>>> here.
>>>>>>>
>>>>>>> So move here, dropping i915 prefix, next to the newly proposed one?
>>>>>> Sure, can do that.
>>>>>>
>>>>>>>
>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>>>> gt/intel_gt.c.
>>>>>>>
>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>
>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>> intel_gt_pm.c.
>>>>>>>
>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>
>>>>>> These two are totally unrelated and in code not being touched by 
>>>>>> this change. I would rather not conflate changing random other 
>>>>>> things with fixing this specific issue.
>>>>>>
>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>
>>>>>>> Add _MS suffix if wanted.
>>>>>>>
>>>>>>>> My head spins.
>>>>>>>
>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>> My original intention for the name was that is the 'drop caches 
>>>>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful 
>>>>>> and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised 
>>>>>> later that name can be conflated with the DROP_IDLE flag. Will 
>>>>>> rename.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Things get refactored, code moves around, bits get left behind, 
>>>>>>> who knows. No reason to get too worked up. :) As long as people 
>>>>>>> are taking a wider view when touching the code base, and are not 
>>>>>>> afraid to send cleanups, things should be good.
>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>> because someone points out some completely unrelated piece of code 
>>>>>> could be a bit better then nothing ever gets fixed. If you spot 
>>>>>> something that you think should be improved, isn't the general 
>>>>>> idea that you should post a patch yourself to improve it?
>>>>>
>>>>> There's two maintainers per branch and an order of magnitude or two 
>>>>> more developers so it'd be nice if cleanups would just be incoming 
>>>>> on self-initiative basis. ;)
>>>>>
>>>>>>> For the actual functional change at hand - it would be nice if 
>>>>>>> code paths in question could handle SIGINT and then we could punt 
>>>>>>> the decision on how long someone wants to wait purely to 
>>>>>>> userspace. But it's probably hard and it's only debugfs so whatever.
>>>>>>>
>>>>>> The code paths in question will already abort on a signal won't 
>>>>>> they? Both intel_gt_wait_for_idle() and 
>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>>>> asking for is a change in the IGT libraries and/or CI framework to 
>>>>>> start sending signals after some specific timeout. That seems like 
>>>>>> a significantly more complex change (in terms of the number of 
>>>>>> entities affected and number of groups involved) and unnecessary.
>>>>>
>>>>> If you say so, I haven't looked at them all. But if the code path 
>>>>> in question already aborts on signals then I am not sure what is 
>>>>> the patch fixing? I assumed you are trying to avoid the write stuck 
>>>>> in D forever, which then prevents driver unload and everything, 
>>>>> requiring the test runner to eventually reboot. If you say SIGINT 
>>>>> works then you can already recover from userspace, no?
>>>>>
>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>> most half from the test runner timeout.
>>>>>> This is supposed to be test clean up. This is not about how long a 
>>>>>> particular test takes to complete but about how long it takes to 
>>>>>> declare the system broken after the test has already finished. I 
>>>>>> would argue that even 10s is massively longer than required.
>>>>>>
>>>>>>>
>>>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>>>> could be just that the timeout is too short. What does wedging 
>>>>>>> really give us, on top of limiting the wait, when latter AFAIU is 
>>>>>>> the key factor which would prevent the need to reboot the machine?
>>>>>>>
>>>>>> It gives us a system that knows what state it is in. If we can't 
>>>>>> idle the GT then something is very badly wrong. Wedging indicates 
>>>>>> that. It also ensure that a full GT reset will be attempted before 
>>>>>> the next test is run. Helping to prevent a failure on test X from 
>>>>>> propagating into failures of unrelated tests X+1, X+2, ... And if 
>>>>>> the GT reset does not work because the system is really that badly 
>>>>>> broken then future tests will not run rather than report erroneous 
>>>>>> failures.
>>>>>>
>>>>>> This is not about getting a more stable system for end users by 
>>>>>> sweeping issues under the carpet and pretending all is well. End 
>>>>>> users don't run IGTs or explicitly call dodgy debugfs entry 
>>>>>> points. The sole motivation here is to get more accurate results 
>>>>>> from CI. That is, correctly identifying which test has hit a 
>>>>>> problem, getting valid debug analysis for that test (logs and 
>>>>>> such) and allowing further testing to complete correctly in the 
>>>>>> case where the system can be recovered.
>>>>>
>>>>> I don't really oppose shortening of the timeout in principle, just 
>>>>> want a clear statement if this is something IGT / test runner could 
>>>>> already do or not. It can apply a timeout, it can also send SIGINT, 
>>>>> and it could even trigger a reset from outside. Sure it is debugfs 
>>>>> hacks so general "kernel should not implement policy" need not be 
>>>>> strictly followed, but lets have it clear what are the options.
>>>>
>>>> One conceptual problem with applying this policy is that the code is:
>>>>
>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>         if (ret)
>>>>             return ret;
>>>>     }
>>>>
>>>>     if (val & DROP_IDLE) {
>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>         if (ret)
>>>>             return ret;
>>>>     }
>>>>
>>>> So if someone passes in DROP_IDLE and then why would only the first 
>>>> branch have a short timeout and wedge. Yeah some bug happens to be 
>>>> there at the moment, but put a bug in a different place and you hang 
>>>> on the second branch and then need another patch. Versus perhaps 
>>>> making it all respect SIGINT and handle from outside.
>>>>
>>> The pm_wait_for_idle is can only called after gt_wait_for_idle has 
>>> completed successfully. There is no route to skip the GT idle or to 
>>> do the PM idle even if the GT idle fails. So the chances of the PM 
>>> idle failing are greatly reduced. There would have to be something 
>>> outside of a GT keeping the GPU awake and there isn't a whole lot of 
>>> hardware left at that point!
>>
>> Well "greatly reduced" is beside my point. Point is today bug is here 
>> and we add a timeout, tomorrow bug is there and then the same dance. 
>> It can be just a sw bug which forgets to release the pm ref in some 
>> circumstances, doesn't really matter.
>>
> Huh?
> 
> Greatly reduced is the whole point. Today there is a bug and it causes a 
> kernel hang which requires the CI framework to reboot the system in an 
> extremely unfriendly way which makes it very hard to work out what 
> happened. Logs are likely not available. We don't even necessarily know 
> which test was being run at the time. Etc. So we replace the infinite 
> timeout with a meaningful timeout. CI now correctly marks the single 
> test as failing, captures all the correct logs, creates a useful bug 
> report and continues on testing more stuff.

So what is preventing CI to collect logs if IGT is forever stuck in 
interruptible wait? Surely it can collect the logs at that point if the 
kernel is healthy enough. If it isn't then I don't see how wedging the 
GPU will make the kernel any healthier.

Is i915 preventing better log collection or could test runner be improved?

> Sure, there is still the chance of hitting an infinite timeout. But that 
> one is significantly more complicated to remove. And the chances of 
> hitting that one are significantly smaller than the chances of hitting 
> the first one.

This statement relies on intimate knowledge of implementation details and 
a bit too much of a white box testing approach, but that's okay, let's 
move past this one.

> So you are arguing that because I can't fix the last 0.1% of possible 
> failures, I am not allowed to fix the first 99.9% of the failures?

I am clearly not arguing for that. But we are also not talking about 
"fixing failures" here. Just how to make CI cope better with a class of 
i915 bugs.

>>> Regarding signals, the PM idle code ends up at 
>>> wait_var_event_killable(). I assume that is interruptible via at 
>>> least a KILL signal if not any signal. Although it's not entirely 
>>> clear trying to follow through the implementation of this code. Also, 
>>> I have no idea if there is a safe way to add a timeout to that code 
>>> (or why it wasn't already written with a timeout included). Someone 
>>> more familiar with the wakeref internals would need to comment.
>>>
>>> However, I strongly disagree that we should not fix the driver just 
>>> because it is possible to workaround the issue by re-writing the CI 
>>> framework. Feel free to bring a redesign plan to the IGT WG and 
>>> whatever equivalent CI meetings in parallel. But we absolutely should 
>>> not have infinite waits in the kernel if there is a trivial way to 
>>> not have infinite waits.
>>
>> I thought I was clear that I am not really opposed to the timeout.
>>
>> The rest of the paragraph I don't really care - point is moot because 
>> it's debugfs so we can do whatever, as long as it is not burdensome to 
>> i915, which this isn't. If either wasn't the case then we certainly 
>> wouldn't be adding any workarounds in the kernel if it can be achieved 
>> in IGT.
>>
>>> Also, sending a signal does not result in the wedge happening. I 
>>> specifically did not want to change that code path because I was 
>>> assuming there was a valid reason for it. If you have been 
>>> interrupted then you are in the territory of maybe it would have 
>>> succeeded if you just left it for a moment longer. Whereas, hitting 
>>> the timeout says that someone very deliberately said this is too long 
>>> to wait and therefore the system must be broken.
>>
>> I wanted to know specifically about wedging - why can't you 
>> wedge/reset from IGT if DROP_IDLE times out in quiescent or wherever, 
>> if that's what you say is the right thing? 
> Huh?
> 
> DROP_IDLE has two waits. One that I am trying to change from infinite to 
> finite + wedge. One that would take considerable effort to change and 
> would be quite invasive to a lot more of the driver and which can only 
> be hit if the first timeout actually completed successfully and is 
> therefore of less importance anyway. Both of those timeouts appear to 
> respect signal interrupts.
> 
>> That's a policy decision so why would i915 wedge if an arbitrary 
>> timeout expired? I915 is not controlling how much work there is 
>> outstanding at the point the IGT decides to call DROP_IDLE.
> 
> Because this is a debug test interface that is used solely by IGT after 
> it has finished its testing. This is not about wedging the device at 
> some random arbitrary point because an AI compute workload takes three 
> hours to complete. This is about a very specific test framework cleaning 
> up after testing is completed and making sure the test did not fry the 
> system.
> 
> And even if an IGT test was calling DROP_IDLE in the middle of a test 
> for some reason, it should not be deliberately pushing 10+ seconds of 
> work through and then calling a debug-only interface to flush it out. If 
> a test wants to verify that the system can cope with submitting a 
> minute's worth of rendering and then waiting for it to complete then the 
> test should be using official channels for that wait.
> 
>>
>>> Plus, infinite wait is not a valid code path in the first place so 
>>> any change in behaviour is not really a change in behaviour. Code 
>>> can't be relying on a kernel call to never return for its correct 
>>> operation!
>>
>> Why wouldn't an infinite wait be valid? Then you had better change the 
>> other one as well. ;P
> In what universe is it ever valid to wait forever for a test to complete?

Well above you claimed both paths respect SIGINT. If that is so then the 
wait is as infinite as the IGT wanted it to be.

> See above, the PM code would require much more invasive changes. This 
> was low hanging fruit. It was supposed to be a two minute change to a 
> very self contained section of code that would provide significant 
> benefit to debugging a small class of very hard to debug problems.

Sure, but I'd still like to know why can't you do what you want from the 
IGT framework.

Have the timeout reduction in i915, again that's fine, assuming 10 
seconds is enough to not break something by accident.

With that change you have already broken the "infinite wait". It makes 
the debugfs write return -ETIME in a time much shorter than the test 
runner timeout(s). My question is: what is it that you cannot do from 
IGT at that point? You want to wedge then? Send DROP_RESET_ACTIVE to 
do it for you? If that doesn't work, add a new flag which will wedge 
explicitly.
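
For illustration, the IGT side of that fallback could look roughly like 
the sketch below. Again a sketch only: the debugfs path assumes card 0, 
and the DROP_* values are assumed to match the definitions in 
i915_debugfs.c rather than being taken from a header.

    /* Sketch: recover from a DROP_IDLE timeout by wedging from userspace. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define DROP_ACTIVE       (1 << 3) /* assumed i915_debugfs.c values */
    #define DROP_IDLE         (1 << 6)
    #define DROP_RESET_ACTIVE (1 << 7)

    static int drop_caches(int fd, unsigned int val)
    {
        char buf[16];
        int len = snprintf(buf, sizeof(buf), "%u", val);

        return write(fd, buf, len) < 0 ? -errno : 0;
    }

    int main(void)
    {
        int fd = open("/sys/kernel/debug/dri/0/i915_gem_drop_caches",
                      O_WRONLY);

        if (fd < 0)
            return 1;

        /* Post-test cleanup: bounded idle wait, wedge/reset on timeout. */
        if (drop_caches(fd, DROP_IDLE | DROP_ACTIVE) == -ETIME)
            drop_caches(fd, DROP_RESET_ACTIVE);

        close(fd);
        return 0;
    }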

We are again devolving into a huge philosophical discussion, and all I 
wanted to start with was to hear how exactly things go bad.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-07 14:09                   ` Tvrtko Ursulin
@ 2022-11-07 19:45                     ` John Harrison
  -1 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-07 19:45 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel

On 11/7/2022 06:09, Tvrtko Ursulin wrote:
> On 04/11/2022 17:45, John Harrison wrote:
>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>> On 03/11/2022 19:16, John Harrison wrote:
>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>
>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>> sysfs with
>>>>>>>>>
>>>>>>>>> sysfs?
>>>>>>> Sorry, that was meant to say debugfs. I've also been working on 
>>>>>>> some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>
>>>>>>>>>
>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>> with an
>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>> when CI
>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>> system times
>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>> can't do
>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>> reboots.
>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>> sometimes not.
>>>>>>>>>>
>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>> timeout to
>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>>>>> working system (if possible).
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>> ---
>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>             DROP_RCU)
>>>>>>>>>> +
>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>
>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also 
>>>>>>>>> only used
>>>>>>>>> here.
>>>>>>>>
>>>>>>>> So move here, dropping i915 prefix, next to the newly proposed 
>>>>>>>> one?
>>>>>>> Sure, can do that.
>>>>>>>
>>>>>>>>
>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>>>>> gt/intel_gt.c.
>>>>>>>>
>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>
>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>> intel_gt_pm.c.
>>>>>>>>
>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>
>>>>>>> These two are totally unrelated and in code not being touched by 
>>>>>>> this change. I would rather not conflate changing random other 
>>>>>>> things with fixing this specific issue.
>>>>>>>
>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>
>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>
>>>>>>>>> My head spins.
>>>>>>>>
>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>> My original intention for the name was that it is the 'drop caches 
>>>>>>> timeout for intel_gt_wait_for_idle'. Which is quite the mouthful 
>>>>>>> and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I realised 
>>>>>>> later that the name can be conflated with the DROP_IDLE flag. Will 
>>>>>>> rename.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Things get refactored, code moves around, bits get left behind, 
>>>>>>>> who knows. No reason to get too worked up. :) As long as people 
>>>>>>>> are taking a wider view when touching the code base, and are 
>>>>>>>> not afraid to send cleanups, things should be good.
>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>> because someone points out that some completely unrelated piece 
>>>>>>> of code could be a bit better, then nothing ever gets fixed. If 
>>>>>>> you spot something that you think should be improved, isn't the 
>>>>>>> general idea that you should post a patch yourself to improve it?
>>>>>>
>>>>>> There are two maintainers per branch and an order of magnitude or 
>>>>>> two more developers, so it'd be nice if cleanups would just be 
>>>>>> incoming on a self-initiative basis. ;)
>>>>>>
>>>>>>>> For the actual functional change at hand - it would be nice if 
>>>>>>>> code paths in question could handle SIGINT and then we could 
>>>>>>>> punt the decision on how long someone wants to wait purely to 
>>>>>>>> userspace. But it's probably hard and it's only debugfs so 
>>>>>>>> whatever.
>>>>>>>>
>>>>>>> The code paths in question will already abort on a signal won't 
>>>>>>> they? Both intel_gt_wait_for_idle() and 
>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>>>>> asking for is a change in the IGT libraries and/or CI framework 
>>>>>>> to start sending signals after some specific timeout. That seems 
>>>>>>> like a significantly more complex change (in terms of the number 
>>>>>>> of entities affected and number of groups involved) and 
>>>>>>> unnecessary.
>>>>>>
>>>>>> If you say so, I haven't looked at them all. But if the code path 
>>>>>> in question already aborts on signals then I am not sure what the 
>>>>>> patch is fixing. I assumed you are trying to avoid the write 
>>>>>> being stuck in D forever, which then prevents driver unload and 
>>>>>> everything, requiring the test runner to eventually reboot. If 
>>>>>> you say SIGINT works then you can already recover from userspace, 
>>>>>> no?
>>>>>>
>>>>>>>> Whether or not 10s is enough, CI will hopefully tell us. I'd 
>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>> most half of the test runner timeout.
>>>>>>> This is supposed to be test clean up. This is not about how long 
>>>>>>> a particular test takes to complete but about how long it takes 
>>>>>>> to declare the system broken after the test has already 
>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>> required.
>>>>>>>
>>>>>>>>
>>>>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>>>>> it could be just that the timeout is too short. What does wedging 
>>>>>>>> really give us, on top of limiting the wait, when the latter 
>>>>>>>> AFAIU is the key factor which would prevent the need to reboot 
>>>>>>>> the machine?
>>>>>>>>
>>>>>>> It gives us a system that knows what state it is in. If we can't 
>>>>>>> idle the GT then something is very badly wrong. Wedging 
>>>>>>> indicates that. It also ensures that a full GT reset will be 
>>>>>>> attempted before the next test is run. Helping to prevent a 
>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>> the system is really that badly broken then future tests will 
>>>>>>> not run rather than report erroneous failures.
>>>>>>>
>>>>>>> This is not about getting a more stable system for end users by 
>>>>>>> sweeping issues under the carpet and pretending all is well. End 
>>>>>>> users don't run IGTs or explicitly call dodgy debugfs entry 
>>>>>>> points. The sole motivation here is to get more accurate results 
>>>>>>> from CI. That is, correctly identifying which test has hit a 
>>>>>>> problem, getting valid debug analysis for that test (logs and 
>>>>>>> such) and allowing further testing to complete correctly in the 
>>>>>>> case where the system can be recovered.
>>>>>>
>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>> just want a clear statement on whether this is something IGT / 
>>>>>> the test runner could already do or not. It can apply a timeout, 
>>>>>> it can also send SIGINT, and it could even trigger a reset from 
>>>>>> outside. Sure, these are debugfs hacks so the general "kernel 
>>>>>> should not implement policy" rule need not be strictly followed, 
>>>>>> but let's be clear about what the options are.
>>>>>
>>>>> One conceptual problem with applying this policy is that the code is:
>>>>>
>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>         if (ret)
>>>>>             return ret;
>>>>>     }
>>>>>
>>>>>     if (val & DROP_IDLE) {
>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>         if (ret)
>>>>>             return ret;
>>>>>     }
>>>>>
>>>>> So if someone passes in DROP_IDLE, why would only the first branch 
>>>>> have a short timeout and wedge? Yeah, some bug happens to be there 
>>>>> at the moment, but put a bug in a different place and you hang on 
>>>>> the second branch and then need another patch. Versus perhaps 
>>>>> making it all respect SIGINT and handling it from outside.
>>>>>
>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle has 
>>>> completed successfully. There is no route that skips the GT idle 
>>>> wait or that does the PM idle wait if the GT idle wait fails. So 
>>>> the chances of the PM idle failing are greatly reduced. There would 
>>>> have to be something outside of a GT keeping the GPU awake and 
>>>> there isn't a whole lot of hardware left at that point!
>>>
>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>> here and we add a timeout, tomorrow bug is there and then the same 
>>> dance. It can be just a sw bug which forgets to release the pm ref 
>>> in some circumstances, doesn't really matter.
>>>
>> Huh?
>>
>> Greatly reduced is the whole point. Today there is a bug and it 
>> causes a kernel hang which requires the CI framework to reboot the 
>> system in an extremely unfriendly way which makes it very hard to 
>> work out what happened. Logs are likely not available. We don't even 
>> necessarily know which test was being run at the time. Etc. So we 
>> replace the infinite timeout with a meaningful timeout. CI now 
>> correctly marks the single test as failing, captures all the correct 
>> logs, creates a useful bug report and continues on testing more stuff.
>
> So what is preventing CI from collecting logs if IGT is forever stuck 
> in an interruptible wait? Surely it can collect the logs at that point if 
> the kernel is healthy enough. If it isn't then I don't see how wedging 
> the GPU will make the kernel any healthier.
>
> Is i915 preventing better log collection or could test runner be 
> improved?
>
>> Sure, there is still the chance of hitting an infinite timeout. But 
>> that one is significantly more complicated to remove. And the chances 
>> of hitting that one are significantly smaller than the chances of 
>> hitting the first one.
>
> This statement relies on intimate knowledge of implementation details 
> and a bit too much of a white box testing approach, but that's okay, 
> let's move past this one.
>
>> So you are arguing that because I can't fix the last 0.1% of possible 
>> failures, I am not allowed to fix the first 99.9% of the failures?
>
> I am clearly not arguing for that. But we are also not talking about 
> "fixing failures" here. Just how to make CI cope better with a class 
> of i915 bugs.
>
>>>> Regarding signals, the PM idle code ends up at 
>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>> least a KILL signal if not any signal. Although it's not entirely 
>>>> clear trying to follow through the implementation of this code. 
>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>> that code (or why it wasn't already written with a timeout 
>>>> included). Someone more familiar with the wakeref internals would 
>>>> need to comment.
>>>>
>>>> However, I strongly disagree that we should not fix the driver just 
>>>> because it is possible to workaround the issue by re-writing the CI 
>>>> framework. Feel free to bring a redesign plan to the IGT WG and 
>>>> whatever equivalent CI meetings in parallel. But we absolutely 
>>>> should not have infinite waits in the kernel if there is a trivial 
>>>> way to not have infinite waits.
>>>
>>> I thought I was clear that I am not really opposed to the timeout.
>>>
>>> The rest of the paragraph I don't really care - point is moot 
>>> because it's debugfs so we can do whatever, as long as it is not 
>>> burdensome to i915, which this isn't. If either wasn't the case then 
>>> we certainly wouldn't be adding any workarounds in the kernel if it 
>>> can be achieved in IGT.
>>>
>>>> Also, sending a signal does not result in the wedge happening. I 
>>>> specifically did not want to change that code path because I was 
>>>> assuming there was a valid reason for it. If you have been 
>>>> interrupted then you are in the territory of maybe it would have 
>>>> succeeded if you just left it for a moment longer. Whereas, hitting 
>>>> the timeout says that someone very deliberately said this is too 
>>>> long to wait and therefore the system must be broken.
>>>
>>> I wanted to know specifically about wedging - why can't you 
>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>> wherever, if that's what you say is the right thing? 
>> Huh?
>>
>> DROP_IDLE has two waits. One that I am trying to change from infinite 
>> to finite + wedge. One that would take considerable effort to change 
>> and would be quite invasive to a lot more of the driver and which can 
>> only be hit if the first timeout actually completed successfully and 
>> is therefore of less importance anyway. Both of those timeouts 
>> appear to respect signal interrupts.
>>
>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>> timeout expired? I915 is not controlling how much work there is 
>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>
>> Because this is a debug test interface that is used solely by IGT 
>> after it has finished its testing. This is not about wedging the 
>> device at some random arbitrary point because an AI compute workload 
>> takes three hours to complete. This is about a very specific test 
>> framework cleaning up after testing is completed and making sure the 
>> test did not fry the system.
>>
>> And even if an IGT test was calling DROP_IDLE in the middle of a test 
>> for some reason, it should not be deliberately pushing 10+ seconds of 
> work through and then calling a debug-only interface to flush it out. 
>> If a test wants to verify that the system can cope with submitting a 
> minute's worth of rendering and then waiting for it to complete then 
>> the test should be using official channels for that wait.
>>
>>>
>>>> Plus, infinite wait is not a valid code path in the first place so 
>>>> any change in behaviour is not really a change in behaviour. Code 
>>>> can't be relying on a kernel call to never return for its correct 
>>>> operation!
>>>
>>> Why wouldn't an infinite wait be valid? Then you had better change 
>>> the other one as well. ;P
>> In what universe is it ever valid to wait forever for a test to 
>> complete?
>
> Well above you claimed both paths respect SIGINT. If that is so then 
> the wait is as infinite as the IGT wanted it to be.
>
>> See above, the PM code would require much more invasive changes. This 
>> was low hanging fruit. It was supposed to be a two minute change to a 
>> very self contained section of code that would provide significant 
>> benefit to debugging a small class of very hard to debug problems.
>
> Sure, but I'd still like to know why can't you do what you want from 
> the IGT framework.
>
> Have the timeout reduction in i915, again that's fine, assuming 10 
> seconds is enough to not break something by accident.
CI showed no regressions. And if someone does find a valid reason why a 
post-test drop caches call should legitimately take a stupidly long time, 
then it is easy to trace back where the ETIME error came from and bump 
the timeout.

>
> With that change you have already broken the "infinite wait". It makes 
> the debugfs write return -ETIME in a time much shorter than the test 
> runner timeout(s). My question is: what is it that you cannot do from 
> IGT at that point? You want to wedge then? Send 
> DROP_RESET_ACTIVE to do it for you? If that doesn't work, add a new 
> flag which will wedge explicitly.
>
> We are again devolving into a huge philosophical discussion, and all I 
> wanted to start with was to hear how exactly things go bad.
>
I have no idea what you want. I am trying to have a technical 
discussion about improving the stability of the driver during CI 
testing. I have no idea if you are arguing that this change is good, 
bad, broken, the wrong direction or what.

Things go bad as explained in the commit message. The CI framework does 
not use signals. The IGT framework does not use signals. There is no 
watchdog that sends a TERM or KILL signal after a specified timeout. All 
that happens is the IGT sits there forever waiting for the drop caches 
debugfs write to return. The CI framework eventually gives up waiting for the 
test to complete and tries to recover. There are many different CI 
frameworks in use across Intel. Some timeout quickly, some timeout 
slowly. But basically, they all eventually give up and don't bother 
trying any kind of remedial action but just hit the reset button 
(sometimes by literally power cycling the DUT). As a result, background 
processes that are saving dmesg, stdout, etc. do not necessarily 
terminate cleanly. That results in logs that are at best truncated, at 
worst missing entirely. It also results in some frameworks aborting 
testing at that point. So no results are generated for all the other 
tests that have yet to be run. Some frameworks also run tests in 
batches. All they log is that something, somewhere in the batch died. So 
you don't even know which specific test actually hit the problem.

Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
that something we have the ability to do with a simple patch? No. Would 
re-writing the IGT framework to add watchdog mechanisms improve things? 
Yes. Can it be done with a simple patch? No. Would a simple patch to 
i915 significantly improve the situation? Yes. Will it solve every 
possible CI hang? No. Will it fix any actual end user visible bugs? No. 
Will it introduce any new bugs? No. Will it help us to debug at least 
some CI failures? Yes.

John.

> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-07 19:45                     ` John Harrison
@ 2022-11-08  9:08                       ` Tvrtko Ursulin
  -1 siblings, 0 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-08  9:08 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel


On 07/11/2022 19:45, John Harrison wrote:
> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>> On 04/11/2022 17:45, John Harrison wrote:
>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>
>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>> sysfs with
>>>>>>>>>>
>>>>>>>>>> sysfs?
>>>>>>>> Sorry, that was meant to say debugfs. I've also been working on 
>>>>>>>> some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>>> with an
>>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>>> when CI
>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>> system times
>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and then
>>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>>> can't do
>>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>>> reboots.
>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>> sometimes not.
>>>>>>>>>>>
>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>> timeout to
>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue with a
>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>> ---
>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>> +
>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>
>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's also 
>>>>>>>>>> only used
>>>>>>>>>> here.
>>>>>>>>>
>>>>>>>>> So move here, dropping i915 prefix, next to the newly proposed 
>>>>>>>>> one?
>>>>>>>> Sure, can do that.
>>>>>>>>
>>>>>>>>>
>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only used in
>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>
>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>
>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>
>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>
>>>>>>>> These two are totally unrelated and in code not being touched by 
>>>>>>>> this change. I would rather not conflate changing random other 
>>>>>>>> things with fixing this specific issue.
>>>>>>>>
>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>
>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>
>>>>>>>>>> My head spins.
>>>>>>>>>
>>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>>> My original intention for the name was that it is the 'drop 
>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite the 
>>>>>>>> mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, I 
>>>>>>>> realised later that the name can be conflated with the DROP_IDLE 
>>>>>>>> flag. Will rename.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Things get refactored, code moves around, bits get left behind, 
>>>>>>>>> who knows. No reason to get too worked up. :) As long as people 
>>>>>>>>> are taking a wider view when touching the code base, and are 
>>>>>>>>> not afraid to send cleanups, things should be good.
>>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>>> because someone points out that some completely unrelated piece 
>>>>>>>> of code could be a bit better, then nothing ever gets fixed. If 
>>>>>>>> you spot something that you think should be improved, isn't the 
>>>>>>>> general idea that you should post a patch yourself to improve it?
>>>>>>>
>>>>>>> There are two maintainers per branch and an order of magnitude or 
>>>>>>> two more developers, so it'd be nice if cleanups would just be 
>>>>>>> incoming on a self-initiative basis. ;)
>>>>>>>
>>>>>>>>> For the actual functional change at hand - it would be nice if 
>>>>>>>>> code paths in question could handle SIGINT and then we could 
>>>>>>>>> punt the decision on how long someone wants to wait purely to 
>>>>>>>>> userspace. But it's probably hard and it's only debugfs so 
>>>>>>>>> whatever.
>>>>>>>>>
>>>>>>>> The code paths in question will already abort on a signal won't 
>>>>>>>> they? Both intel_gt_wait_for_idle() and 
>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>> uc_wait_for_idle eventually ends up, have an 'if(signal_pending) 
>>>>>>>> return -EINTR;' check. Beyond that, it sounds like what you are 
>>>>>>>> asking for is a change in the IGT libraries and/or CI framework 
>>>>>>>> to start sending signals after some specific timeout. That seems 
>>>>>>>> like a significantly more complex change (in terms of the number 
>>>>>>>> of entities affected and number of groups involved) and 
>>>>>>>> unnecessary.
>>>>>>>
>>>>>>> If you say so, I haven't looked at them all. But if the code path 
>>>>>>> in question already aborts on signals then I am not sure what the 
>>>>>>> patch is fixing. I assumed you are trying to avoid the write 
>>>>>>> being stuck in D forever, which then prevents driver unload and 
>>>>>>> everything, requiring the test runner to eventually reboot. If 
>>>>>>> you say SIGINT works then you can already recover from userspace, 
>>>>>>> no?
>>>>>>>
>>>>>>>>> Whether or not 10s is enough, CI will hopefully tell us. I'd 
>>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>>> most half of the test runner timeout.
>>>>>>>> This is supposed to be test clean up. This is not about how long 
>>>>>>>> a particular test takes to complete but about how long it takes 
>>>>>>>> to declare the system broken after the test has already 
>>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>>> required.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not convinced that wedging is correct though. Conceptually 
>>>>>>>>> it could be just that the timeout is too short. What does 
>>>>>>>>> wedging really give us, on top of limiting the wait, when the 
>>>>>>>>> latter AFAIU is the key factor which would prevent the need to 
>>>>>>>>> reboot the machine?
>>>>>>>>>
>>>>>>>> It gives us a system that knows what state it is in. If we can't 
>>>>>>>> idle the GT then something is very badly wrong. Wedging 
>>>>>>>> indicates that. It also ensures that a full GT reset will be 
>>>>>>>> attempted before the next test is run. Helping to prevent a 
>>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>>> the system is really that badly broken then future tests will 
>>>>>>>> not run rather than report erroneous failures.
>>>>>>>>
>>>>>>>> This is not about getting a more stable system for end users by 
>>>>>>>> sweeping issues under the carpet and pretending all is well. End 
>>>>>>>> users don't run IGTs or explicitly call dodgy debugfs entry 
>>>>>>>> points. The sole motivation here is to get more accurate results 
>>>>>>>> from CI. That is, correctly identifying which test has hit a 
>>>>>>>> problem, getting valid debug analysis for that test (logs and 
>>>>>>>> such) and allowing further testing to complete correctly in the 
>>>>>>>> case where the system can be recovered.
>>>>>>>
>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>> just want a clear statement on whether this is something IGT / 
>>>>>>> the test runner could already do or not. It can apply a timeout, 
>>>>>>> it can also send SIGINT, and it could even trigger a reset from 
>>>>>>> outside. Sure, these are debugfs hacks so the general "kernel 
>>>>>>> should not implement policy" rule need not be strictly followed, 
>>>>>>> but let's be clear about what the options are.
>>>>>>
>>>>>> One conceptual problem with applying this policy is that the code is:
>>>>>>
>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>         if (ret)
>>>>>>             return ret;
>>>>>>     }
>>>>>>
>>>>>>     if (val & DROP_IDLE) {
>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>         if (ret)
>>>>>>             return ret;
>>>>>>     }
>>>>>>
>>>>>> So if someone passes in DROP_IDLE, why would only the first branch 
>>>>>> have a short timeout and wedge? Yeah, some bug happens to be there 
>>>>>> at the moment, but put a bug in a different place and you hang on 
>>>>>> the second branch and then need another patch. Versus perhaps 
>>>>>> making it all respect SIGINT and handling it from outside.
>>>>>>
>>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle has 
>>>>> completed successfully. There is no route that skips the GT idle 
>>>>> wait or that does the PM idle wait if the GT idle wait fails. So 
>>>>> the chances of the PM idle failing are greatly reduced. There would 
>>>>> have to be something outside of a GT keeping the GPU awake and 
>>>>> there isn't a whole lot of hardware left at that point!
>>>>
>>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>>> here and we add a timeout, tomorrow bug is there and then the same 
>>>> dance. It can be just a sw bug which forgets to release the pm ref 
>>>> in some circumstances, doesn't really matter.
>>>>
>>> Huh?
>>>
>>> Greatly reduced is the whole point. Today there is a bug and it 
>>> causes a kernel hang which requires the CI framework to reboot the 
>>> system in an extremely unfriendly way which makes it very hard to 
>>> work out what happened. Logs are likely not available. We don't even 
>>> necessarily know which test was being run at the time. Etc. So we 
>>> replace the infinite timeout with a meaningful timeout. CI now 
>>> correctly marks the single test as failing, captures all the correct 
>>> logs, creates a useful bug report and continues on testing more stuff.
>>
>> So what is preventing CI from collecting logs if IGT is forever stuck 
>> in an interruptible wait? Surely it can collect the logs at that point if 
>> the kernel is healthy enough. If it isn't then I don't see how wedging 
>> the GPU will make the kernel any healthier.
>>
>> Is i915 preventing better log collection or could test runner be 
>> improved?
>>
>>> Sure, there is still the chance of hitting an infinite timeout. But 
>>> that one is significantly more complicated to remove. And the chances 
>>> of hitting that one are significantly smaller than the chances of 
>>> hitting the first one.
>>
>> This statement relies on intimate knowledge of implementation details 
>> and a bit too much of a white box testing approach, but that's okay, 
>> let's move past this one.
>>
>>> So you are arguing that because I can't fix the last 0.1% of possible 
>>> failures, I am not allowed to fix the first 99.9% of the failures?
>>
>> I am clearly not arguing for that. But we are also not talking about 
>> "fixing failures" here. Just how to make CI cope better with a class 
>> of i915 bugs.
>>
>>>>> Regarding signals, the PM idle code ends up at 
>>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>>> least a KILL signal, if not any signal, although it's not entirely 
>>>>> clear when trying to follow the implementation of this code. 
>>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>>> that code (or why it wasn't already written with a timeout 
>>>>> included). Someone more familiar with the wakeref internals would 
>>>>> need to comment.
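>>>>>
>>>>> (For what it's worth, wait_var_event_timeout() does exist, so a 
>>>>> timed version might conceptually look like the below. A sketch 
>>>>> only, reusing the var/condition pair of the killable variant; 
>>>>> whether the wakeref internals tolerate a bounded wait is exactly 
>>>>> the open question:)
>>>>>
>>>>>     err = wait_var_event_timeout(&wf->wakeref,
>>>>>                                  !intel_wakeref_is_active(wf),
>>>>>                                  timeout);
>>>>>     if (!err)
>>>>>         err = -ETIME;  /* 0 here means the timeout expired */
>>>>>     else
>>>>>         err = 0;       /* condition became true in time */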
>>>>>
>>>>> However, I strongly disagree that we should not fix the driver just 
>>>>> because it is possible to work around the issue by re-writing the CI 
>>>>> framework. Feel free to bring a redesign plan to the IGT WG and 
>>>>> whatever equivalent CI meetings in parallel. But we absolutely 
>>>>> should not have infinite waits in the kernel if there is a trivial 
>>>>> way to not have infinite waits.
>>>>
>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>
>>>> The rest of the paragraph I don't really care about - the point is 
>>>> moot because it's debugfs so we can do whatever, as long as it is not 
>>>> burdensome to i915, which this isn't. If either wasn't the case then 
>>>> we certainly wouldn't be adding workarounds in the kernel that could 
>>>> be achieved in IGT.
>>>>
>>>>> Also, sending a signal does not result in the wedge happening. I 
>>>>> specifically did not want to change that code path because I was 
>>>>> assuming there was a valid reason for it. If you have been 
>>>>> interrupted then you are in the territory of maybe it would have 
>>>>> succeeded if you just left it for a moment longer. Whereas, hitting 
>>>>> the timeout says that someone very deliberately said this is too 
>>>>> long to wait and therefore the system must be broken.
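>>>>>
>>>>> In code terms, the split being proposed is roughly (give or take 
>>>>> the exact error codes):
>>>>>
>>>>>     ret = intel_gt_wait_for_idle(gt, DROP_IDLE_TIMEOUT);
>>>>>     if (ret == -ETIME)      /* deadline expired: declare it broken */
>>>>>         intel_gt_set_wedged(gt);
>>>>>     /* -EINTR / -ERESTARTSYS: interrupted, not necessarily broken,
>>>>>      * so no wedge */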
>>>>
>>>> I wanted to know specifically about wedging - why can't you 
>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>> wherever, if that's what you say is the right thing? 
>>> Huh?
>>>
>>> DROP_IDLE has two waits. One that I am trying to change from infinite 
>>> to finite + wedge. One that would take considerable effort to change 
>>> and would be quite invasive to a lot more of the driver, and which can 
>>> only be hit if the first wait actually completed successfully and 
>>> is therefore of less importance anyway. Both of those timeouts 
>>> appear to respect signal interrupts.
>>>
>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>> timeout expired? I915 is not controlling how much work there is 
>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>
>>> Because this is a debug test interface that is used solely by IGT 
>>> after it has finished its testing. This is not about wedging the 
>>> device at some random arbitrary point because an AI compute workload 
>>> takes three hours to complete. This is about a very specific test 
>>> framework cleaning up after testing is completed and making sure the 
>>> test did not fry the system.
>>>
>>> And even if an IGT test was calling DROP_IDLE in the middle of a test 
>>> for some reason, it should not be deliberately pushing 10+ seconds of 
>>> work through and then calling a debug-only interface to flush it out. 
>>> If a test wants to verify that the system can cope with submitting a 
>>> minute's worth of rendering and then waiting for it to complete then 
>>> the test should be using official channels for that wait.
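>>>
>>> (By official channels I mean something like the gem_wait ioctl on the 
>>> final buffer - an illustrative sketch only, with the handle and fd 
>>> setup omitted:)
>>>
>>>     struct drm_i915_gem_wait wait = {
>>>         .bo_handle  = handle,                    /* last batch buffer */
>>>         .timeout_ns = 60ll * 1000 * 1000 * 1000, /* a full minute */
>>>     };
>>>     igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_I915_GEM_WAIT, &wait), 0);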
>>>
>>>>
>>>>> Plus, infinite wait is not a valid code path in the first place so 
>>>>> any change in behaviour is not really a change in behaviour. Code 
>>>>> can't be relying on a kernel call to never return for its correct 
>>>>> operation!
>>>>
>>>> Why wouldn't an infinite wait be valid? Then you had better change 
>>>> the other one as well. ;P
>>> In what universe is it ever valid to wait forever for a test to 
>>> complete?
>>
>> Well above you claimed both paths respect SIGINT. If that is so then 
>> the wait is as infinite as the IGT wanted it to be.
>>
>>> See above, the PM code would require much more invasive changes. This 
>>> was low hanging fruit. It was supposed to be a two minute change to a 
>>> very self contained section of code that would provide significant 
>>> benefit to debugging a small class of very hard to debug problems.
>>
>> Sure, but I'd still like to know why you can't do what you want from 
>> the IGT framework.
>>
>> Have the timeout reduction in i915, again that's fine assuming 10 
>> seconds is enough to not break something by accident.
> CI showed no regressions. And if someone does find a valid reason why a 
> post-test drop caches call should legitimately take a stupidly long time 
> then it is easy to track back where the -ETIME error came from and bump 
> the timeout.
> 
>>
>> With that change you have already broken the "infinite wait". It makes 
>> the debugfs write return -ETIME in a time much shorter than the test 
>> runner timeout(s). My question is: what is it that you cannot do from 
>> IGT at that point? You want to wedge then? Send 
>> DROP_RESET_ACTIVE to do it for you? If that doesn't work, add a new 
>> flag which wedges explicitly.
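>>
>> (The new flag would be trivial - hypothetical name and bit, something 
>> along these lines:)
>>
>>     #define DROP_WEDGED BIT(9) /* assuming bit 9 is the next free one */
>>
>>     if (val & DROP_WEDGED)
>>         intel_gt_set_wedged(gt);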
>>
>> We are again descending into a huge philosophical discussion, and all I 
>> wanted to start with is to hear how exactly things go bad.
>>
> I have no idea what you want. I am trying to have a technical 
> discussion about improving the stability of the driver during CI 
> testing. I have no idea if you are arguing that this change is good, 
> bad, broken, wrong direction or what.
> 
> Things go bad as explained in the commit message. The CI framework does 
> not use signals. The IGT framework does not use signals. There is no 
> watchdog that sends a TERM or KILL signal after a specified timeout. All 
> that happens is that IGT sits there forever waiting for the drop caches 
> debugfs write to return. The CI framework eventually gives up waiting for the 
> test to complete and tries to recover. There are many different CI 
> frameworks in use across Intel. Some timeout quickly, some timeout 
> slowly. But basically, they all eventually give up and don't bother 
> trying any kind of remedial action but just hit the reset button 
> (sometimes by literally power cycling the DUT). As a result, background 
> processes that are saving dmesg, stdout, etc. do not necessarily 
> terminate cleanly. That results in logs that are at best truncated, at 
> worst missing entirely. It also results in some frameworks aborting 
> testing at that point. So no results are generated for all the other 
> tests that have yet to be run. Some frameworks also run tests in 
> batches. All they log is that something, somewhere in the batch died. So 
> you don't even know which specific test actually hit the problem.
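>
> (For the record, the kind of watchdog that does not exist today would 
> only be a handful of lines in the test runner - an untested sketch:)
>
>     static void on_alarm(int sig) { /* just interrupt the syscall */ }
>
>     struct sigaction sa = { .sa_handler = on_alarm };
>     sigaction(SIGALRM, &sa, NULL); /* no SA_RESTART: write() sees -EINTR */
>     alarm(30);                     /* arbitrary watchdog timeout */
>     if (write(debugfs_fd, buf, len) < 0 && errno == EINTR)
>         igt_warn("drop caches timed out\n");
>     alarm(0);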
> 
> Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
> that something we have the ability to do with a simple patch? No. Would 
> re-writing the IGT framework to add watchdog mechanisms improve things? 
> Yes. Can it be done with a simple patch? No. Would a simple patch to 
> i915 significantly improve the situation? Yes. Will it solve every 
> possible CI hang? No. Will it fix any actual end user visible bugs? No. 
> Will it introduce any new bugs? No. Will it help us to debug at least 
> some CI failures? Yes.

To unblock, I suggest you go with the patch which caps the wait only, 
and propose wedging as an IGT patch to gem_quiescent_gpu(). That 
should involve the CI/IGT folks in a discussion on what logs will be, or 
will not be, collected once gem_quiescent_gpu() fails due to -ETIME. In fact 
you should probably copy the CI/IGT folks on the v2 of the i915 patch as 
well, since I now think their acks would be good to have - from the point 
of view of the current test runner behaviour with hanging tests.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-08  9:08                       ` Tvrtko Ursulin
@ 2022-11-08 19:37                         ` John Harrison
  -1 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-08 19:37 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel

On 11/8/2022 01:08, Tvrtko Ursulin wrote:
> On 07/11/2022 19:45, John Harrison wrote:
>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>> On 04/11/2022 17:45, John Harrison wrote:
>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>
>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>> sysfs with
>>>>>>>>>>>
>>>>>>>>>>> sysfs?
>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>>>> with an
>>>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>>>> when CI
>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>> system times
>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and 
>>>>>>>>>>>> then
>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>>>> can't do
>>>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>>>> reboots.
>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>
>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>> timeout to
>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>> with a
>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>> +
>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>
>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>> also only used
>>>>>>>>>>> here.
>>>>>>>>>>
>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>> proposed one?
>>>>>>>>> Sure, can do that.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>> used in
>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>
>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>
>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>
>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>
>>>>>>>>> These two are totally unrelated and in code not being touched 
>>>>>>>>> by this change. I would rather not conflate changing random 
>>>>>>>>> other things with fixing this specific issue.
>>>>>>>>>
>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>
>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>
>>>>>>>>>>> My head spins.
>>>>>>>>>>
>>>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>>>> My original intention for the name was that it is the 'drop 
>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite the 
>>>>>>>>> mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, 
>>>>>>>>> I realised later that the name can be conflated with the 
>>>>>>>>> DROP_IDLE flag. Will rename.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As long 
>>>>>>>>>> as people are taking a wider view when touching the code 
>>>>>>>>>> base, and are not afraid to send cleanups, things should be 
>>>>>>>>>> good.
>>>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>>>> because someone points out that some completely unrelated piece 
>>>>>>>>> of code could be a bit better, then nothing ever gets fixed. If 
>>>>>>>>> you spot something that you think should be improved, isn't 
>>>>>>>>> the general idea that you should post a patch yourself to 
>>>>>>>>> improve it?
>>>>>>>>
>>>>>>>> There are two maintainers per branch and an order of magnitude 
>>>>>>>> or two more developers, so it'd be nice if cleanups would just 
>>>>>>>> be incoming on a self-initiative basis. ;)
>>>>>>>>
>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>
>>>>>>>>> The code paths in question will already abort on a signal, 
>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>> 'if (signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>> and number of groups involved) and unnecessary.
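>>>>>>>>>
>>>>>>>>> (That is, both waits already carry the usual interruptible 
>>>>>>>>> pattern, along the lines of:)
>>>>>>>>>
>>>>>>>>>     if (signal_pending(current))
>>>>>>>>>         return -EINTR;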
>>>>>>>>
>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>> what the patch is fixing. I assumed you are trying to avoid the 
>>>>>>>> write getting stuck in D forever, which then prevents driver 
>>>>>>>> unload and everything, requiring the test runner to eventually 
>>>>>>>> reboot. If you say SIGINT works then you can already recover 
>>>>>>>> from userspace, no?
>>>>>>>>
>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>>>> most half from the test runner timeout.
>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>> long a particular test takes to complete but about how long it 
>>>>>>>>> takes to declare the system broken after the test has already 
>>>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>>>> required.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>> Conceptually it could just be that the timeout is too short. 
>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>> wait, when the latter AFAIU is the key factor which would 
>>>>>>>>>> prevent the need to reboot the machine?
>>>>>>>>>>
>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>> can't idle the GT then something is very badly wrong. Wedging 
>>>>>>>>> indicates that. It also ensures that a full GT reset will be 
>>>>>>>>> attempted before the next test is run, helping to prevent a 
>>>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>>>> the system is really that badly broken then future tests will 
>>>>>>>>> not run rather than report erroneous failures.
>>>>>>>>>
>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>> debugfs entry points. The sole motivation here is to get more 
>>>>>>>>> accurate results from CI. That is, correctly identifying which 
>>>>>>>>> test has hit a problem, getting valid debug analysis for that 
>>>>>>>>> test (logs and such) and allowing further testing to complete 
>>>>>>>>> correctly in the case where the system can be recovered.
>>>>>>>>
>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>> just want a clear statement of whether this is something IGT / 
>>>>>>>> test runner could already do or not. It can apply a timeout, it 
>>>>>>>> can also send SIGINT, and it could even trigger a reset from 
>>>>>>>> outside. Sure, it is debugfs hacks, so the general "kernel 
>>>>>>>> should not implement policy" rule need not be strictly followed, 
>>>>>>>> but let's have it clear what the options are.
>>>>>>>
>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>> code is:
>>>>>>>
>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>         if (ret)
>>>>>>>             return ret;
>>>>>>>     }
>>>>>>>
>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>         if (ret)
>>>>>>>             return ret;
>>>>>>>     }
>>>>>>>
>>>>>>> So if someone passes in DROP_IDLE, why would only the 
>>>>>>> first branch have a short timeout and wedge? Yes, some bug 
>>>>>>> happens to be there at the moment, but put a bug in a different 
>>>>>>> place and you hang on the second branch, and then you need 
>>>>>>> another patch. Versus perhaps making it all respect SIGINT and 
>>>>>>> handling it from outside.
>>>>>>>
>>>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle 
>>>>>> has completed successfully. There is no route to skip the GT idle 
>>>>>> or to do the PM idle even if the GT idle fails. So the chances of 
>>>>>> the PM idle failing are greatly reduced. There would have to be 
>>>>>> something outside of a GT keeping the GPU awake and there isn't a 
>>>>>> whole lot of hardware left at that point!
>>>>>
>>>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>>>> here and we add a timeout, tomorrow bug is there and then the same 
>>>>> dance. It can be just a sw bug which forgets to release the pm ref 
>>>>> in some circumstances, doesn't really matter.
>>>>>
>>>> Huh?
>>>>
>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>> causes a kernel hang which requires the CI framework to reboot the 
>>>> system in an extremely unfriendly way which makes it very hard to 
>>>> work out what happened. Logs are likely not available. We don't 
>>>> even necessarily know which test was being run at the time. Etc. So 
>>>> we replace the infinite timeout with a meaningful timeout. CI now 
>>>> correctly marks the single test as failing, captures all the 
>>>> correct logs, creates a useful bug report and continues on testing 
>>>> more stuff.
>>>
>>> So what is preventing CI from collecting logs if IGT is forever stuck 
>>> in an interruptible wait? Surely it can collect the logs at that 
>>> point if the kernel is healthy enough. If it isn't then I don't see 
>>> how wedging the GPU will make the kernel any healthier.
>>>
>>> Is i915 preventing better log collection or could test runner be 
>>> improved?
>>>
>>>> Sure, there is still the chance of hitting an infinite timeout. But 
>>>> that one is significantly more complicated to remove. And the 
>>>> chances of hitting that one are significantly smaller than the 
>>>> chances of hitting the first one.
>>>
>>> This statement relies on intimate knowledge of implementation details 
>>> and a bit too much of a white box testing approach, but that's okay, 
>>> let's move past this one.
>>>
>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>> failures?
>>>
>>> I am clearly not arguing for that. But we are also not talking about 
>>> "fixing failures" here. Just how to make CI cope better with a class 
>>> of i915 bugs.
>>>
>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>>>> least a KILL signal, if not any signal, although it's not entirely 
>>>>>> clear when trying to follow the implementation of this code. 
>>>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>>>> that code (or why it wasn't already written with a timeout 
>>>>>> included). Someone more familiar with the wakeref internals would 
>>>>>> need to comment.
>>>>>>
>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>> just because it is possible to work around the issue by re-writing 
>>>>>> the CI framework. Feel free to bring a redesign plan to the IGT 
>>>>>> WG and whatever equivalent CI meetings in parallel. But we 
>>>>>> absolutely should not have infinite waits in the kernel if there 
>>>>>> is a trivial way to not have infinite waits.
>>>>>
>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>
>>>>> The rest of the paragraph I don't really care about - the point is 
>>>>> moot because it's debugfs so we can do whatever, as long as it is 
>>>>> not burdensome to i915, which this isn't. If either wasn't the case 
>>>>> then we certainly wouldn't be adding workarounds in the kernel 
>>>>> that could be achieved in IGT.
>>>>>
>>>>>> Also, sending a signal does not result in the wedge happening. I 
>>>>>> specifically did not want to change that code path because I was 
>>>>>> assuming there was a valid reason for it. If you have been 
>>>>>> interrupted then you are in the territory of maybe it would have 
>>>>>> succeeded if you just left it for a moment longer. Whereas, 
>>>>>> hitting the timeout says that someone very deliberately said this 
>>>>>> is too long to wait and therefore the system must be broken.
>>>>>
>>>>> I wanted to know specifically about wedging - why can't you 
>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>> wherever, if that's what you say is the right thing? 
>>>> Huh?
>>>>
>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>> infinite to finite + wedge. One that would take considerable effort 
>>>> to change and would be quite invasive to a lot more of the driver, 
>>>> and which can only be hit if the first wait actually completed 
>>>> successfully and is therefore of less importance anyway. Both of 
>>>> those timeouts appear to respect signal interrupts.
>>>>
>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>
>>>> Because this is a debug test interface that is used solely by IGT 
>>>> after it has finished its testing. This is not about wedging the 
>>>> device at some random arbitrary point because an AI compute 
>>>> workload takes three hours to complete. This is about a very 
>>>> specific test framework cleaning up after testing is completed and 
>>>> making sure the test did not fry the system.
>>>>
>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>> seconds of work through and then calling a debug-only interface to 
>>>> flush it out. If a test wants to verify that the system can cope 
>>>> with submitting a minute's worth of rendering and then waiting for 
>>>> it to complete then the test should be using official channels for 
>>>> that wait.
>>>>
>>>>>
>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>> correct operation!
>>>>>
>>>>> Why wouldn't an infinite wait be valid? Then you had better change 
>>>>> the other one as well. ;P
>>>> In what universe is it ever valid to wait forever for a test to 
>>>> complete?
>>>
>>> Well above you claimed both paths respect SIGINT. If that is so then 
>>> the wait is as infinite as the IGT wanted it to be.
>>>
>>>> See above, the PM code would require much more invasive changes. 
>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>> change to a very self contained section of code that would provide 
>>>> significant benefit to debugging a small class of very hard to 
>>>> debug problems.
>>>
>>> Sure, but I'd still like to know why you can't do what you want from 
>>> the IGT framework.
>>>
>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>> seconds is enough to not break something by accident.
>> CI showed no regressions. And if someone does find a valid reason why 
>> a post-test drop caches call should legitimately take a stupidly long 
>> time then it is easy to track back where the -ETIME error came from 
>> and bump the timeout.
>>
>>>
>>> With that change you have already broken the "infinite wait". It 
>>> makes the debugfs write return -ETIME in a time much shorter than 
>>> the test runner timeout(s). My question is: what is it that you 
>>> cannot do from IGT at that point? You want to wedge then? Send 
>>> DROP_RESET_ACTIVE to do it for you? If that doesn't work, add a new 
>>> flag which wedges explicitly.
>>>
>>> We are again descending into a huge philosophical discussion, and all 
>>> I wanted to start with is to hear how exactly things go bad.
>>>
>> I have no idea what you want. I am trying to have a technical 
>> discussion about improving the stability of the driver during CI 
>> testing. I have no idea if you are arguing that this change is good, 
>> bad, broken, wrong direction or what.
>>
>> Things go bad as explained in the commit message. The CI framework 
>> does not use signals. The IGT framework does not use signals. There 
>> is no watchdog that sends a TERM or KILL signal after a specified 
>> timeout. All that happens is that IGT sits there forever waiting for 
>> the drop caches debugfs write to return. The CI framework eventually gives up 
>> waiting for the test to complete and tries to recover. There are many 
>> different CI frameworks in use across Intel. Some timeout quickly, 
>> some timeout slowly. But basically, they all eventually give up and 
>> don't bother trying any kind of remedial action but just hit the 
>> reset button (sometimes by literally power cycling the DUT). As a 
>> result, background processes that are saving dmesg, stdout, etc. do 
>> not necessarily terminate cleanly. That results in logs that are at 
>> best truncated, at worst missing entirely. It also results in some 
>> frameworks aborting testing at that point. So no results are 
>> generated for all the other tests that have yet to be run. Some 
>> frameworks also run tests in batches. All they log is that something, 
>> somewhere in the batch died. So you don't even know which specific 
>> test actually hit the problem.
>>
>> Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
>> that something we have the ability to do with a simple patch? No. 
>> Would re-writing the IGT framework to add watchdog mechanisms improve 
>> things? Yes. Can it be done with a simple patch? No. Would a simple 
>> patch to i915 significantly improve the situation? Yes. Will it solve 
>> every possible CI hang? No. Will it fix any actual end user visible 
>> bugs? No. Will it introduce any new bugs? No. Will it help us to 
>> debug at least some CI failures? Yes.
>
> To unblock, I suggest you go with the patch which caps the wait only, 
> and propose wedging as an IGT patch to gem_quiescent_gpu(). That 
> should involve the CI/IGT folks in a discussion on what logs will be, 
> or will not be, collected once gem_quiescent_gpu() fails due to -ETIME. 
> In fact you should probably copy the CI/IGT folks on the v2 of the i915 
> patch as well, since I now think their acks would be good to have - 
> from the point of view of the current test runner behaviour with 
> hanging tests.
>
Simply returning -ETIME without wedging will actually make the situation 
worse. At the moment, you get 'all testing stopped due to machine not 
responding' bugs being logged, which is a right pain and has very little 
useful information, but at least it is not claiming random tests are 
broken when they are not. If you return -ETIME without wedging then test 
A will hang and return -ETIME. CI will log an -ETIME bug against test A. 
CI will then try test B, which will fail with -ETIME because the system 
is still broken but claiming to be working. So a new bug gets logged 
against test B. Move on to test C, oh look, -ETIME - log another bug and 
move on to test D... That is far worse: a whole slew of pointless and 
incorrect bugs has just been logged.
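
Wedging avoids that cascade because the failure becomes explicit and 
self-describing - roughly speaking, once intel_gt_set_wedged() has run:

    /* sketch of the observable behaviour after wedging */
    if (intel_gt_is_wedged(gt))
        return -EIO; /* new work is rejected immediately and visibly */

So test B fails fast with a clear 'GPU is wedged' state rather than a 
misleading -ETIME, and the reset/recover path gets a chance to run 
before test C.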

And how is it possibly considered a backwards-breaking or dangerous 
change to wedge instead of hanging forever? Reboot versus wedge. 
Absolutely no defined behaviour at all because the system has simply 
stopped, versus marking the system as broken and making a best effort at 
handling the situation. Yup, that's definitely a very dangerous change 
that could break all sorts of random user applications.

Re 'IGT folks' - whom? Ashutosh had already agreed to the original patch.

And CI folks are certainly aware of such issues. There are any number of 
comments in Jiras about 'no logs available, cannot analyse'.

John.


> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-08 19:37                         ` John Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-08 19:37 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: DRI-Devel

On 11/8/2022 01:08, Tvrtko Ursulin wrote:
> On 07/11/2022 19:45, John Harrison wrote:
>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>> On 04/11/2022 17:45, John Harrison wrote:
>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>
>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>> sysfs with
>>>>>>>>>>>
>>>>>>>>>>> sysfs?
>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>>>> with an
>>>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>>>> when CI
>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>> system times
>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and 
>>>>>>>>>>>> then
>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>>>> can't do
>>>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>>>> reboots.
>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>
>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>> timeout to
>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>> with a
>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>> +
>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>
>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>> also only used
>>>>>>>>>>> here.
>>>>>>>>>>
>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>> proposed one?
>>>>>>>>> Sure, can do that.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>> used in
>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>
>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>
>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>
>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>
>>>>>>>>> These two are totally unrelated and in code not being touched 
>>>>>>>>> by this change. I would rather not conflate changing random 
>>>>>>>>> other things with fixing this specific issue.
>>>>>>>>>
>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>
>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>
>>>>>>>>>>> My head spins.
>>>>>>>>>>
>>>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>>>> My original intention for the name was that is the 'drop 
>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite the 
>>>>>>>>> mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, 
>>>>>>>>> I realised later that name can be conflated with the DROP_IDLE 
>>>>>>>>> flag. Will rename.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As long 
>>>>>>>>>> as people are taking a wider view when touching the code 
>>>>>>>>>> base, and are not afraid to send cleanups, things should be 
>>>>>>>>>> good.
>>>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>>>> because someone points out some completely unrelated piece of 
>>>>>>>>> code could be a bit better then nothing ever gets fixed. If 
>>>>>>>>> you spot something that you think should be improved, isn't 
>>>>>>>>> the general idea that you should post a patch yourself to 
>>>>>>>>> improve it?
>>>>>>>>
>>>>>>>> There's two maintainers per branch and an order of magnitude or 
>>>>>>>> two more developers so it'd be nice if cleanups would just be 
>>>>>>>> incoming on self-initiative basis. ;)
>>>>>>>>
>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>
>>>>>>>>> The code paths in question will already abort on a signal 
>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>> 'if(signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>> and number of groups involved) and unnecessary.
>>>>>>>>
>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>> what is the patch fixing? I assumed you are trying to avoid the 
>>>>>>>> write stuck in D forever, which then prevents driver unload and 
>>>>>>>> everything, requiring the test runner to eventually reboot. If 
>>>>>>>> you say SIGINT works then you can already recover from 
>>>>>>>> userspace, no?
>>>>>>>>
>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>>>> most half of the test runner timeout.
>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>> long a particular test takes to complete but about how long it 
>>>>>>>>> takes to declare the system broken after the test has already 
>>>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>>>> required.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>> Conceptually could be just that the timeout is too short. 
>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>> wait, when latter AFAIU is the key factor which would prevent 
>>>>>>>>>> the need to reboot the machine?
>>>>>>>>>>
>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>> can't idle the GT then something is very badly wrong. Wedging 
>>>>>>>>> indicates that. It also ensures that a full GT reset will be 
>>>>>>>>> attempted before the next test is run, helping to prevent a 
>>>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>>>> the system is really that badly broken then future tests will 
>>>>>>>>> not run rather than report erroneous failures.
>>>>>>>>>
>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>> debugfs entry points. The sole motivation here is to get more 
>>>>>>>>> accurate results from CI. That is, correctly identifying which 
>>>>>>>>> test has hit a problem, getting valid debug analysis for that 
>>>>>>>>> test (logs and such) and allowing further testing to complete 
>>>>>>>>> correctly in the case where the system can be recovered.
>>>>>>>>
>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>> just want a clear statement if this is something IGT / test 
>>>>>>>> runner could already do or not. It can apply a timeout, it can 
>>>>>>>> also send SIGINT, and it could even trigger a reset from 
>>>>>>>> outside. Sure, it is debugfs hacks, so the general "kernel 
>>>>>>>> should not implement policy" rule need not be strictly followed, 
>>>>>>>> but let's be clear what the options are.
>>>>>>>
>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>> code is:
>>>>>>>
>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>         if (ret)
>>>>>>>             return ret;
>>>>>>>     }
>>>>>>>
>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>         if (ret)
>>>>>>>             return ret;
>>>>>>>     }
>>>>>>>
>>>>>>> So if someone passes in DROP_IDLE and then why would only the 
>>>>>>> first branch have a short timeout and wedge. Yeah some bug 
>>>>>>> happens to be there at the moment, but put a bug in a different 
>>>>>>> place and you hang on the second branch and then need another 
>>>>>>> patch. Versus perhaps making it all respect SIGINT and handle 
>>>>>>> from outside.
>>>>>>>
>>>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle 
>>>>>> has completed successfully. There is no route to skip the GT idle 
>>>>>> or to do the PM idle even if the GT idle fails. So the chances of 
>>>>>> the PM idle failing are greatly reduced. There would have to be 
>>>>>> something outside of a GT keeping the GPU awake and there isn't a 
>>>>>> whole lot of hardware left at that point!
>>>>>
>>>>> Well "greatly reduced" is beside my point. The point is: today the 
>>>>> bug is here and we add a timeout, tomorrow the bug is there and 
>>>>> then the same dance. It can be just a sw bug which forgets to 
>>>>> release the pm ref in some circumstances, it doesn't really matter.
>>>>>
>>>> Huh?
>>>>
>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>> causes a kernel hang which requires the CI framework to reboot the 
>>>> system in an extremely unfriendly way which makes it very hard to 
>>>> work out what happened. Logs are likely not available. We don't 
>>>> even necessarily know which test was being run at the time. Etc. So 
>>>> we replace the infinite timeout with a meaningful timeout. CI now 
>>>> correctly marks the single test as failing, captures all the 
>>>> correct logs, creates a useful bug report and continues on testing 
>>>> more stuff.
>>>
>>> So what is preventing CI from collecting logs if IGT is forever stuck in 
>>> interruptible wait? Surely it can collect the logs at that point if 
>>> the kernel is healthy enough. If it isn't then I don't see how 
>>> wedging the GPU will make the kernel any healthier.
>>>
>>> Is i915 preventing better log collection or could test runner be 
>>> improved?
>>>
>>>> Sure, there is still the chance of hitting an infinite timeout. But 
>>>> that one is significantly more complicated to remove. And the 
>>>> chances of hitting that one are significantly smaller than the 
>>>> chances of hitting the first one.
>>>
>>> This statement relies on intimate knowledge of implementation 
>>> details and a bit too much of a white box testing approach, but 
>>> that's okay, let's move past this one.
>>>
>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>> failures?
>>>
>>> I am clearly not arguing for that. But we are also not talking about 
>>> "fixing failures" here. Just how to make CI cope better with a class 
>>> of i915 bugs.
>>>
>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>>>> least a KILL signal if not any signal. Although it's not entirely 
>>>>>> clear from trying to follow the implementation of this code. 
>>>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>>>> that code (or why it wasn't already written with a timeout 
>>>>>> included). Someone more familiar with the wakeref internals would 
>>>>>> need to comment.
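>>>>>>
>>>>>> (For what it's worth, linux/wait_bit.h does have a timeout
>>>>>> variant next to the killable one, so a capped wait could look
>>>>>> something like this - an untested sketch, with 'wf' and the
>>>>>> condition written from memory rather than copied from the code:
>>>>>>
>>>>>>     /* today, roughly: */
>>>>>>     err = wait_var_event_killable(&wf->wakeref,
>>>>>>                                   !intel_wakeref_is_active(wf));
>>>>>>
>>>>>>     /* bounded alternative; returns 0 on timeout: */
>>>>>>     if (!wait_var_event_timeout(&wf->wakeref,
>>>>>>                                 !intel_wakeref_is_active(wf),
>>>>>>                                 10 * HZ))
>>>>>>         err = -ETIME;
>>>>>>
>>>>>> Whether that is safe here is exactly the wakeref question above.)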
>>>>>>
>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>> just because it is possible to work around the issue by re-writing 
>>>>>> the CI framework. Feel free to bring a redesign plan to the IGT 
>>>>>> WG and whatever equivalent CI meetings in parallel. But we 
>>>>>> absolutely should not have infinite waits in the kernel if there 
>>>>>> is a trivial way to not have infinite waits.
>>>>>
>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>
>>>>> The rest of the paragraph I don't really care about - the point is 
>>>>> moot because it's debugfs so we can do whatever, as long as it is 
>>>>> not burdensome to i915, which this isn't. If either wasn't the case 
>>>>> then we certainly wouldn't be adding any workarounds in the kernel 
>>>>> if it can be achieved in IGT.
>>>>>
>>>>>> Also, sending a signal does not result in the wedge happening. I 
>>>>>> specifically did not want to change that code path because I was 
>>>>>> assuming there was a valid reason for it. If you have been 
>>>>>> interrupted then you are in the territory of maybe it would have 
>>>>>> succeeded if you just left it for a moment longer. Whereas, 
>>>>>> hitting the timeout says that someone very deliberately said this 
>>>>>> is too long to wait and therefore the system must be broken.
>>>>>
>>>>> I wanted to know specifically about wedging - why can't you 
>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>> wherever, if that's what you say is the right thing? 
>>>> Huh?
>>>>
>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>> infinite to finite + wedge. One that would take considerable effort 
>>>> to change and would be quite invasive to a lot more of the driver 
>>>> and which can only be hit if the first timeout actually completed 
>>>> successfully and is therefore of less importance anyway. Both of 
>>>> those time outs appear to respect signal interrupts.
>>>>
>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>
>>>> Because this is a debug test interface that is used solely by IGT 
>>>> after it has finished its testing. This is not about wedging the 
>>>> device at some random arbitrary point because an AI compute 
>>>> workload takes three hours to complete. This is about a very 
>>>> specific test framework cleaning up after testing is completed and 
>>>> making sure the test did not fry the system.
>>>>
>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>> seconds of work through and then calling a debug only interface to 
>>>> flush it out. If a test wants to verify that the system can cope 
>>>> with submitting a minute's worth of rendering and then waiting for 
>>>> it to complete then the test should be using official channels for 
>>>> that wait.
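>>>>
>>>> (The official channel being, for example, the gem_wait ioctl,
>>>> which takes an explicit timeout chosen by the caller - a rough
>>>> userspace sketch:
>>>>
>>>>     struct drm_i915_gem_wait wait = {
>>>>         .bo_handle = handle,
>>>>         .timeout_ns = 60ll * 1000 * 1000 * 1000, /* 60s */
>>>>     };
>>>>     ret = ioctl(fd, DRM_IOCTL_I915_GEM_WAIT, &wait);
>>>>
>>>> That way the test, not a debug interface, owns the wait policy.)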
>>>>
>>>>>
>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>> correct operation!
>>>>>
>>>>> Why wouldn't an infinite wait be valid? Then you'd better change 
>>>>> the other one as well. ;P
>>>> In what universe is it ever valid to wait forever for a test to 
>>>> complete?
>>>
>>> Well above you claimed both paths respect SIGINT. If that is so then 
>>> the wait is as infinite as the IGT wanted it to be.
>>>
>>>> See above, the PM code would require much more invasive changes. 
>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>> change to a very self contained section of code that would provide 
>>>> significant benefit to debugging a small class of very hard to 
>>>> debug problems.
>>>
>>> Sure, but I'd still like to know why you can't do what you want from 
>>> the IGT framework.
>>>
>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>> seconds is enough to not break something by accident.
>> CI showed no regressions. And if someone does find a valid reason why 
>> a post test drop caches call should legitimately take a stupidly long 
>> time then it is easy to track back where the ETIME error came from 
>> and bump the timeout.
>>
>>>
>>> With that change you already have broken the "infinite wait". It 
>>> makes the debugfs write return -ETIME in time much shorter than the 
>>> test runner timeout(s). What is the thing that you cannot do from 
>>> IGT at that point is my question? You want to wedge then? Send 
>>> DROP_RESET_ACTIVE to do it for you? If that doesn't work add a new 
>>> flag which will wedge explicitly.
>>>
>>> We are again degrading into a huge philosophical discussion and all 
>>> I wanted to start with is to hear how exactly things go bad.
>>>
>> I have no idea what you are wanting. I am trying to have a technical 
>> discussion about improving the stability of the driver during CI 
>> testing. I have no idea if you are arguing that this change is good, 
>> bad, broken, wrong direction or what.
>>
>> Things go bad as explained in the commit message. The CI framework 
>> does not use signals. The IGT framework does not use signals. There 
>> is no watchdog that sends a TERM or KILL signal after a specified 
>> timeout. All that happens is the IGT sits there forever waiting for 
>> the drop caches IOCTL to return. The CI framework eventually gives up 
>> waiting for the test to complete and tries to recover. There are many 
>> different CI frameworks in use across Intel. Some timeout quickly, 
>> some timeout slowly. But basically, they all eventually give up and 
>> don't bother trying any kind of remedial action but just hit the 
>> reset button (sometimes by literally power cycling the DUT). As a 
>> result, background processes that are saving dmesg, stdout, etc. do 
>> not necessarily terminate cleanly. That results in logs that are at 
>> best truncated, at worst missing entirely. It also results in some 
>> frameworks aborting testing at that point. So no results are 
>> generated for all the other tests that have yet to be run. Some 
>> frameworks also run tests in batches. All they log is that something, 
>> somewhere in the batch died. So you don't even know which specific 
>> test actually hit the problem.
>>
>> Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
>> that something we have the ability to do with a simple patch? No. 
>> Would re-writing the IGT framework to add watchdog mechanisms improve 
>> things? Yes. Can it be done with a simple patch? No. Would a simple 
>> patch to i915 significantly improve the situation? Yes. Will it solve 
>> every possible CI hang? No. Will it fix any actual end user visible 
>> bugs? No. Will it introduce any new bugs? No. Will it help us to 
>> debug at least some CI failures? Yes.
>
> To unblock, I suggest you go with the patch which caps the wait only, 
> and propose wedging as an IGT patch to gem_quiescent_gpu(). That 
> should bring the CI/IGT folks into the discussion on what logs will 
> be, or will not be, collected once gem_quiescent_gpu() fails due to 
> -ETIME. In fact you should probably copy the CI/IGT folks on the v2 
> of the i915 patch as well, since I now think their acks would be good 
> to have - from the point of view of the current test runner behaviour 
> with hanging tests.
>
Simply returning -ETIME without wedging will actually make the situation 
worse. At the moment, you get 'all testing stopped due to machine not 
responding' bugs being logged. Which is a right pain and has very little 
useful information, but at least is not claiming random tests are broken 
when they are not. If you return ETIME without wedging then test A will 
hang and return ETIME. CI will log an ETIME bug against test A. CI will 
then try test B, which will fail with ETIME because the system is still 
broken but claiming to be working. So log a new bug against test B. Move 
on to test C, oh look, ETIME - log another bug and move on to test D... 
That is far worse, a whole slew of pointless and incorrect bugs have 
just been logged.

And how is it possibly considered a backwards breaking or dangerous 
change to wedge instead of hanging forever? Reboot versus wedge. 
Absolutely no defined behaviour at all because the system has simply 
stopped versus marking the system as broken and having a best effort at 
handling the situation. Yup, that's definitely a very dangerous change 
that could break all sorts of random user applications.

Re 'IGT folks' - whom? Ashutosh had already agreed to the original patch.

And CI folks are certainly aware of such issues. There are any number of 
comments in Jiras about 'no logs available, cannot analyse'.

John.


> Regards,
>
> Tvrtko


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-08 19:37                         ` John Harrison
@ 2022-11-09 11:35                           ` Tvrtko Ursulin
  -1 siblings, 0 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-09 11:35 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel


On 08/11/2022 19:37, John Harrison wrote:
> On 11/8/2022 01:08, Tvrtko Ursulin wrote:
>> On 07/11/2022 19:45, John Harrison wrote:
>>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>>> On 04/11/2022 17:45, John Harrison wrote:
>>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>
>>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>>> sysfs with
>>>>>>>>>>>>
>>>>>>>>>>>> sysfs?
>>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>>>>> with an
>>>>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>>>>> when CI
>>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>>> system times
>>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and 
>>>>>>>>>>>>> then
>>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>>>>> can't do
>>>>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>>>>> reboots.
>>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>>> timeout to
>>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>>> with a
>>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>>
>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>>> also only used
>>>>>>>>>>>> here.
>>>>>>>>>>>
>>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>>> proposed one?
>>>>>>>>>> Sure, can do that.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>>> used in
>>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>>
>>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>>
>>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>>
>>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>>
>>>>>>>>>> These two are totally unrelated and in code not being touched 
>>>>>>>>>> by this change. I would rather not conflate changing random 
>>>>>>>>>> other things with fixing this specific issue.
>>>>>>>>>>
>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>>
>>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>>
>>>>>>>>>>>> My head spins.
>>>>>>>>>>>
>>>>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>>>>> My original intention for the name was that it is the 'drop 
>>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite the 
>>>>>>>>>> mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, 
>>>>>>>>>> I realised later that the name can be conflated with the 
>>>>>>>>>> DROP_IDLE flag. Will rename.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As long 
>>>>>>>>>>> as people are taking a wider view when touching the code 
>>>>>>>>>>> base, and are not afraid to send cleanups, things should be 
>>>>>>>>>>> good.
>>>>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>>>>> because someone points out that some completely unrelated piece 
>>>>>>>>>> of code could be a bit better, then nothing ever gets fixed. If 
>>>>>>>>>> you spot something that you think should be improved, isn't 
>>>>>>>>>> the general idea that you should post a patch yourself to 
>>>>>>>>>> improve it?
>>>>>>>>>
>>>>>>>>> There are two maintainers per branch and an order of magnitude 
>>>>>>>>> or two more developers, so it'd be nice if cleanups would just 
>>>>>>>>> be incoming on a self-initiative basis. ;)
>>>>>>>>>
>>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>>
>>>>>>>>>> The code paths in question will already abort on a signal 
>>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>>> 'if(signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>>> and number of groups involved) and unnecessary.
>>>>>>>>>
>>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>>> what the patch is fixing. I assumed you are trying to avoid the 
>>>>>>>>> write being stuck in D forever, which then prevents driver unload and 
>>>>>>>>> everything, requiring the test runner to eventually reboot. If 
>>>>>>>>> you say SIGINT works then you can already recover from 
>>>>>>>>> userspace, no?
>>>>>>>>>
>>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>>>>> most half of the test runner timeout.
>>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>>> long a particular test takes to complete but about how long it 
>>>>>>>>>> takes to declare the system broken after the test has already 
>>>>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>>>>> required.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>>> Conceptually could be just that the timeout is too short. 
>>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>>> wait, when latter AFAIU is the key factor which would prevent 
>>>>>>>>>>> the need to reboot the machine?
>>>>>>>>>>>
>>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>>> can't idle the GT then something is very badly wrong. Wedging 
>>>>>>>>>> indicates that. It also ensures that a full GT reset will be 
>>>>>>>>>> attempted before the next test is run, helping to prevent a 
>>>>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>>>>> the system is really that badly broken then future tests will 
>>>>>>>>>> not run rather than report erroneous failures.
>>>>>>>>>>
>>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>>> debugfs entry points. The sole motivation here is to get more 
>>>>>>>>>> accurate results from CI. That is, correctly identifying which 
>>>>>>>>>> test has hit a problem, getting valid debug analysis for that 
>>>>>>>>>> test (logs and such) and allowing further testing to complete 
>>>>>>>>>> correctly in the case where the system can be recovered.
>>>>>>>>>
>>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>>> just want a clear statement if this is something IGT / test 
>>>>>>>>> runner could already do or not. It can apply a timeout, it can 
>>>>>>>>> also send SIGINT, and it could even trigger a reset from 
>>>>>>>>> outside. Sure, it is debugfs hacks, so the general "kernel 
>>>>>>>>> should not implement policy" rule need not be strictly followed, 
>>>>>>>>> but let's be clear what the options are.
>>>>>>>>
>>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>>> code is:
>>>>>>>>
>>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>>         if (ret)
>>>>>>>>             return ret;
>>>>>>>>     }
>>>>>>>>
>>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>>         if (ret)
>>>>>>>>             return ret;
>>>>>>>>     }
>>>>>>>>
>>>>>>>> So if someone passes in DROP_IDLE and then why would only the 
>>>>>>>> first branch have a short timeout and wedge. Yeah some bug 
>>>>>>>> happens to be there at the moment, but put a bug in a different 
>>>>>>>> place and you hang on the second branch and then need another 
>>>>>>>> patch. Versus perhaps making it all respect SIGINT and handle 
>>>>>>>> from outside.
>>>>>>>>
>>>>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle 
>>>>>>> has completed successfully. There is no route to skip the GT idle 
>>>>>>> or to do the PM idle even if the GT idle fails. So the chances of 
>>>>>>> the PM idle failing are greatly reduced. There would have to be 
>>>>>>> something outside of a GT keeping the GPU awake and there isn't a 
>>>>>>> whole lot of hardware left at that point!
>>>>>>
>>>>>> Well "greatly reduced" is beside my point. The point is: today the 
>>>>>> bug is here and we add a timeout, tomorrow the bug is there and 
>>>>>> then the same dance. It can be just a sw bug which forgets to 
>>>>>> release the pm ref in some circumstances, it doesn't really matter.
>>>>>>
>>>>> Huh?
>>>>>
>>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>>> causes a kernel hang which requires the CI framework to reboot the 
>>>>> system in an extremely unfriendly way which makes it very hard to 
>>>>> work out what happened. Logs are likely not available. We don't 
>>>>> even necessarily know which test was being run at the time. Etc. So 
>>>>> we replace the infinite timeout with a meaningful timeout. CI now 
>>>>> correctly marks the single test as failing, captures all the 
>>>>> correct logs, creates a useful bug report and continues on testing 
>>>>> more stuff.
>>>>
>>>> So what is preventing CI from collecting logs if IGT is forever stuck in 
>>>> interruptible wait? Surely it can collect the logs at that point if 
>>>> the kernel is healthy enough. If it isn't then I don't see how 
>>>> wedging the GPU will make the kernel any healthier.
>>>>
>>>> Is i915 preventing better log collection or could test runner be 
>>>> improved?
>>>>
>>>>> Sure, there is still the chance of hitting an infinite timeout. But 
>>>>> that one is significantly more complicated to remove. And the 
>>>>> chances of hitting that one are significantly smaller than the 
>>>>> chances of hitting the first one.
>>>>
>>>> This statement relies on intimate knowledge of implementation 
>>>> details and a bit too much of a white box testing approach, but 
>>>> that's okay, let's move past this one.
>>>>
>>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>>> failures?
>>>>
>>>> I am clearly not arguing for that. But we are also not talking about 
>>>> "fixing failures" here. Just how to make CI cope better with a class 
>>>> of i915 bugs.
>>>>
>>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>>>>> least a KILL signal if not any signal. Although it's not entirely 
>>>>>>> clear from trying to follow the implementation of this code. 
>>>>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>>>>> that code (or why it wasn't already written with a timeout 
>>>>>>> included). Someone more familiar with the wakeref internals would 
>>>>>>> need to comment.
>>>>>>>
>>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>>> just because it is possible to work around the issue by re-writing 
>>>>>>> the CI framework. Feel free to bring a redesign plan to the IGT 
>>>>>>> WG and whatever equivalent CI meetings in parallel. But we 
>>>>>>> absolutely should not have infinite waits in the kernel if there 
>>>>>>> is a trivial way to not have infinite waits.
>>>>>>
>>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>>
>>>>>> The rest of the paragraph I don't really care about - the point is 
>>>>>> moot because it's debugfs so we can do whatever, as long as it is 
>>>>>> not burdensome to i915, which this isn't. If either wasn't the case 
>>>>>> then we certainly wouldn't be adding any workarounds in the kernel 
>>>>>> if it can be achieved in IGT.
>>>>>>
>>>>>>> Also, sending a signal does not result in the wedge happening. I 
>>>>>>> specifically did not want to change that code path because I was 
>>>>>>> assuming there was a valid reason for it. If you have been 
>>>>>>> interrupted then you are in the territory of maybe it would have 
>>>>>>> succeeded if you just left it for a moment longer. Whereas, 
>>>>>>> hitting the timeout says that someone very deliberately said this 
>>>>>>> is too long to wait and therefore the system must be broken.
>>>>>>
>>>>>> I wanted to know specifically about wedging - why can't you 
>>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>>> wherever, if that's what you say is the right thing? 
>>>>> Huh?
>>>>>
>>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>>> infinite to finite + wedge. One that would take considerable effort 
>>>>> to change and would be quite invasive to a lot more of the driver 
>>>>> and which can only be hit if the first timeout actually completed 
>>>>> successfully and is therefore of less importance anyway. Both of 
>>>>> those time outs appear to respect signal interrupts.
>>>>>
>>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>>
>>>>> Because this is a debug test interface that is used solely by IGT 
>>>>> after it has finished its testing. This is not about wedging the 
>>>>> device at some random arbitrary point because an AI compute 
>>>>> workload takes three hours to complete. This is about a very 
>>>>> specific test framework cleaning up after testing is completed and 
>>>>> making sure the test did not fry the system.
>>>>>
>>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>>> seconds of work through and then calling a debug only interface to 
>>>>> flush it out. If a test wants to verify that the system can cope 
>>>>> with submitting a minute's worth of rendering and then waiting for 
>>>>> it to complete then the test should be using official channels for 
>>>>> that wait.
>>>>>
>>>>>>
>>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>>> correct operation!
>>>>>>
>>>>>> Why wouldn't an infinite wait be valid? Then you'd better change 
>>>>>> the other one as well. ;P
>>>>> In what universe is it ever valid to wait forever for a test to 
>>>>> complete?
>>>>
>>>> Well above you claimed both paths respect SIGINT. If that is so then 
>>>> the wait is as infinite as the IGT wanted it to be.
>>>>
>>>>> See above, the PM code would require much more invasive changes. 
>>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>>> change to a very self contained section of code that would provide 
>>>>> significant benefit to debugging a small class of very hard to 
>>>>> debug problems.
>>>>
>>>> Sure, but I'd still like to know why you can't do what you want from 
>>>> the IGT framework.
>>>>
>>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>>> seconds is enough to not break something by accident.
>>> CI showed no regressions. And if someone does find a valid reason why 
>>> a post test drop caches call should legitimately take a stupidly long 
>>> time then it is easy to track back where the ETIME error came from 
>>> and bump the timeout.
>>>
>>>>
>>>> With that change you already have broken the "infinite wait". It 
>>>> makes the debugfs write return -ETIME in time much shorter than the 
>>>> test runner timeout(s). What is the thing that you cannot do from 
>>>> IGT at that point is my question? You want to wedge then? Send 
>>>> DROP_RESET_ACTIVE to do it for you? If that doesn't work add a new 
>>>> flag which will wedge explicitly.
>>>>
>>>> We are again degrading into a huge philosophical discussion and all 
>>>> I wanted to start with is to hear how exactly things go bad.
>>>>
>>> I have no idea what you are wanting. I am trying to have a technical 
>>> discussion about improving the stability of the driver during CI 
>>> testing. I have no idea if you are arguing that this change is good, 
>>> bad, broken, wrong direction or what.
>>>
>>> Things go bad as explained in the commit message. The CI framework 
>>> does not use signals. The IGT framework does not use signals. There 
>>> is no watchdog that sends a TERM or KILL signal after a specified 
>>> timeout. All that happens is the IGT sits there forever waiting for 
>>> the drop caches IOCTL to return. The CI framework eventually gives up 
>>> waiting for the test to complete and tries to recover. There are many 
>>> different CI frameworks in use across Intel. Some timeout quickly, 
>>> some timeout slowly. But basically, they all eventually give up and 
>>> don't bother trying any kind of remedial action but just hit the 
>>> reset button (sometimes by literally power cycling the DUT). As a 
>>> result, background processes that are saving dmesg, stdout, etc. do 
>>> not necessarily terminate cleanly. That results in logs that are at 
>>> best truncated, at worst missing entirely. It also results in some 
>>> frameworks aborting testing at that point. So no results are 
>>> generated for all the other tests that have yet to be run. Some 
>>> frameworks also run tests in batches. All they log is that something, 
>>> somewhere in the batch died. So you don't even know which specific 
>>> test actually hit the problem.
>>>
>>> Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
>>> that something we have the ability to do with a simple patch? No. 
>>> Would re-writing the IGT framework to add watchdog mechanisms improve 
>>> things? Yes. Can it be done with a simple patch? No. Would a simple 
>>> patch to i915 significantly improve the situation? Yes. Will it solve 
>>> every possible CI hang? No. Will it fix any actual end user visible 
>>> bugs? No. Will it introduce any new bugs? No. Will it help us to 
>>> debug at least some CI failures? Yes.
>>
>> To unblock, I suggest you go with the patch which caps the wait only, 
>> and propose wedging as an IGT patch to gem_quiescent_gpu(). That 
>> should bring the CI/IGT folks into the discussion on what logs will 
>> be, or will not be, collected once gem_quiescent_gpu() fails due to 
>> -ETIME. In fact you should probably copy the CI/IGT folks on the v2 
>> of the i915 patch as well, since I now think their acks would be good 
>> to have - from the point of view of the current test runner behaviour 
>> with hanging tests.
>>
> Simply returning -ETIME without wedging will actually make the situation 
> worse. At the moment, you get 'all testing stopped due to machine not 
> responding' bugs being logged. Which is a right pain and has very little 
> useful information, but at least is not claiming random tests are broken 
> when they are not. If you return ETIME without wedging then test A will 

Several times I asked why you can't wedge from gem_quiescent_gpu(), since 
that is done on driver open. So the chain of failing tests described 
below is not relevant to my question.

The whole point is: why add policy to i915 if it can be done from 
userspace? The current API is called "wait for idle", not "wait for idle 
ten seconds max" (although that is fine since IGT will fail on a timeout 
already), and not "wait for idle or wedge, sometimes".

Yes, it's only debugfs, I said that multiple times already, so it could 
be whatever, but in principle adding code to the kernel should always be 
the 2nd option. Especially since the implementation is only a 50% kludge 
(I am referring to the 2nd DROP_IDLE branch, where the proposal does not 
add a timeout or wedging). So a half-policy even: wedge if this stage of 
DROP_IDLE timed out, but don't wedge if this other stage of DROP_IDLE 
timed out or failed.

Which is why I was saying: if signals would be respected anyway, why 
couldn't you do the whole thing in IGT to start with... "wrap" 
gem_quiescent_gpu with alarm(2), send SIGINT, wedge, whatever. If that 
works it would have the same effect. And the policy would live where it 
belongs, with zero kernel code required. If it works... I haven't 
checked; you said it would, though, so what would be wrong with this 
approach?
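
Something like this, say (untested, error handling hand-waved; 
igt_drop_caches_set() and DROP_RESET_ACTIVE are the existing IGT 
helpers, the rest is illustrative):

    static void on_alarm(int sig)
    {
        /* empty - exists only so the blocking write returns -EINTR */
    }

    struct sigaction sa = { .sa_handler = on_alarm }; /* no SA_RESTART */
    sigaction(SIGALRM, &sa, NULL);

    alarm(10);              /* the timeout policy, chosen by IGT */
    gem_quiescent_gpu(fd);  /* the blocking wait, now interruptible */
    alarm(0);

    /* on interruption, wedge/reset explicitly, e.g. via
     * igt_drop_caches_set(fd, DROP_RESET_ACTIVE) or a new flag */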

And, completely separate from the above line of discussion, I am not 
even sure how the "no logs" issue relates to all this. Sure, some bugs 
result in no logs because the kernel crashes so badly. This patch will 
not improve that.

And if the kernel is not badly broken, the test runner will detect a 
timeout and, sure, all further testing will be invalid. It's not the 
first, or last, or the only way that can happen. There will be logs 
though, so it can be debugged and fixed. (Unless there can't be logs 
anyway.) So you find the first failing test and fix the issue. How often 
does it happen anyway?

Or, if I totally got this wrong, please paste some CI or CIbuglog links 
to put me on the correct path.

Regards,

Tvrtko

> hang and return ETIME. CI will log an ETIME bug against test A. CI will 
> then try test B, which will fail with ETIME because the system is still 
> broken but claiming to be working. So log a new bug against test B. Move 
> on to test C, oh look, ETIME - log another bug and move on to test D... 
> That is far worse, a whole slew of pointless and incorrect bugs have 
> just been logged.
> 
> And how is it possibly considered a backwards breaking or dangerous 
> change to wedge instead of hanging forever? Reboot versus wedge. 
> Absolutely no defined behaviour at all because the system has simply 
> stopped versus marking the system as broken and having a best effort at 
> handling the situation. Yup, that's definitely a very dangerous change 
> that could break all sorts of random user applications.
> 
> Re 'IGT folks' - whom? Ashutosh had already agreed to the original patch.
> 
> And CI folks are certainly aware of such issues. There are any number of 
> comments in Jiras about 'no logs available, cannot analyse'.
> 
> John.
> 
> 
>> Regards,
>>
>> Tvrtko
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-09 11:35                           ` Tvrtko Ursulin
  0 siblings, 0 replies; 31+ messages in thread
From: Tvrtko Ursulin @ 2022-11-09 11:35 UTC (permalink / raw)
  To: John Harrison, Jani Nikula, Intel-GFX; +Cc: DRI-Devel


On 08/11/2022 19:37, John Harrison wrote:
> On 11/8/2022 01:08, Tvrtko Ursulin wrote:
>> On 07/11/2022 19:45, John Harrison wrote:
>>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>>> On 04/11/2022 17:45, John Harrison wrote:
>>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>
>>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>>> sysfs with
>>>>>>>>>>>>
>>>>>>>>>>>> sysfs?
>>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> special flags set. One of the possible paths waits for idle 
>>>>>>>>>>>>> with an
>>>>>>>>>>>>> infinite timeout. That causes problems for debugging issues 
>>>>>>>>>>>>> when CI
>>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>>> system times
>>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions and 
>>>>>>>>>>>>> then
>>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI system 
>>>>>>>>>>>>> can't do
>>>>>>>>>>>>> anything at all and then times out (after 1000s) and simply 
>>>>>>>>>>>>> reboots.
>>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>>> timeout to
>>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>>> with a
>>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>>
>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>>> also only used
>>>>>>>>>>>> here.
>>>>>>>>>>>
>>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>>> proposed one?
>>>>>>>>>> Sure, can do that.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>>> used in
>>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>>
>>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>>
>>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>>
>>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>>
>>>>>>>>>> These two are totally unrelated and in code not being touched 
>>>>>>>>>> by this change. I would rather not conflate changing random 
>>>>>>>>>> other things with fixing this specific issue.
>>>>>>>>>>
>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>>
>>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>>
>>>>>>>>>>>> My head spins.
>>>>>>>>>>>
>>>>>>>>>>> I follow and raise that the newly proposed DROP_IDLE_TIMEOUT 
>>>>>>>>>>> applies to DROP_ACTIVE and not only DROP_IDLE.
>>>>>>>>>> My original intention for the name was that is the 'drop 
>>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite the 
>>>>>>>>>> mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But yes, 
>>>>>>>>>> I realised later that name can be conflated with the DROP_IDLE 
>>>>>>>>>> flag. Will rename.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As long 
>>>>>>>>>>> as people are taking a wider view when touching the code 
>>>>>>>>>>> base, and are not afraid to send cleanups, things should be 
>>>>>>>>>>> good.
>>>>>>>>>> On the other hand, if every patch gets blocked in code review 
>>>>>>>>>> because someone points out some completely unrelated piece of 
>>>>>>>>>> code could be a bit better then nothing ever gets fixed. If 
>>>>>>>>>> you spot something that you think should be improved, isn't 
>>>>>>>>>> the general idea that you should post a patch yourself to 
>>>>>>>>>> improve it?
>>>>>>>>>
>>>>>>>>> There's two maintainers per branch and an order of magnitude or 
>>>>>>>>> two more developers so it'd be nice if cleanups would just be 
>>>>>>>>> incoming on self-initiative basis. ;)
>>>>>>>>>
>>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>>
>>>>>>>>>> The code paths in question will already abort on a signal 
>>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>>> 'if(signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>>> and number of groups involved) and unnecessary.
>>>>>>>>>
>>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>>> what is the patch fixing? I assumed you are trying to avoid the 
>>>>>>>>> write stuck in D forever, which then prevents driver unload and 
>>>>>>>>> everything, requiring the test runner to eventually reboot. If 
>>>>>>>>> you say SIGINT works then you can already recover from 
>>>>>>>>> userspace, no?
>>>>>>>>>
>>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>>> probably err on the side of safety and make it longer, but at 
>>>>>>>>>>> most half from the test runner timeout.
>>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>>> long a particular test takes to complete but about how long it 
>>>>>>>>>> takes to declare the system broken after the test has already 
>>>>>>>>>> finished. I would argue that even 10s is massively longer than 
>>>>>>>>>> required.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>>> Conceptually could be just that the timeout is too short. 
>>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>>> wait, when latter AFAIU is the key factor which would prevent 
>>>>>>>>>>> the need to reboot the machine?
>>>>>>>>>>>
>>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>>> can't idle the GT then something is very badly wrong. Wedging 
>>>>>>>>>> indicates that. It also ensure that a full GT reset will be 
>>>>>>>>>> attempted before the next test is run. Helping to prevent a 
>>>>>>>>>> failure on test X from propagating into failures of unrelated 
>>>>>>>>>> tests X+1, X+2, ... And if the GT reset does not work because 
>>>>>>>>>> the system is really that badly broken then future tests will 
>>>>>>>>>> not run rather than report erroneous failures.
>>>>>>>>>>
>>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>>> debugfs entry points. The sole motivation here is to get more 
>>>>>>>>>> accurate results from CI. That is, correctly identifying which 
>>>>>>>>>> test has hit a problem, getting valid debug analysis for that 
>>>>>>>>>> test (logs and such) and allowing further testing to complete 
>>>>>>>>>> correctly in the case where the system can be recovered.
>>>>>>>>>
>>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>>> just want a clear statement if this is something IGT / test 
>>>>>>>>> runner could already do or not. It can apply a timeout, it can 
>>>>>>>>> also send SIGINT, and it could even trigger a reset from 
>>>>>>>>> outside. Sure it is debugfs hacks so general "kernel should not 
>>>>>>>>> implement policy" need not be strictly followed, but lets have 
>>>>>>>>> it clear what are the options.
>>>>>>>>
>>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>>> code is:
>>>>>>>>
>>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>>         if (ret)
>>>>>>>>             return ret;
>>>>>>>>     }
>>>>>>>>
>>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>>         if (ret)
>>>>>>>>             return ret;
>>>>>>>>     }
>>>>>>>>
>>>>>>>> So if someone passes in DROP_IDLE and then why would only the 
>>>>>>>> first branch have a short timeout and wedge. Yeah some bug 
>>>>>>>> happens to be there at the moment, but put a bug in a different 
>>>>>>>> place and you hang on the second branch and then need another 
>>>>>>>> patch. Versus perhaps making it all respect SIGINT and handle 
>>>>>>>> from outside.
>>>>>>>>
>>>>>>> The pm_wait_for_idle is can only called after gt_wait_for_idle 
>>>>>>> has completed successfully. There is no route to skip the GT idle 
>>>>>>> or to do the PM idle even if the GT idle fails. So the chances of 
>>>>>>> the PM idle failing are greatly reduced. There would have to be 
>>>>>>> something outside of a GT keeping the GPU awake and there isn't a 
>>>>>>> whole lot of hardware left at that point!
>>>>>>
>>>>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>>>>> here and we add a timeout, tomorrow bug is there and then the same 
>>>>>> dance. It can be just a sw bug which forgets to release the pm ref 
>>>>>> in some circumstances, doesn't really matter.
>>>>>>
>>>>> Huh?
>>>>>
>>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>>> causes a kernel hang which requires the CI framework to reboot the 
>>>>> system in an extremely unfriendly way which makes it very hard to 
>>>>> work out what happened. Logs are likely not available. We don't 
>>>>> even necessarily know which test was being run at the time. Etc. So 
>>>>> we replace the infinite timeout with a meaningful timeout. CI now 
>>>>> correctly marks the single test as failing, captures all the 
>>>>> correct logs, creates a useful bug report and continues on testing 
>>>>> more stuff.
>>>>
>>>> So what is preventing CI to collect logs if IGT is forever stuck in 
>>>> interruptible wait? Surely it can collect the logs at that point if 
>>>> the kernel is healthy enough. If it isn't then I don't see how 
>>>> wedging the GPU will make the kernel any healthier.
>>>>
>>>> Is i915 preventing better log collection or could test runner be 
>>>> improved?
>>>>
>>>>> Sure, there is still the chance of hitting an infinite timeout. But 
>>>>> that one is significantly more complicated to remove. And the 
>>>>> chances of hitting that one are significantly smaller than the 
>>>>> chances of hitting the first one.
>>>>
>>>> This statement relies on intimate knowledge implementation details 
>>>> and a bit too much white box testing approach but that's okay, lets 
>>>> move past this one.
>>>>
>>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>>> failures?
>>>>
>>>> I am clearly not arguing for that. But we are also not talking about 
>>>> "fixing failures" here. Just how to make CI cope better with a class 
>>>> of i915 bugs.
>>>>
>>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>>> wait_var_event_killable(). I assume that is interruptible via at 
>>>>>>> least a KILL signal if not any signal. Although it's not entirely 
>>>>>>> clear trying to follow through the implementation of this code. 
>>>>>>> Also, I have no idea if there is a safe way to add a timeout to 
>>>>>>> that code (or why it wasn't already written with a timeout 
>>>>>>> included). Someone more familiar with the wakeref internals would 
>>>>>>> need to comment.
>>>>>>>
>>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>>> just because it is possible to workaround the issue by re-writing 
>>>>>>> the CI framework. Feel free to bring a redesign plan to the IGT 
>>>>>>> WG and whatever equivalent CI meetings in parallel. But we 
>>>>>>> absolutely should not have infinite waits in the kernel if there 
>>>>>>> is a trivial way to not have infinite waits.
>>>>>>
>>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>>
>>>>>> The rest of the paragraph I don't really care - point is moot 
>>>>>> because it's debugfs so we can do whatever, as long as it is not 
>>>>>> burdensome to i915, which this isn't. If either wasn't the case 
>>>>>> then we certainly wouldn't be adding any workarounds in the kernel 
>>>>>> if it can be achieved in IGT.
>>>>>>
>>>>>>> Also, sending a signal does not result in the wedge happening. I 
>>>>>>> specifically did not want to change that code path because I was 
>>>>>>> assuming there was a valid reason for it. If you have been 
>>>>>>> interrupted then you are in the territory of maybe it would have 
>>>>>>> succeeded if you just left it for a moment longer. Whereas, 
>>>>>>> hitting the timeout says that someone very deliberately said this 
>>>>>>> is too long to wait and therefore the system must be broken.
>>>>>>
>>>>>> I wanted to know specifically about wedging - why can't you 
>>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>>> wherever, if that's what you say is the right thing? 
>>>>> Huh?
>>>>>
>>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>>> infinite to finite + wedge. One that would take considerable effort 
>>>>> to change and would be quite invasive to a lot more of the driver 
>>>>> and which can only be hit if the first timeout actually completed 
>>>>> successfully and is therefore of less importance anyway. Both of 
>>>>> those time outs appear to respect signal interrupts.
>>>>>
>>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>>
>>>>> Because this is a debug test interface that is used solely by IGT 
>>>>> after it has finished its testing. This is not about wedging the 
>>>>> device at some random arbitrary point because an AI compute 
>>>>> workload takes three hours to complete. This is about a very 
>>>>> specific test framework cleaning up after testing is completed and 
>>>>> making sure the test did not fry the system.
>>>>>
>>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>>> seconds of work through and then calling a debug only interface to 
>>>>> flush it out. If a test wants to verify that the system can cope 
>>>>> with submitting a minutes worth of rendering and then waiting for 
>>>>> it to complete then the test should be using official channels for 
>>>>> that wait.
>>>>>
>>>>>>
>>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>>> correct operation!
>>>>>>
>>>>>> Why infinite wait wouldn't be valid? Then you better change the 
>>>>>> other one as well. ;P
>>>>> In what universe is it ever valid to wait forever for a test to 
>>>>> complete?
>>>>
>>>> Well above you claimed both paths respect SIGINT. If that is so then 
>>>> the wait is as infinite as the IGT wanted it to be.
>>>>
>>>>> See above, the PM code would require much more invasive changes. 
>>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>>> change to a very self contained section of code that would provide 
>>>>> significant benefit to debugging a small class of very hard to 
>>>>> debug problems.
>>>>
>>>> Sure, but I'd still like to know why you can't do what you want from 
>>>> the IGT framework.
>>>>
>>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>>> seconds is enough to not break something by accident.
>>> CI showed no regressions. And if someone does find a valid reason why 
>>> a post test drop caches call should legitimately take a stupidly long 
>>> time then it is easy to track back where the ETIME error came from 
>>> and bump the timeout.
>>>
>>>>
>>>> With that change you already have broken the "infinite wait". It 
>>>> makes the debugfs write return -ETIME in a time much shorter than the 
>>>> test runner timeout(s). What is the thing that you cannot do from 
>>>> IGT at that point? That is my question. You want to wedge then? Send 
>>>> DROP_RESET_ACTIVE to do it for you? If that doesn't work add a new 
>>>> flag which will wedge explicitly.
>>>>
>>>> We are again degrading into a huge philosophical discussion and all 
>>>> I wanted to start with is to hear how exactly things go bad.
>>>>
>>> I have no idea what you are wanting. I am trying to have a technical 
>>> discussion about improving the stability of the driver during CI 
>>> testing. I have no idea if you are arguing that this change is good, 
>>> bad, broken, wrong direction or what.
>>>
>>> Things go bad as explained in the commit message. The CI framework 
>>> does not use signals. The IGT framework does not use signals. There 
>>> is no watchdog that sends a TERM or KILL signal after a specified 
>>> timeout. All that happens is the IGT sits there forever waiting for 
>>> the drop caches debugfs write to return. The CI framework eventually gives up 
>>> waiting for the test to complete and tries to recover. There are many 
>>> different CI frameworks in use across Intel. Some time out quickly, 
>>> some time out slowly. But basically, they all eventually give up and 
>>> don't bother trying any kind of remedial action but just hit the 
>>> reset button (sometimes by literally power cycling the DUT). As a 
>>> result, background processes that are saving dmesg, stdout, etc. do 
>>> not necessarily terminate cleanly. That results in logs that are at 
>>> best truncated, at worst missing entirely. It also results in some 
>>> frameworks aborting testing at that point. So no results are 
>>> generated for all the other tests that have yet to be run. Some 
>>> frameworks also run tests in batches. All they log is that something, 
>>> somewhere in the batch died. So you don't even know which specific 
>>> test actually hit the problem.
>>>
>>> Can the CI frameworks be improved? Undoubtedly. In very many ways. Is 
>>> that something we have the ability to do with a simple patch? No. 
>>> Would re-writing the IGT framework to add watchdog mechanisms improve 
>>> things? Yes. Can it be done with a simple patch? No. Would a simple 
>>> patch to i915 significantly improve the situation? Yes. Will it solve 
>>> every possible CI hang? No. Will it fix any actual end user visible 
>>> bugs? No. Will it introduce any new bugs? No. Will it help us to 
>>> debug at least some CI failures? Yes.
>>
>> To unblock, I suggest you go with the patch which caps the wait only, 
>> and propose the wedging as an IGT patch to gem_quiescent_gpu(). That 
>> should involve the CI/IGT folks in the discussion on what logs will, 
>> or will not, be collected once gem_quiescent_gpu() fails due to 
>> -ETIME. In fact you should probably copy the CI/IGT folks on the v2 
>> of the i915 patch as well, since I now think their acks would be good 
>> to have - from the point of view of the current test runner behaviour 
>> with hanging tests.
>>
> Simply returning -ETIME without wedging will actually make the situation 
> worse. At the moment, you get 'all testing stopped due to machine not 
> responding' bugs being logged. Which is a right pain and has very little 
> useful information, but at least is not claiming random tests are broken 
> when they are not. If you return ETIME without wedging then test A will 

Several times I asked why you can't wedge from gem_quiescent_gpu() since 
that is done on driver open. So the chain of failing tests described 
below is not relevant to my question.

The whole point is: why add policy to i915 if it can be done from 
userspace? The current API is called "wait for idle", not "wait for 
idle ten seconds max" (although that part is fine, since IGT will fail 
on the timeout already), and not "wait for idle or wedge, sometimes".

Yes it's only debugfs, I said that multiple times already, so it could 
be whatever, but in principle adding code to kernel should always be the 
2nd option. Especially since the implementation is only a 50% kludge (I 
am referring to the 2nd DROP_IDLE branch where the proposal does not add 
a timeout or wedging). So a half-policy even. Wedge if this stage of 
DROP_IDLE timed out, but don't wedge if this other stage of DROP_IDLE 
timed out or failed.
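
To make the half-policy concrete, here are the two waits again, with 
what the proposal does to each added as comments (this is my reading of 
the proposal, a sketch only, not the actual patch):

    if (val & (DROP_IDLE | DROP_ACTIVE)) {
        /* Proposal: cap this wait at ~10s and wedge on -ETIME. */
        ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
        if (ret)
            return ret;
    }

    if (val & DROP_IDLE) {
        /* Proposal: untouched - unbounded wait, no wedge on failure. */
        ret = intel_gt_pm_wait_for_idle(gt);
        if (ret)
            return ret;
    }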

Which is why I was saying: if signals would be respected anyway, why 
couldn't you do the whole thing in IGT to start with? "Wrap" 
gem_quiescent_gpu() with alarm(2), send SIGINT, wedge, whatever. If 
that works it would have the same effect, with policy where it belongs 
and zero kernel code required. If it works... I haven't checked, but 
you said it would, so what would be wrong with this approach?
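
Roughly what I have in mind, as a completely untested sketch. It uses 
plain POSIX rather than the IGT helpers just to show the shape, and it 
assumes the kernel waits really do return -EINTR on a signal as you 
said; the debugfs path and the DROP_IDLE value are my reading of a 
current tree, so verify both:

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void on_alarm(int sig)
    {
        (void)sig; /* handler exists only so SIGALRM interrupts the write */
    }

    /* Write 'flags' to the drop-caches debugfs file, giving up after
     * 'seconds' by letting SIGALRM interrupt the blocking write. */
    static int drop_caches_with_timeout(const char *path, const char *flags,
                                        unsigned int seconds)
    {
        struct sigaction sa;
        int fd, ret = 0;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_alarm; /* deliberately no SA_RESTART */
        sigaction(SIGALRM, &sa, NULL);

        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;

        alarm(seconds);
        if (write(fd, flags, strlen(flags)) < 0)
            ret = -1; /* EINTR here means the idle wait did not finish */
        alarm(0);

        close(fd);
        return ret;
    }

    int main(void)
    {
        /* Path and flag value are assumptions - check your tree. */
        if (drop_caches_with_timeout(
                "/sys/kernel/debug/dri/0/i915_gem_drop_caches",
                "0x40" /* DROP_IDLE */, 10))
            fprintf(stderr, "GPU did not idle in 10s - wedge/reset from here\n");
        return 0;
    }

If something like that works, the timeout and the wedge-or-not decision 
both stay in the test framework, where the policy arguably belongs.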

And, completely separate from the above line of discussion, I am not 
even sure how the "no logs" problem relates to all this. Sure, some 
bugs result in no logs because the kernel crashes so badly. This patch 
will not improve that.

And if the kernel is not badly broken, the test runner will detect a 
timeout and, sure, all further testing will be invalid. It's not the 
first, or last, or the only way that can happen. There will be logs 
though, so it can be debugged and fixed. (Unless there can't be logs 
anyway.) So you find the first failing test and fix the issue. How 
often does it happen anyway?

Or if I totally got this wrong please paste some CI or CIbuglog links to 
put me on the correct path.

Regards,

Tvrtko

> hang and return ETIME. CI will log an ETIME bug against test A. CI will 
> then try test B, which will fail with ETIME because the system is still 
> broken but claiming to be working. So log a new bug against test B. Move 
> on to test C, oh look, ETIME - log another bug and move on to test D... 
> That is far worse, a whole slew of pointless and incorrect bugs have 
> just been logged.
> 
> And how is it possibly considered a backwards breaking or dangerous 
> change to wedge instead of hanging forever? Reboot versus wedge. 
> Absolutely no defined behaviour at all because the system has simply 
> stopped versus marking the system as broken and having a best effort at 
> handling the situation. Yup, that's definitely a very dangerous change 
> that could break all sorts of random user applications.
> 
> Re 'IGT folks' - whom? Ashutosh had already agreed to the original patch.
> 
> And CI folks are certainly aware of such issues. There are any number of 
> comments in Jiras about 'no logs available, cannot analyse'.
> 
> John.
> 
> 
>> Regards,
>>
>> Tvrtko
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
  2022-11-09 11:35                           ` Tvrtko Ursulin
@ 2022-11-10  6:20                             ` John Harrison
  -1 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-10  6:20 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: Ewins, Jon, DRI-Devel

On 11/9/2022 03:35, Tvrtko Ursulin wrote:
> On 08/11/2022 19:37, John Harrison wrote:
>> On 11/8/2022 01:08, Tvrtko Ursulin wrote:
>>> On 07/11/2022 19:45, John Harrison wrote:
>>>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>>>> On 04/11/2022 17:45, John Harrison wrote:
>>>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>>>> sysfs with
>>>>>>>>>>>>>
>>>>>>>>>>>>> sysfs?
>>>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> special flags set. One of the possible paths waits for 
>>>>>>>>>>>>>> idle with an
>>>>>>>>>>>>>> infinite timeout. That causes problems for debugging 
>>>>>>>>>>>>>> issues when CI
>>>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>>>> system times
>>>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions 
>>>>>>>>>>>>>> and then
>>>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI 
>>>>>>>>>>>>>> system can't do
>>>>>>>>>>>>>> anything at all and then times out (after 1000s) and 
>>>>>>>>>>>>>> simply reboots.
>>>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>>>> timeout to
>>>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>>>> with a
>>>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>>>
>>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>>>> also only used
>>>>>>>>>>>>> here.
>>>>>>>>>>>>
>>>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>>>> proposed one?
>>>>>>>>>>> Sure, can do that.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>>>> used in
>>>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>>>
>>>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>>>
>>>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>>>
>>>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>>>
>>>>>>>>>>> These two are totally unrelated and in code not being 
>>>>>>>>>>> touched by this change. I would rather not conflate changing 
>>>>>>>>>>> random other things with fixing this specific issue.
>>>>>>>>>>>
>>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>>>
>>>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>>>
>>>>>>>>>>>>> My head spins.
>>>>>>>>>>>>
>>>>>>>>>>>> I follow and raise that the newly proposed 
>>>>>>>>>>>> DROP_IDLE_TIMEOUT applies to DROP_ACTIVE and not only 
>>>>>>>>>>>> DROP_IDLE.
>>>>>>>>>>> My original intention for the name was that it is the 'drop 
>>>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite 
>>>>>>>>>>> the mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But 
>>>>>>>>>>> yes, I realised later that name can be conflated with the 
>>>>>>>>>>> DROP_IDLE flag. Will rename.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As 
>>>>>>>>>>>> long as people are taking a wider view when touching the 
>>>>>>>>>>>> code base, and are not afraid to send cleanups, things 
>>>>>>>>>>>> should be good.
>>>>>>>>>>> On the other hand, if every patch gets blocked in code 
>>>>>>>>>>> review because someone points out some completely unrelated 
>>>>>>>>>>> piece of code could be a bit better then nothing ever gets 
>>>>>>>>>>> fixed. If you spot something that you think should be 
>>>>>>>>>>> improved, isn't the general idea that you should post a 
>>>>>>>>>>> patch yourself to improve it?
>>>>>>>>>>
>>>>>>>>>> There's two maintainers per branch and an order of magnitude 
>>>>>>>>>> or two more developers so it'd be nice if cleanups would just 
>>>>>>>>>> be incoming on self-initiative basis. ;)
>>>>>>>>>>
>>>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>>>
>>>>>>>>>>> The code paths in question will already abort on a signal 
>>>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>>>> 'if(signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>>>> and number of groups involved) and unnecessary.
>>>>>>>>>>
>>>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>>>> what is the patch fixing? I assumed you are trying to avoid 
>>>>>>>>>> the write stuck in D forever, which then prevents driver 
>>>>>>>>>> unload and everything, requiring the test runner to 
>>>>>>>>>> eventually reboot. If you say SIGINT works then you can 
>>>>>>>>>> already recover from userspace, no?
>>>>>>>>>>
>>>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>>>> probably err on the side of safety and make it longer, but 
>>>>>>>>>>>> at most half of the test runner timeout.
>>>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>>>> long a particular test takes to complete but about how long 
>>>>>>>>>>> it takes to declare the system broken after the test has 
>>>>>>>>>>> already finished. I would argue that even 10s is massively 
>>>>>>>>>>> longer than required.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>>>> Conceptually it could be just that the timeout is too short. 
>>>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>>>> wait, when latter AFAIU is the key factor which would 
>>>>>>>>>>>> prevent the need to reboot the machine?
>>>>>>>>>>>>
>>>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>>>> can't idle the GT then something is very badly wrong. 
>>>>>>>>>>> Wedging indicates that. It also ensures that a full GT reset 
>>>>>>>>>>> will be attempted before the next test is run. Helping to 
>>>>>>>>>>> prevent a failure on test X from propagating into failures 
>>>>>>>>>>> of unrelated tests X+1, X+2, ... And if the GT reset does 
>>>>>>>>>>> not work because the system is really that badly broken then 
>>>>>>>>>>> future tests will not run rather than report erroneous 
>>>>>>>>>>> failures.
>>>>>>>>>>>
>>>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>>>> debugfs entry points. The sole motivation here is to get 
>>>>>>>>>>> more accurate results from CI. That is, correctly 
>>>>>>>>>>> identifying which test has hit a problem, getting valid 
>>>>>>>>>>> debug analysis for that test (logs and such) and allowing 
>>>>>>>>>>> further testing to complete correctly in the case where the 
>>>>>>>>>>> system can be recovered.
>>>>>>>>>>
>>>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>>>> just want a clear statement if this is something IGT / test 
>>>>>>>>>> runner could already do or not. It can apply a timeout, it 
>>>>>>>>>> can also send SIGINT, and it could even trigger a reset from 
>>>>>>>>>> outside. Sure it is debugfs hacks so general "kernel should 
>>>>>>>>>> not implement policy" need not be strictly followed, but let's 
>>>>>>>>>> be clear about what the options are.
>>>>>>>>>
>>>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>>>> code is:
>>>>>>>>>
>>>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>>>         if (ret)
>>>>>>>>>             return ret;
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>>>         if (ret)
>>>>>>>>>             return ret;
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>> So if someone passes in DROP_IDLE, why would only the 
>>>>>>>>> first branch have a short timeout and wedge? Yeah, some bug 
>>>>>>>>> happens to be there at the moment, but put a bug in a 
>>>>>>>>> different place and you hang on the second branch and then 
>>>>>>>>> need another patch. Versus perhaps making it all respect 
>>>>>>>>> SIGINT and handle from outside.
>>>>>>>>>
>>>>>>>> The pm_wait_for_idle can only be called after gt_wait_for_idle 
>>>>>>>> has completed successfully. There is no route to skip the GT 
>>>>>>>> idle or to do the PM idle even if the GT idle fails. So the 
>>>>>>>> chances of the PM idle failing are greatly reduced. There would 
>>>>>>>> have to be something outside of a GT keeping the GPU awake and 
>>>>>>>> there isn't a whole lot of hardware left at that point!
>>>>>>>
>>>>>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>>>>>> here and we add a timeout, tomorrow bug is there and then the 
>>>>>>> same dance. It can be just a sw bug which forgets to release the 
>>>>>>> pm ref in some circumstances, doesn't really matter.
>>>>>>>
>>>>>> Huh?
>>>>>>
>>>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>>>> causes a kernel hang which requires the CI framework to reboot 
>>>>>> the system in an extremely unfriendly way which makes it very 
>>>>>> hard to work out what happened. Logs are likely not available. We 
>>>>>> don't even necessarily know which test was being run at the time. 
>>>>>> Etc. So we replace the infinite timeout with a meaningful 
>>>>>> timeout. CI now correctly marks the single test as failing, 
>>>>>> captures all the correct logs, creates a useful bug report and 
>>>>>> continues on testing more stuff.
>>>>>
>>>>> So what is preventing CI from collecting logs if IGT is forever stuck 
>>>>> in interruptible wait? Surely it can collect the logs at that 
>>>>> point if the kernel is healthy enough. If it isn't then I don't 
>>>>> see how wedging the GPU will make the kernel any healthier.
>>>>>
>>>>> Is i915 preventing better log collection or could the test runner be 
>>>>> improved?
>>>>>
>>>>>> Sure, there is still the chance of hitting an infinite timeout. 
>>>>>> But that one is significantly more complicated to remove. And the 
>>>>>> chances of hitting that one are significantly smaller than the 
>>>>>> chances of hitting the first one.
>>>>>
>>>>> This statement relies on intimate knowledge of implementation 
>>>>> details and a bit too much of a white box testing approach, but 
>>>>> that's okay, let's move past this one.
>>>>>
>>>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>>>> failures?
>>>>>
>>>>> I am clearly not arguing for that. But we are also not talking 
>>>>> about "fixing failures" here. Just how to make CI cope better with 
>>>>> a class of i915 bugs.
>>>>>
>>>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>>>> wait_var_event_killable(). I assume that is interruptible via 
>>>>>>>> at least a KILL signal if not any signal. Although it's not 
>>>>>>>> entirely clear trying to follow through the implementation of 
>>>>>>>> this code. Also, I have no idea if there is a safe way to add a 
>>>>>>>> timeout to that code (or why it wasn't already written with a 
>>>>>>>> timeout included). Someone more familiar with the wakeref 
>>>>>>>> internals would need to comment.
>>>>>>>>
>>>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>>>> just because it is possible to workaround the issue by 
>>>>>>>> re-writing the CI framework. Feel free to bring a redesign plan 
>>>>>>>> to the IGT WG and whatever equivalent CI meetings in parallel. 
>>>>>>>> But we absolutely should not have infinite waits in the kernel 
>>>>>>>> if there is a trivial way to not have infinite waits.
>>>>>>>
>>>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>>>
>>>>>>> The rest of the paragraph I don't really care - point is moot 
>>>>>>> because it's debugfs so we can do whatever, as long as it is not 
>>>>>>> burdensome to i915, which this isn't. If either wasn't the case 
>>>>>>> then we certainly wouldn't be adding any workarounds in the 
>>>>>>> kernel if it can be achieved in IGT.
>>>>>>>
>>>>>>>> Also, sending a signal does not result in the wedge happening. 
>>>>>>>> I specifically did not want to change that code path because I 
>>>>>>>> was assuming there was a valid reason for it. If you have been 
>>>>>>>> interrupted then you are in the territory of maybe it would 
>>>>>>>> have succeeded if you just left it for a moment longer. 
>>>>>>>> Whereas, hitting the timeout says that someone very 
>>>>>>>> deliberately said this is too long to wait and therefore the 
>>>>>>>> system must be broken.
>>>>>>>
>>>>>>> I wanted to know specifically about wedging - why can't you 
>>>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>>>> wherever, if that's what you say is the right thing? 
>>>>>> Huh?
>>>>>>
>>>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>>>> infinite to finite + wedge. One that would take considerable 
>>>>>> effort to change and would be quite invasive to a lot more of the 
>>>>>> driver and which can only be hit if the first timeout actually 
>>>>>> completed successfully and is therefore of less importance 
>>>>>> anyway. Both of those timeouts appear to respect signal interrupts.
>>>>>>
>>>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>>>
>>>>>> Because this is a debug test interface that is used solely by IGT 
>>>>>> after it has finished its testing. This is not about wedging the 
>>>>>> device at some random arbitrary point because an AI compute 
>>>>>> workload takes three hours to complete. This is about a very 
>>>>>> specific test framework cleaning up after testing is completed 
>>>>>> and making sure the test did not fry the system.
>>>>>>
>>>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>>>> seconds of work through and then calling a debug only interface 
>>>>>> to flush it out. If a test wants to verify that the system can 
>>>>>> cope with submitting a minute's worth of rendering and then 
>>>>>> waiting for it to complete then the test should be using official 
>>>>>> channels for that wait.
>>>>>>
>>>>>>>
>>>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>>>> correct operation!
>>>>>>>
>>>>>>> Why infinite wait wouldn't be valid? Then you better change the 
>>>>>>> other one as well. ;P
>>>>>> In what universe is it ever valid to wait forever for a test to 
>>>>>> complete?
>>>>>
>>>>> Well above you claimed both paths respect SIGINT. If that is so 
>>>>> then the wait is as infinite as the IGT wanted it to be.
>>>>>
>>>>>> See above, the PM code would require much more invasive changes. 
>>>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>>>> change to a very self contained section of code that would 
>>>>>> provide significant benefit to debugging a small class of very 
>>>>>> hard to debug problems.
>>>>>
>>>>> Sure, but I'd still like to know why you can't do what you want 
>>>>> from the IGT framework.
>>>>>
>>>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>>>> seconds is enough to not break something by accident.
>>>> CI showed no regressions. And if someone does find a valid reason 
>>>> why a post test drop caches call should legitimately take a 
>>>> stupidly long time then it is easy to track back where the ETIME 
>>>> error came from and bump the timeout.
>>>>
>>>>>
>>>>> With that change you already have broken the "infinite wait". It 
>>>>> makes the debugfs write return -ETIME in a time much shorter than 
>>>>> the test runner timeout(s). What is the thing that you cannot do 
>>>>> from IGT at that point? That is my question. You want to wedge then? 
>>>>> Send DROP_RESET_ACTIVE to do it for you? If that doesn't work add 
>>>>> a new flag which will wedge explicitly.
>>>>>
>>>>> We are again degrading into a huge philosophical discussion and 
>>>>> all I wanted to start with is to hear how exactly things go bad.
>>>>>
>>>> I have no idea what you are wanting. I am trying to have a 
>>>> technical discussion about improving the stability of the driver 
>>>> during CI testing. I have no idea if you are arguing that this 
>>>> change is good, bad, broken, wrong direction or what.
>>>>
>>>> Things go bad as explained in the commit message. The CI framework 
>>>> does not use signals. The IGT framework does not use signals. There 
>>>> is no watchdog that sends a TERM or KILL signal after a specified 
>>>> timeout. All that happens is the IGT sits there forever waiting for 
>>>> the drop caches debugfs write to return. The CI framework eventually gives 
>>>> up waiting for the test to complete and tries to recover. There are 
>>>> many different CI frameworks in use across Intel. Some time out 
>>>> quickly, some time out slowly. But basically, they all eventually 
>>>> give up and don't bother trying any kind of remedial action but 
>>>> just hit the reset button (sometimes by literally power cycling the 
>>>> DUT). As result, background processes that are saving dmesg, 
>>>> stdout, etc do not necessarily terminate cleanly. That results in 
>>>> logs that are at best truncated, at worst missing entirely. It also 
>>>> results in some frameworks aborting testing at that point. So no 
>>>> results are generated for all the other tests that have yet to be 
>>>> run. Some frameworks also run tests in batches. All they log is 
>>>> that something, somewhere in the batch died. So you don't even know 
>>>> which specific test actually hit the problem.
>>>>
>>>> Can the CI frameworks be improved? Undoubtedly. In very many ways. 
>>>> Is that something we have the ability to do with a simple patch? 
>>>> No. Would re-writing the IGT framework to add watchdog mechanisms 
>>>> improve things? Yes. Can it be done with a simple patch? No. Would 
>>>> a simple patch to i915 significantly improve the situation? Yes. 
>>>> Will it solve every possible CI hang? No. Will it fix any actual 
>>>> end user visible bugs? No. Will it introduce any new bugs? No. Will 
>>>> it help us to debug at least some CI failures? Yes.
>>>
>>> To unblock, I suggest you go with the patch which caps the wait 
>>> only, and propose the wedging as an IGT patch to gem_quiescent_gpu(). 
>>> That should involve the CI/IGT folks in the discussion on what logs 
>>> will, or will not, be collected once gem_quiescent_gpu() fails due to 
>>> -ETIME. In fact you should probably copy the CI/IGT folks on the v2 
>>> of the i915 patch as well, since I now think their acks would be good 
>>> to have - from the point of view of the current test runner behaviour 
>>> with hanging tests.
>>>
>> Simply returning -ETIME without wedging will actually make the 
>> situation worse. At the moment, you get 'all testing stopped due to 
>> machine not responding' bugs being logged. Which is a right pain and 
>> has very little useful information, but at least is not claiming 
>> random tests are broken when they are not. If you return ETIME 
>> without wedging then test A will 
>
> Several times I asked why you can't wedge from gem_quiescent_gpu() 
> since that is done on driver open. So the chain of failing tests 
> described below is not relevant to my question.
Actually, no. You have mentioned gem_quiescent_gpu() once, and as an IGT 
patch - which presumably means an entire new API between IGT and i915.

>
> The whole point is: why add policy to i915 if it can be done from 
> userspace? The current API is called "wait for idle", not "wait for 
> idle ten seconds max" (although that part is fine, since IGT will fail 
> on the timeout already), and not "wait for idle or wedge, sometimes".
>
> Yes it's only debugfs, I said that multiple times already, so it could 
> be whatever, but in principle adding code to kernel should always be 
> the 2nd option. Especially since the implementation is only a 50% 
> kludge (I am referring to the 2nd DROP_IDLE branch where the proposal 
> does not add a timeout or wedging). So a half-policy even. Wedge if 
> this stage of DROP_IDLE timed out, but don't wedge if this other stage 
> of DROP_IDLE timed out or failed.
>
> Which is why I was saying: if signals would be respected anyway, why 
> couldn't you do the whole thing in IGT to start with? "Wrap" 
> gem_quiescent_gpu() with alarm(2), send SIGINT, wedge, whatever. If 
> that works it would have the same effect, with policy where it belongs 
> and zero kernel code required. If it works... I haven't checked, but 
> you said it would, so what would be wrong with this approach?
Finding someone to do it. If you are familiar with the IGT framework 
internals then feel free. I am not. Whereas, this was a trivial change 
that could improve the situation while having no bad side effects 
(because if the alternative is hanging forever then any change is a good 
change).

>
> And, completely separate from the above line of discussion, I am not 
> even sure how the "no logs" problem relates to all this. Sure, some 
> bugs result in no logs because the kernel crashes so badly. This patch 
> will not improve that.
I never said it would solve every 'missing log' situation. I said it 
would help with the situation where the CI framework times out because 
of one specific class of failures. And in that case it does currently 
reboot with little or no attempt at recovery and therefore little or no 
log capture.

>
> And if the kernel is not badly broken, the test runner will detect a 
> timeout and, sure, all further testing will be invalid. It's not the 
> first, or last, or the only way that can happen. There will be logs 
> though, so it can be debugged and fixed. (Unless there can't be logs 
> anyway.) So you find the first failing test and fix the issue. How 
> often does it happen anyway?
>
> Or if I totally got this wrong please paste some CI or CIbuglog links 
> to put me on the correct path.
As stated, there are very many bug reports of 'test timed out, 
rebooted'. It is impossible to know exactly how each particular instance 
got into that situation. So no, there is no CI report where I can 
categorically say this is exactly what happened. However, while 
debugging one such issue, I spotted this particular route into that 
situation and realised that it was something that could be trivially fixed.

Except apparently I'm not allowed to. So I give up. I don't have time to 
pursue this any further.

John.

>
> Regards,
>
> Tvrtko
>
>> hang and return ETIME. CI will log an ETIME bug against test A. CI 
>> will then try test B, which will fail with ETIME because the system 
>> is still broken but claiming to be working. So log a new bug against 
>> test B. Move on to test C, oh look, ETIME - log another bug and move 
>> on to test D... That is far worse, a whole slew of pointless and 
>> incorrect bugs have just been logged.
>>
>> And how is it possibly considered a backwards breaking or dangerous 
>> change to wedge instead of hanging forever? Reboot versus wedge. 
>> Absolutely no defined behaviour at all because the system has simply 
>> stopped versus marking the system as broken and having a best effort 
>> at handling the situation. Yup, that's definitely a very dangerous 
>> change that could break all sorts of random user applications.
>>
>> Re 'IGT folks' - whom? Ashutosh had already agreed to the original 
>> patch.
>>
>> And CI folks are certainly aware of such issues. There are any number 
>> of comments in Jiras about 'no logs available, cannot analyse'.
>>
>> John.
>>
>>
>>> Regards,
>>>
>>> Tvrtko
>>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-10  6:20                             ` John Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John Harrison @ 2022-11-10  6:20 UTC (permalink / raw)
  To: Tvrtko Ursulin, Jani Nikula, Intel-GFX; +Cc: DRI-Devel

On 11/9/2022 03:35, Tvrtko Ursulin wrote:
> On 08/11/2022 19:37, John Harrison wrote:
>> On 11/8/2022 01:08, Tvrtko Ursulin wrote:
>>> On 07/11/2022 19:45, John Harrison wrote:
>>>> On 11/7/2022 06:09, Tvrtko Ursulin wrote:
>>>>> On 04/11/2022 17:45, John Harrison wrote:
>>>>>> On 11/4/2022 03:01, Tvrtko Ursulin wrote:
>>>>>>> On 03/11/2022 19:16, John Harrison wrote:
>>>>>>>> On 11/3/2022 02:38, Tvrtko Ursulin wrote:
>>>>>>>>> On 03/11/2022 09:18, Tvrtko Ursulin wrote:
>>>>>>>>>> On 03/11/2022 01:33, John Harrison wrote:
>>>>>>>>>>> On 11/2/2022 07:20, Tvrtko Ursulin wrote:
>>>>>>>>>>>> On 02/11/2022 12:12, Jani Nikula wrote:
>>>>>>>>>>>>> On Tue, 01 Nov 2022, John.C.Harrison@Intel.com wrote:
>>>>>>>>>>>>>> From: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> At the end of each test, IGT does a drop caches call via 
>>>>>>>>>>>>>> sysfs with
>>>>>>>>>>>>>
>>>>>>>>>>>>> sysfs?
>>>>>>>>>>> Sorry, that was meant to say debugfs. I've also been working 
>>>>>>>>>>> on some sysfs IGT issues and evidently got my wires crossed!
>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> special flags set. One of the possible paths waits for 
>>>>>>>>>>>>>> idle with an
>>>>>>>>>>>>>> infinite timeout. That causes problems for debugging 
>>>>>>>>>>>>>> issues when CI
>>>>>>>>>>>>>> catches a "can't go idle" test failure. Best case, the CI 
>>>>>>>>>>>>>> system times
>>>>>>>>>>>>>> out (after 90s), attempts a bunch of state dump actions 
>>>>>>>>>>>>>> and then
>>>>>>>>>>>>>> reboots the system to recover it. Worst case, the CI 
>>>>>>>>>>>>>> system can't do
>>>>>>>>>>>>>> anything at all and then times out (after 1000s) and 
>>>>>>>>>>>>>> simply reboots.
>>>>>>>>>>>>>> Sometimes a serial port log of dmesg might be available, 
>>>>>>>>>>>>>> sometimes not.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So rather than making life hard for ourselves, change the 
>>>>>>>>>>>>>> timeout to
>>>>>>>>>>>>>> be 10s rather than infinite. Also, trigger the standard
>>>>>>>>>>>>>> wedge/reset/recover sequence so that testing can continue 
>>>>>>>>>>>>>> with a
>>>>>>>>>>>>>> working system (if possible).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>   drivers/gpu/drm/i915/i915_debugfs.c | 7 ++++++-
>>>>>>>>>>>>>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
>>>>>>>>>>>>>> b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> index ae987e92251dd..9d916fbbfc27c 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>>>>>>>>>>>>> @@ -641,6 +641,9 @@ 
>>>>>>>>>>>>>> DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
>>>>>>>>>>>>>>             DROP_RESET_ACTIVE | \
>>>>>>>>>>>>>>             DROP_RESET_SEQNO | \
>>>>>>>>>>>>>>             DROP_RCU)
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> +#define DROP_IDLE_TIMEOUT    (HZ * 10)
>>>>>>>>>>>>>
>>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is defined in i915_drv.h. It's 
>>>>>>>>>>>>> also only used
>>>>>>>>>>>>> here.
>>>>>>>>>>>>
>>>>>>>>>>>> So move here, dropping i915 prefix, next to the newly 
>>>>>>>>>>>> proposed one?
>>>>>>>>>>> Sure, can do that.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> I915_GEM_IDLE_TIMEOUT is defined in i915_gem.h. It's only 
>>>>>>>>>>>>> used in
>>>>>>>>>>>>> gt/intel_gt.c.
>>>>>>>>>>>>
>>>>>>>>>>>> Move there and rename to GT_IDLE_TIMEOUT?
>>>>>>>>>>>>
>>>>>>>>>>>>> I915_GT_SUSPEND_IDLE_TIMEOUT is defined and used only in 
>>>>>>>>>>>>> intel_gt_pm.c.
>>>>>>>>>>>>
>>>>>>>>>>>> No action needed, maybe drop i915 prefix if wanted.
>>>>>>>>>>>>
>>>>>>>>>>> These two are totally unrelated and in code not being 
>>>>>>>>>>> touched by this change. I would rather not conflate changing 
>>>>>>>>>>> random other things with fixing this specific issue.
>>>>>>>>>>>
>>>>>>>>>>>>> I915_IDLE_ENGINES_TIMEOUT is in ms, the rest are in jiffies.
>>>>>>>>>>>>
>>>>>>>>>>>> Add _MS suffix if wanted.
>>>>>>>>>>>>
>>>>>>>>>>>>> My head spins.
>>>>>>>>>>>>
>>>>>>>>>>>> I follow and raise that the newly proposed 
>>>>>>>>>>>> DROP_IDLE_TIMEOUT applies to DROP_ACTIVE and not only 
>>>>>>>>>>>> DROP_IDLE.
>>>>>>>>>>> My original intention for the name was that is the 'drop 
>>>>>>>>>>> caches timeout for intel_gt_wait_for_idle'. Which is quite 
>>>>>>>>>>> the mouthful and hence abbreviated to DROP_IDLE_TIMEOUT. But 
>>>>>>>>>>> yes, I realised later that name can be conflated with the 
>>>>>>>>>>> DROP_IDLE flag. Will rename.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Things get refactored, code moves around, bits get left 
>>>>>>>>>>>> behind, who knows. No reason to get too worked up. :) As 
>>>>>>>>>>>> long as people are taking a wider view when touching the 
>>>>>>>>>>>> code base, and are not afraid to send cleanups, things 
>>>>>>>>>>>> should be good.
>>>>>>>>>>> On the other hand, if every patch gets blocked in code 
>>>>>>>>>>> review because someone points out some completely unrelated 
>>>>>>>>>>> piece of code could be a bit better then nothing ever gets 
>>>>>>>>>>> fixed. If you spot something that you think should be 
>>>>>>>>>>> improved, isn't the general idea that you should post a 
>>>>>>>>>>> patch yourself to improve it?
>>>>>>>>>>
>>>>>>>>>> There's two maintainers per branch and an order of magnitude 
>>>>>>>>>> or two more developers so it'd be nice if cleanups would just 
>>>>>>>>>> be incoming on self-initiative basis. ;)
>>>>>>>>>>
>>>>>>>>>>>> For the actual functional change at hand - it would be nice 
>>>>>>>>>>>> if code paths in question could handle SIGINT and then we 
>>>>>>>>>>>> could punt the decision on how long someone wants to wait 
>>>>>>>>>>>> purely to userspace. But it's probably hard and it's only 
>>>>>>>>>>>> debugfs so whatever.
>>>>>>>>>>>>
>>>>>>>>>>> The code paths in question will already abort on a signal 
>>>>>>>>>>> won't they? Both intel_gt_wait_for_idle() and 
>>>>>>>>>>> intel_guc_wait_for_pending_msg(), which is where the 
>>>>>>>>>>> uc_wait_for_idle eventually ends up, have an 
>>>>>>>>>>> 'if(signal_pending) return -EINTR;' check. Beyond that, it 
>>>>>>>>>>> sounds like what you are asking for is a change in the IGT 
>>>>>>>>>>> libraries and/or CI framework to start sending signals after 
>>>>>>>>>>> some specific timeout. That seems like a significantly more 
>>>>>>>>>>> complex change (in terms of the number of entities affected 
>>>>>>>>>>> and number of groups involved) and unnecessary.
>>>>>>>>>>
>>>>>>>>>> If you say so, I haven't looked at them all. But if the code 
>>>>>>>>>> path in question already aborts on signals then I am not sure 
>>>>>>>>>> what is the patch fixing? I assumed you are trying to avoid 
>>>>>>>>>> the write stuck in D forever, which then prevents driver 
>>>>>>>>>> unload and everything, requiring the test runner to 
>>>>>>>>>> eventually reboot. If you say SIGINT works then you can 
>>>>>>>>>> already recover from userspace, no?
>>>>>>>>>>
>>>>>>>>>>>> Whether or not 10s is enough CI will hopefully tell us. I'd 
>>>>>>>>>>>> probably err on the side of safety and make it longer, but 
>>>>>>>>>>>> at most half from the test runner timeout.
>>>>>>>>>>> This is supposed to be test clean up. This is not about how 
>>>>>>>>>>> long a particular test takes to complete but about how long 
>>>>>>>>>>> it takes to declare the system broken after the test has 
>>>>>>>>>>> already finished. I would argue that even 10s is massively 
>>>>>>>>>>> longer than required.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I am not convinced that wedging is correct though. 
>>>>>>>>>>>> Conceptually could be just that the timeout is too short. 
>>>>>>>>>>>> What does wedging really give us, on top of limiting the 
>>>>>>>>>>>> wait, when latter AFAIU is the key factor which would 
>>>>>>>>>>>> prevent the need to reboot the machine?
>>>>>>>>>>>>
>>>>>>>>>>> It gives us a system that knows what state it is in. If we 
>>>>>>>>>>> can't idle the GT then something is very badly wrong. 
>>>>>>>>>>> Wedging indicates that. It also ensure that a full GT reset 
>>>>>>>>>>> will be attempted before the next test is run. Helping to 
>>>>>>>>>>> prevent a failure on test X from propagating into failures 
>>>>>>>>>>> of unrelated tests X+1, X+2, ... And if the GT reset does 
>>>>>>>>>>> not work because the system is really that badly broken then 
>>>>>>>>>>> future tests will not run rather than report erroneous 
>>>>>>>>>>> failures.
>>>>>>>>>>>
>>>>>>>>>>> This is not about getting a more stable system for end users 
>>>>>>>>>>> by sweeping issues under the carpet and pretending all is 
>>>>>>>>>>> well. End users don't run IGTs or explicitly call dodgy 
>>>>>>>>>>> debugfs entry points. The sole motivation here is to get 
>>>>>>>>>>> more accurate results from CI. That is, correctly 
>>>>>>>>>>> identifying which test has hit a problem, getting valid 
>>>>>>>>>>> debug analysis for that test (logs and such) and allowing 
>>>>>>>>>>> further testing to complete correctly in the case where the 
>>>>>>>>>>> system can be recovered.
>>>>>>>>>>
>>>>>>>>>> I don't really oppose shortening of the timeout in principle, 
>>>>>>>>>> just want a clear statement if this is something IGT / test 
>>>>>>>>>> runner could already do or not. It can apply a timeout, it 
>>>>>>>>>> can also send SIGINT, and it could even trigger a reset from 
>>>>>>>>>> outside. Sure it is debugfs hacks so general "kernel should 
>>>>>>>>>> not implement policy" need not be strictly followed, but lets 
>>>>>>>>>> have it clear what are the options.
>>>>>>>>>
>>>>>>>>> One conceptual problem with applying this policy is that the 
>>>>>>>>> code is:
>>>>>>>>>
>>>>>>>>>     if (val & (DROP_IDLE | DROP_ACTIVE)) {
>>>>>>>>>         ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
>>>>>>>>>         if (ret)
>>>>>>>>>             return ret;
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>>     if (val & DROP_IDLE) {
>>>>>>>>>         ret = intel_gt_pm_wait_for_idle(gt);
>>>>>>>>>         if (ret)
>>>>>>>>>             return ret;
>>>>>>>>>     }
>>>>>>>>>
>>>>>>>>> So if someone passes in DROP_IDLE and then why would only the 
>>>>>>>>> first branch have a short timeout and wedge. Yeah some bug 
>>>>>>>>> happens to be there at the moment, but put a bug in a 
>>>>>>>>> different place and you hang on the second branch and then 
>>>>>>>>> need another patch. Versus perhaps making it all respect 
>>>>>>>>> SIGINT and handle from outside.
>>>>>>>>>
>>>>>>>> The pm_wait_for_idle is can only called after gt_wait_for_idle 
>>>>>>>> has completed successfully. There is no route to skip the GT 
>>>>>>>> idle or to do the PM idle even if the GT idle fails. So the 
>>>>>>>> chances of the PM idle failing are greatly reduced. There would 
>>>>>>>> have to be something outside of a GT keeping the GPU awake and 
>>>>>>>> there isn't a whole lot of hardware left at that point!
>>>>>>>
>>>>>>> Well "greatly reduced" is beside my point. Point is today bug is 
>>>>>>> here and we add a timeout, tomorrow bug is there and then the 
>>>>>>> same dance. It can be just a sw bug which forgets to release the 
>>>>>>> pm ref in some circumstances, doesn't really matter.
>>>>>>>
>>>>>> Huh?
>>>>>>
>>>>>> Greatly reduced is the whole point. Today there is a bug and it 
>>>>>> causes a kernel hang which requires the CI framework to reboot 
>>>>>> the system in an extremely unfriendly way which makes it very 
>>>>>> hard to work out what happened. Logs are likely not available. We 
>>>>>> don't even necessarily know which test was being run at the time. 
>>>>>> Etc. So we replace the infinite timeout with a meaningful 
>>>>>> timeout. CI now correctly marks the single test as failing, 
>>>>>> captures all the correct logs, creates a useful bug report and 
>>>>>> continues on testing more stuff.
>>>>>
>>>>> So what is preventing CI to collect logs if IGT is forever stuck 
>>>>> in interruptible wait? Surely it can collect the logs at that 
>>>>> point if the kernel is healthy enough. If it isn't then I don't 
>>>>> see how wedging the GPU will make the kernel any healthier.
>>>>>
>>>>> Is i915 preventing better log collection or could test runner be 
>>>>> improved?
>>>>>
>>>>>> Sure, there is still the chance of hitting an infinite timeout. 
>>>>>> But that one is significantly more complicated to remove. And the 
>>>>>> chances of hitting that one are significantly smaller than the 
>>>>>> chances of hitting the first one.
>>>>>
>>>>> This statement relies on intimate knowledge implementation details 
>>>>> and a bit too much white box testing approach but that's okay, 
>>>>> lets move past this one.
>>>>>
>>>>>> So you are arguing that because I can't fix the last 0.1% of 
>>>>>> possible failures, I am not allowed to fix the first 99.9% of the 
>>>>>> failures?
>>>>>
>>>>> I am clearly not arguing for that. But we are also not talking 
>>>>> about "fixing failures" here. Just how to make CI cope better with 
>>>>> a class of i915 bugs.
>>>>>
>>>>>>>> Regarding signals, the PM idle code ends up at 
>>>>>>>> wait_var_event_killable(). I assume that is interruptible via 
>>>>>>>> at least a KILL signal if not any signal. Although it's not 
>>>>>>>> entirely clear trying to follow through the implementation of 
>>>>>>>> this code. Also, I have no idea if there is a safe way to add a 
>>>>>>>> timeout to that code (or why it wasn't already written with a 
>>>>>>>> timeout included). Someone more familiar with the wakeref 
>>>>>>>> internals would need to comment.
>>>>>>>>
>>>>>>>> However, I strongly disagree that we should not fix the driver 
>>>>>>>> just because it is possible to workaround the issue by 
>>>>>>>> re-writing the CI framework. Feel free to bring a redesign plan 
>>>>>>>> to the IGT WG and whatever equivalent CI meetings in parallel. 
>>>>>>>> But we absolutely should not have infinite waits in the kernel 
>>>>>>>> if there is a trivial way to not have infinite waits.
>>>>>>>
>>>>>>> I thought I was clear that I am not really opposed to the timeout.
>>>>>>>
>>>>>>> The rest of the paragraph I don't really care - point is moot 
>>>>>>> because it's debugfs so we can do whatever, as long as it is not 
>>>>>>> burdensome to i915, which this isn't. If either wasn't the case 
>>>>>>> then we certainly wouldn't be adding any workarounds in the 
>>>>>>> kernel if it can be achieved in IGT.
>>>>>>>
>>>>>>>> Also, sending a signal does not result in the wedge happening. 
>>>>>>>> I specifically did not want to change that code path because I 
>>>>>>>> was assuming there was a valid reason for it. If you have been 
>>>>>>>> interrupted then you are in the territory of maybe it would 
>>>>>>>> have succeeded if you just left it for a moment longer. 
>>>>>>>> Whereas, hitting the timeout says that someone very 
>>>>>>>> deliberately said this is too long to wait and therefore the 
>>>>>>>> system must be broken.
>>>>>>>
>>>>>>> I wanted to know specifically about wedging - why can't you 
>>>>>>> wedge/reset from IGT if DROP_IDLE times out in quiescent or 
>>>>>>> wherever, if that's what you say is the right thing? 
>>>>>> Huh?
>>>>>>
>>>>>> DROP_IDLE has two waits. One that I am trying to change from 
>>>>>> infinite to finite + wedge. One that would take considerable 
>>>>>> effort to change and would be quite invasive to a lot more of the 
>>>>>> driver and which can only be hit if the first timeout actually 
>>>>>> completed successfully and is therefore of less importance 
>>>>>> anyway. Both of those time outs appear to respect signal interrupts.
>>>>>>
>>>>>>> That's a policy decision so why would i915 wedge if an arbitrary 
>>>>>>> timeout expired? I915 is not controlling how much work there is 
>>>>>>> outstanding at the point the IGT decides to call DROP_IDLE.
>>>>>>
>>>>>> Because this is a debug test interface that is used solely by IGT 
>>>>>> after it has finished its testing. This is not about wedging the 
>>>>>> device at some random arbitrary point because an AI compute 
>>>>>> workload takes three hours to complete. This is about a very 
>>>>>> specific test framework cleaning up after testing is completed 
>>>>>> and making sure the test did not fry the system.
>>>>>>
>>>>>> And even if an IGT test was calling DROP_IDLE in the middle of a 
>>>>>> test for some reason, it should not be deliberately pushing 10+ 
>>>>>> seconds of work through and then calling a debug only interface 
>>>>>> to flush it out. If a test wants to verify that the system can 
>>>>>> cope with submitting a minutes worth of rendering and then 
>>>>>> waiting for it to complete then the test should be using official 
>>>>>> channels for that wait.
>>>>>>
>>>>>>>
>>>>>>>> Plus, infinite wait is not a valid code path in the first place 
>>>>>>>> so any change in behaviour is not really a change in behaviour. 
>>>>>>>> Code can't be relying on a kernel call to never return for its 
>>>>>>>> correct operation!
>>>>>>>
>>>>>>> Why infinite wait wouldn't be valid? Then you better change the 
>>>>>>> other one as well. ;P
>>>>>> In what universe is it ever valid to wait forever for a test to 
>>>>>> complete?
>>>>>
>>>>> Well above you claimed both paths respect SIGINT. If that is so 
>>>>> then the wait is as infinite as the IGT wanted it to be.
>>>>>
>>>>>> See above, the PM code would require much more invasive changes. 
>>>>>> This was low hanging fruit. It was supposed to be a two minute 
>>>>>> change to a very self contained section of code that would 
>>>>>> provide significant benefit to debugging a small class of very 
>>>>>> hard to debug problems.
>>>>>
>>>>> Sure, but I'd still like to know why can't you do what you want 
>>>>> from the IGT framework.
>>>>>
>>>>> Have the timeout reduction in i915, again that's fine assuming 10 
>>>>> seconds it enough to not break something by accident.
>>>> CI showed no regressions. And if someone does find a valid reason 
>>>> why a post test drop caches call should legitimately take a 
>>>> stupidly long time then it is easy to track back where the ETIME 
>>>> error came from and bump the timeout.
>>>>
>>>>>
>>>>> With that change you already have broken the "infinite wait". It 
>>>>> makes the debugfs write return -ETIME in time much shorter than 
>>>>> the test runner timeout(s). What is the thing that you cannot do 
>>>>> from IGT at that point is my question? You want to wedge then? 
>>>>> Send DROP_RESET_ACTIVE to do it for you? If that doesn't work add 
>>>>> a new flag which will wedge explicitly.
>>>>>
>>>>> We are again degrading into a huge philosophical discussion and 
>>>>> all I wanted to start with is to hear how exactly things go bad.
>>>>>
>>>> I have no idea what you are wanting. I am trying to have a 
>>>> technical discussion about improving the stability of the driver 
>>>> during CI testing. I have no idea if you are arguing that this 
>>>> change is good, bad, broken, wrong direction or what.
>>>>
>>>> Things go bad as explained in the commit message. The CI framework 
>>>> does not use signals. The IGT framework does not use signals. There 
>>>> is no watchdog that sends a TERM or KILL signal after a specified 
>>>> timeout. All that happens is the IGT sits there forever waiting for 
>>>> the drop caches IOCTL to return. The CI framework eventually gives 
>>>> up waiting for the test to complete and tries to recover. There are 
>>>> many different CI frameworks in use across Intel. Some timeout 
>>>> quickly, some timeout slowly. But basically, they all eventually 
>>>> give up and don't bother trying any kind of remedial action but 
>>>> just hit the reset button (sometimes by literally power cycling the 
>>>> DUT). As result, background processes that are saving dmesg, 
>>>> stdout, etc do not necessarily terminate cleanly. That results in 
>>>> logs that are at best truncated, at worst missing entirely. It also 
>>>> results in some frameworks aborting testing at that point. So no 
>>>> results are generated for all the other tests that have yet to be 
>>>> run. Some frameworks also run tests in batches. All they log is 
>>>> that something, somewhere in the batch died. So you don't even know 
>>>> which specific test actually hit the problem.
>>>>
>>>> Can the CI frameworks be improved? Undoubtedly. In very many ways. 
>>>> Is that something we have the ability to do with a simple patch? 
>>>> No. Would re-writing the IGT framework to add watchdog mechanisms 
>>>> improve things? Yes. Can it be done with a simple patch? No. Would 
>>>> a simple patch to i915 significantly improve the situation? Yes. 
>>>> Will it solve every possible CI hang? No. Will it fix any actual 
>>>> end user visible bugs? No. Will it introduce any new bugs? No. Will 
>>>> it help us to debug at least some CI failures? Yes.
>>>
>>> To unblock, I suggest you go with the patch which caps the wait 
>>> only, and propose a wedging as an IGT patch to gem_quiescent_gpu(). 
>>> That should involve the CI/IGT folks into discussion on what logs 
>>> will be, or will not be collected once gem_quiescent_gpu() fails due 
>>> -ETIME. In fact probably you should copy CI/IGT folks on the v2 of 
>>> the i915 patch as well since I now think their acks would be good to 
>>> have - from the point of view of the current test runner behaviour 
>>> with hanging tests.
>>>
>> Simply returning -ETIME without wedging will actually make the 
>> situation worse. At the moment, you get 'all testing stopped due to 
>> machine not responding' bugs being logged. Which is a right pain and 
>> has very little useful information, but at least is not claiming 
>> random tests are broken when they are not. If you return ETIME 
>> without wedging then test A will 
>
> Several times I asked why can't you wedge from gem_quiescent_gpu() 
> since that is done on driver open. So the chain of failing tests 
> describing below is not relevant to my question.
Actually, no. You have mentioned gem_quiescent_gpu() once and as an IGT 
patch. Which presumably means an entire new API between IGT and i915.

>
> Whole point is why add policy to i915 if it can be done from 
> userspace. Current API is called "wait for idle", not "wait for idle 
> ten seconds max" (although fine since IGT will fail on timeout 
> already), and not "wait for idle or wedge, sometimes".
>
> Yes, it's only debugfs - I said that multiple times already - so it 
> could be whatever, but in principle adding code to the kernel should 
> always be the second option. Especially since the implementation is 
> only a 50% kludge (I am referring to the second DROP_IDLE branch, 
> where the proposal does not add a timeout or wedging). So a 
> half-policy even: wedge if this stage of DROP_IDLE timed out, but 
> don't wedge if this other stage of DROP_IDLE timed out or failed.
>
> Which is why I was saying: if signals would be respected anyway, why 
> couldn't you do the whole thing in IGT to start with? "Wrap" 
> gem_quiescent_gpu() with alarm(2), send SIGINT, wedge, whatever. If 
> that works it would have the same effect. And policy where it 
> belongs, with zero kernel code required. If it works... I haven't 
> checked, but you said it would, so what would be wrong with this 
> approach?
Finding someone to do it. If you are familiar with the IGT framework 
internals then feel free. I am not. Whereas this was a trivial change 
that could improve the situation while having no bad side effects 
(because if the alternative is hanging forever, then any change is a 
good change).
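
For what it's worth, my understanding of that suggestion is something
like the untested sketch below. gem_quiescent_gpu() is the existing
IGT helper; the i915_wedged write is my assumption for how to force
the wedge/reset from userspace, and the dri/0 path assumes the first
DRM device:

#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <unistd.h>

#include "igt_gt.h"	/* gem_quiescent_gpu() */

static sigjmp_buf quiesce_timeout;

static void on_alarm(int sig)
{
	(void)sig;
	siglongjmp(quiesce_timeout, 1);
}

/* Untested: bound gem_quiescent_gpu() with a 10s SIGALRM watchdog. */
static void gem_quiescent_gpu_timed(int i915)
{
	struct sigaction sa = { .sa_handler = on_alarm };

	/* No SA_RESTART, so the blocking wait gets interrupted. */
	sigemptyset(&sa.sa_mask);
	sigaction(SIGALRM, &sa, NULL);

	if (sigsetjmp(quiesce_timeout, 1) == 0) {
		alarm(10);		/* arm the watchdog */
		gem_quiescent_gpu(i915);
		alarm(0);		/* idle reached, disarm */
	} else {
		/* Timed out: wedge/reset from userspace so testing can
		 * continue, mirroring what the kernel patch does. */
		int fd = open("/sys/kernel/debug/dri/0/i915_wedged",
			      O_WRONLY);

		if (fd >= 0) {
			write(fd, "-1", 2);	/* reset all engines */
			close(fd);
		}
	}
}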

>
> And, completely separate from the above line of discussion, I am not 
> even sure how the "no logs" issue relates to all this. Sure, some 
> bugs result in no logs because the kernel crashes so badly. This 
> patch will not improve that.
I never said it would solve every 'missing log' situation. I said it 
would help with the situation where the CI framework times out because 
of one specific class of failures. And in that case the framework does 
currently reboot with little or no attempt at recovery, and therefore 
little or no log capture.

>
> And if the kernel is not badly broken, the test runner will detect a 
> timeout and, sure, all further testing will be invalid. It's not the 
> first, the last, or the only way that can happen. There will be logs, 
> though, so it can be debugged and fixed. (Unless there can't be logs 
> anyway.) So you find the first failing test and fix the issue. How 
> often does it happen anyway?
>
> Or, if I have totally got this wrong, please paste some CI or 
> CIbuglog links to put me on the correct path.
As stated, there are very many bug reports of 'test timed out, 
rebooted'. It is impossible to know exactly how each particular instance 
got into that situation. So no, there is no CI report where I can 
categorically say this is exactly what happened. However, while 
debugging one such issue, I spotted this particular route into that 
situation and realised that it was something that could be trivially fixed.

Except apparently I'm not allowed to. So I give up. I don't have time to 
pursue this any further.

John.

>
> Regards,
>
> Tvrtko
>
>> hang and return -ETIME. CI will log an -ETIME bug against test A. 
>> CI will then try test B, which will fail with -ETIME because the 
>> system is still broken but claiming to be working. So it logs a new 
>> bug against test B. Move on to test C - oh look, -ETIME - log 
>> another bug and move on to test D... That is far worse: a whole 
>> slew of pointless and incorrect bugs have just been logged.
>>
>> And how is it possibly considered a backwards-breaking or dangerous 
>> change to wedge instead of hanging forever? Reboot versus wedge. 
>> Absolutely no defined behaviour at all, because the system has 
>> simply stopped, versus marking the system as broken and making a 
>> best effort at handling the situation. Yup, that's definitely a very 
>> dangerous change that could break all sorts of random user 
>> applications.
>>
>> Re 'IGT folks' - whom? Ashutosh had already agreed to the original 
>> patch.
>>
>> And CI folks are certainly aware of such issues. There are any number 
>> of comments in Jiras about 'no logs available, cannot analyse'.
>>
>> John.
>>
>>
>>> Regards,
>>>
>>> Tvrtko
>>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-gfx] [PATCH] drm/i915: Don't wait forever in drop_caches
@ 2022-11-03  1:35 John.C.Harrison
  0 siblings, 0 replies; 31+ messages in thread
From: John.C.Harrison @ 2022-11-03  1:35 UTC (permalink / raw)
  To: Intel-GFX; +Cc: DRI-Devel

From: John Harrison <John.C.Harrison@Intel.com>

At the end of each test, IGT does a drop caches call via debugfs with
special flags set. One of the possible paths waits for idle with an
infinite timeout. That causes problems for debugging issues when CI
catches a "can't go idle" test failure. Best case, the CI system times
out (after 90s), attempts a bunch of state dump actions and then
reboots the system to recover it. Worst case, the CI system can't do
anything at all and then times out (after 1000s) and simply reboots.
Sometimes a serial port log of dmesg might be available, sometimes not.

So rather than making life hard for ourselves, change the timeout to
be 10s rather than infinite. Also, trigger the standard
wedge/reset/recover sequence so that testing can continue with a
working system (if possible).

v2: Rationalise timeout defines (review feedback from Jani & Tvrtko).

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 10 ++++++++--
 drivers/gpu/drm/i915/i915_drv.h     |  2 --
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index ae987e92251dd..a224584ea4eb1 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -621,6 +621,9 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
 			i915_perf_noa_delay_set,
 			"%llu\n");
 
+#define DROPCACHE_IDLE_ENGINES_TIMEOUT_MS	200
+#define DROPCACHE_IDLE_GT_TIMEOUT		(HZ * 10)
+
 #define DROP_UNBOUND	BIT(0)
 #define DROP_BOUND	BIT(1)
 #define DROP_RETIRE	BIT(2)
@@ -641,6 +644,7 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops,
 		  DROP_RESET_ACTIVE | \
 		  DROP_RESET_SEQNO | \
 		  DROP_RCU)
+
 static int
 i915_drop_caches_get(void *data, u64 *val)
 {
@@ -654,14 +658,16 @@ gt_drop_caches(struct intel_gt *gt, u64 val)
 	int ret;
 
 	if (val & DROP_RESET_ACTIVE &&
-	    wait_for(intel_engines_are_idle(gt), I915_IDLE_ENGINES_TIMEOUT))
+	    wait_for(intel_engines_are_idle(gt), DROPCACHE_IDLE_ENGINES_TIMEOUT_MS))
 		intel_gt_set_wedged(gt);
 
 	if (val & DROP_RETIRE)
 		intel_gt_retire_requests(gt);
 
 	if (val & (DROP_IDLE | DROP_ACTIVE)) {
-		ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT);
+		ret = intel_gt_wait_for_idle(gt, DROPCACHE_IDLE_GT_TIMEOUT);
+		if (ret == -ETIME)
+			intel_gt_set_wedged(gt);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 05b3300cc4edf..4c2adaad8e9ed 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -162,8 +162,6 @@ struct i915_gem_mm {
 	u32 shrink_count;
 };
 
-#define I915_IDLE_ENGINES_TIMEOUT (200) /* in ms */
-
 unsigned long i915_fence_context_timeout(const struct drm_i915_private *i915,
 					 u64 context);
 
-- 
2.37.3


^ permalink raw reply related	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2022-11-10  6:20 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-01 23:50 [PATCH] drm/i915: Don't wait forever in drop_caches John.C.Harrison
2022-11-01 23:50 ` [Intel-gfx] " John.C.Harrison
2022-11-02  0:10 ` [Intel-gfx] ✗ Fi.CI.DOCS: warning for " Patchwork
2022-11-02  0:29 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-11-02  9:13 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2022-11-02 12:12 ` [PATCH] " Jani Nikula
2022-11-02 12:12   ` [Intel-gfx] " Jani Nikula
2022-11-02 14:20   ` Tvrtko Ursulin
2022-11-03  1:33     ` John Harrison
2022-11-03  9:18       ` Tvrtko Ursulin
2022-11-03  9:38         ` Tvrtko Ursulin
2022-11-03 19:16           ` John Harrison
2022-11-04 10:01             ` Tvrtko Ursulin
2022-11-04 17:45               ` John Harrison
2022-11-04 17:45                 ` John Harrison
2022-11-07 14:09                 ` Tvrtko Ursulin
2022-11-07 14:09                   ` Tvrtko Ursulin
2022-11-07 19:45                   ` John Harrison
2022-11-07 19:45                     ` John Harrison
2022-11-08  9:08                     ` Tvrtko Ursulin
2022-11-08  9:08                       ` Tvrtko Ursulin
2022-11-08 19:37                       ` John Harrison
2022-11-08 19:37                         ` John Harrison
2022-11-09 11:35                         ` Tvrtko Ursulin
2022-11-09 11:35                           ` Tvrtko Ursulin
2022-11-10  6:20                           ` John Harrison
2022-11-10  6:20                             ` John Harrison
2022-11-03 19:37         ` John Harrison
2022-11-03 10:45       ` Jani Nikula
2022-11-03 19:39         ` John Harrison
2022-11-03  1:35 John.C.Harrison
