* [Intel-gfx] [PATCH i-g-t 0/1] Fix gem_scheduler.manycontexts for GuC submission
@ 2021-07-27 18:20 ` Matthew Brost
From: Matthew Brost @ 2021-07-27 18:20 UTC
  To: igt-dev; +Cc: intel-gfx

The patch should explain it all. I will include it in [1] when that
series is respun.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

[1] https://patchwork.freedesktop.org/series/93071/

Matthew Brost (1):
  i915/gem_scheduler: Ensure submission order in manycontexts

 tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

-- 
2.28.0



* [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
@ 2021-07-27 18:20   ` Matthew Brost
From: Matthew Brost @ 2021-07-27 18:20 UTC
  To: igt-dev; +Cc: intel-gfx

With GuC submission, contexts can get reordered (compared to submission
order). If contexts get reordered, the sequential nature of the batches
releasing the next batch's semaphore in timesliceN() gets broken,
resulting in the test taking much longer than it should (e.g. every
context needs to be timesliced to release the next batch). Corking the
first submission until all the batches have been submitted should
ensure submission order.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index f03842478..41f2591a5 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
 	struct drm_i915_gem_execbuffer2 execbuf  = {
 		.buffers_ptr = to_user_pointer(&obj),
 		.buffer_count = 1,
-		.flags = engine | I915_EXEC_FENCE_OUT,
+		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
 	};
 	uint32_t *result =
 		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
 	const intel_ctx_t *ctx;
 	int fence[count];
+	IGT_CORK_FENCE(cork);
 
 	/*
 	 * Create a pair of interlocking batches, that ping pong
@@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
 	igt_require(gem_scheduler_has_timeslicing(i915));
 	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
 
+	/*
+	 * With GuC submission contexts can get reordered (compared to
+	 * submission order), if contexts get reordered the sequential
+	 * nature of the batches releasing the next batch's semaphore gets
+	 * broken resulting in the test taking much longer than it should (e.g.
+	 * every context needs to be timesliced to release the next batch).
+	 * Corking the first submission until all batches have been
+	 * submitted should ensure submission order.
+	 */
+	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
+
 	/* No coupling between requests; free to timeslice */
 
 	for (int i = 0; i < count; i++) {
@@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
 		intel_ctx_destroy(i915, ctx);
 
 		fence[i] = execbuf.rsvd2 >> 32;
+		execbuf.rsvd2 >>= 32;
 	}
 
+	igt_cork_unplug(&cork);
 	gem_sync(i915, obj.handle);
 	gem_close(i915, obj.handle);
 
-- 
2.28.0
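
A note on the mechanism for readers less familiar with IGT:
IGT_CORK_FENCE() declares a software fence that stays unsignalled until
igt_cork_unplug() is called, and igt_cork_plug() returns the fd of that
fence. Combined with I915_EXEC_FENCE_SUBMIT (gate this batch's submission
on the in-fence) and I915_EXEC_FENCE_OUT (return an out-fence for this
batch), the loop in the patch builds an ordered chain. The following is a
minimal sketch of that pattern, not the test itself (it assumes IGT's
gem_execbuf_wr() execbuf wrapper; batch setup is elided and the out-fence
fds are left open for brevity):

	/* execbuf.flags must include I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT */
	IGT_CORK_FENCE(cork);

	/* Gate the first submission on the cork's fence... */
	execbuf.rsvd2 = igt_cork_plug(&cork, i915);

	for (int i = 0; i < count; i++) {
		/* The kernel returns the out-fence fd in the upper half of rsvd2. */
		gem_execbuf_wr(i915, &execbuf);

		/* ...and each out-fence gates the next batch's submission. */
		execbuf.rsvd2 >>= 32;
	}

	/* Nothing has been submitted to the hardware yet; unplugging the
	 * cork releases the whole chain, so the batches are submitted back
	 * to back and in order.
	 */
	igt_cork_unplug(&cork);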



* [igt-dev] ✓ Fi.CI.BAT: success for Fix gem_scheduler.manycontexts for GuC submission
@ 2021-07-27 18:48 ` Patchwork
From: Patchwork @ 2021-07-27 18:48 UTC
  To: Matthew Brost; +Cc: igt-dev



== Series Details ==

Series: Fix gem_scheduler.manycontexts for GuC submission
URL   : https://patchwork.freedesktop.org/series/93077/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10405 -> IGTPW_6066
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/index.html

Known issues
------------

  Here are the changes found in IGTPW_6066 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@semaphore:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][1] ([fdo#109271]) +25 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/fi-bdw-5557u/igt@amdgpu/amd_basic@semaphore.html

  * igt@core_hotunplug@unbind-rebind:
    - fi-bdw-5557u:       NOTRUN -> [WARN][2] ([i915#3718])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/fi-bdw-5557u/igt@core_hotunplug@unbind-rebind.html

  * igt@i915_pm_rpm@basic-rte:
    - fi-bdw-5557u:       NOTRUN -> [FAIL][3] ([i915#579])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/fi-bdw-5557u/igt@i915_pm_rpm@basic-rte.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@migrate:
    - {fi-hsw-gt1}:       [FAIL][4] -> [PASS][5]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/fi-hsw-gt1/igt@i915_selftest@live@migrate.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/fi-hsw-gt1/igt@i915_selftest@live@migrate.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [FAIL][6] ([i915#1372]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#2927]: https://gitlab.freedesktop.org/drm/intel/issues/2927
  [i915#2966]: https://gitlab.freedesktop.org/drm/intel/issues/2966
  [i915#3718]: https://gitlab.freedesktop.org/drm/intel/issues/3718
  [i915#579]: https://gitlab.freedesktop.org/drm/intel/issues/579


Participating hosts (41 -> 35)
------------------------------

  Missing    (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan bat-adlp-4 fi-bdw-samus bat-jsl-1 


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_6153 -> IGTPW_6066

  CI-20190529: 20190529
  CI_DRM_10405: 6db19b5e1fac016d9dffa6ce54aa21f3200c5c8d @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_6066: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/index.html
  IGT_6153: a5dffe7499a2f7189718ddf1ccf49060b7c1529d @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/index.html


* [igt-dev] ✗ GitLab.Pipeline: warning for Fix gem_scheduler.manycontexts for GuC submission
@ 2021-07-27 21:29 ` Patchwork
From: Patchwork @ 2021-07-27 21:29 UTC
  To: Matthew Brost; +Cc: igt-dev

== Series Details ==

Series: Fix gem_scheduler.manycontexts for GuC submission
URL   : https://patchwork.freedesktop.org/series/93077/
State : warning

== Summary ==

Pipeline status: FAILED.

see https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/368537 for the overview.

test:ninja-test-arm64 has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/12221207):
      return options.run_func(options)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 805, in run
      return th.doit()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 555, in doit
      self.run_tests(tests)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 715, in run_tests
      self.drain_futures(futures)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 732, in drain_futures
      self.print_stats(numlen, tests, name, result.result(), i)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 505, in print_stats
      result_str += "\n\n" + result.get_log()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 178, in get_log
      res += self.stde
  TypeError: can only concatenate str (not "bytes") to str
  FAILED: meson-test 
  /usr/bin/meson test --no-rebuild --print-errorlogs
  ninja: build stopped: subcommand failed.
  section_end:1627421100:step_script
  ERROR: Job failed: execution took longer than 1h0m0s seconds
  

test:ninja-test-armhf has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/12221209):
      return options.run_func(options)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 805, in run
      return th.doit()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 555, in doit
      self.run_tests(tests)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 715, in run_tests
      self.drain_futures(futures)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 732, in drain_futures
      self.print_stats(numlen, tests, name, result.result(), i)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 505, in print_stats
      result_str += "\n\n" + result.get_log()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 178, in get_log
      res += self.stde
  TypeError: can only concatenate str (not "bytes") to str
  FAILED: meson-test 
  /usr/bin/meson test --no-rebuild --print-errorlogs
  ninja: build stopped: subcommand failed.
  section_end:1627421101:step_script
  ERROR: Job failed: execution took longer than 1h0m0s seconds
  

test:ninja-test-minimal has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/12221208):
      return options.run_func(options)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 805, in run
      return th.doit()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 555, in doit
      self.run_tests(tests)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 715, in run_tests
      self.drain_futures(futures)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 732, in drain_futures
      self.print_stats(numlen, tests, name, result.result(), i)
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 505, in print_stats
      result_str += "\n\n" + result.get_log()
    File "/usr/lib/python3/dist-packages/mesonbuild/mtest.py", line 178, in get_log
      res += self.stde
  TypeError: can only concatenate str (not "bytes") to str
  FAILED: meson-test 
  /usr/bin/meson test --no-rebuild --print-errorlogs
  ninja: build stopped: subcommand failed.
  section_end:1627421100:step_script
  ERROR: Job failed: execution took longer than 1h0m0s seconds

== Logs ==

For more details see: https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/368537

* [igt-dev] ✗ Fi.CI.IGT: failure for Fix gem_scheduler.manycontexts for GuC submission
@ 2021-07-28  4:09 ` Patchwork
From: Patchwork @ 2021-07-28  4:09 UTC
  To: Matthew Brost; +Cc: igt-dev



== Series Details ==

Series: Fix gem_scheduler.manycontexts for GuC submission
URL   : https://patchwork.freedesktop.org/series/93077/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10405_full -> IGTPW_6066_full
==============================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_6066_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_6066_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_6066_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_selftest@all@damage_iter_no_damage:
    - shard-apl:          NOTRUN -> [INCOMPLETE][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@kms_selftest@all@damage_iter_no_damage.html

  
New tests
---------

  New tests have been introduced between CI_DRM_10405_full and IGTPW_6066_full:

### New IGT tests (1) ###

  * igt@kms_busy@extended-pageflip-hang-oldfb@pipe-b:
    - Statuses : 1 pass(s)
    - Exec time: [0.17] s

  

Known issues
------------

  Here are the changes found in IGTPW_6066_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@feature_discovery@chamelium:
    - shard-tglb:         NOTRUN -> [SKIP][2] ([fdo#111827])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb7/igt@feature_discovery@chamelium.html
    - shard-iclb:         NOTRUN -> [SKIP][3] ([fdo#111827])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb3/igt@feature_discovery@chamelium.html

  * igt@feature_discovery@psr2:
    - shard-iclb:         [PASS][4] -> [SKIP][5] ([i915#658])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb2/igt@feature_discovery@psr2.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb4/igt@feature_discovery@psr2.html

  * igt@gem_ctx_persistence@engines-mixed:
    - shard-snb:          NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#1099]) +6 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb7/igt@gem_ctx_persistence@engines-mixed.html

  * igt@gem_ctx_persistence@many-contexts:
    - shard-tglb:         [PASS][7] -> [FAIL][8] ([i915#2410])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-tglb2/igt@gem_ctx_persistence@many-contexts.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb1/igt@gem_ctx_persistence@many-contexts.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][9] -> [TIMEOUT][10] ([i915#2369] / [i915#3063] / [i915#3648])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-tglb1/igt@gem_eio@unwedge-stress.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb7/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglb:         [PASS][11] -> [FAIL][12] ([i915#2842]) +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-tglb1/igt@gem_exec_fair@basic-flow@rcs0.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none-vip@rcs0:
    - shard-kbl:          [PASS][13] -> [FAIL][14] ([i915#2842]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-kbl7/igt@gem_exec_fair@basic-none-vip@rcs0.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl3/igt@gem_exec_fair@basic-none-vip@rcs0.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-apl:          [PASS][15] -> [FAIL][16] ([i915#2842] / [i915#3468])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-apl8/igt@gem_exec_fair@basic-none@vecs0.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl6/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-iclb:         [PASS][17] -> [FAIL][18] ([i915#2842])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb2/igt@gem_exec_fair@basic-pace@vecs0.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb1/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          [PASS][19] -> [FAIL][20] ([i915#2842]) +4 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-glk6/igt@gem_exec_fair@basic-throttle@rcs0.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk3/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_exec_params@no-blt:
    - shard-tglb:         NOTRUN -> [SKIP][21] ([fdo#109283])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb1/igt@gem_exec_params@no-blt.html
    - shard-iclb:         NOTRUN -> [SKIP][22] ([fdo#109283])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb1/igt@gem_exec_params@no-blt.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][23] ([fdo#109271] / [i915#2190])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl2/igt@gem_huc_copy@huc-copy.html

  * igt@gem_pread@exhaustion:
    - shard-snb:          NOTRUN -> [WARN][24] ([i915#2658])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb5/igt@gem_pread@exhaustion.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-apl:          NOTRUN -> [WARN][25] ([i915#2658])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl7/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_userptr_blits@access-control:
    - shard-tglb:         NOTRUN -> [SKIP][26] ([i915#3297])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb1/igt@gem_userptr_blits@access-control.html

  * igt@gem_userptr_blits@input-checking:
    - shard-tglb:         NOTRUN -> [DMESG-WARN][27] ([i915#3002])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb5/igt@gem_userptr_blits@input-checking.html
    - shard-glk:          NOTRUN -> [DMESG-WARN][28] ([i915#3002])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk2/igt@gem_userptr_blits@input-checking.html
    - shard-iclb:         NOTRUN -> [DMESG-WARN][29] ([i915#3002])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb5/igt@gem_userptr_blits@input-checking.html
    - shard-kbl:          NOTRUN -> [DMESG-WARN][30] ([i915#3002])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl7/igt@gem_userptr_blits@input-checking.html
    - shard-snb:          NOTRUN -> [DMESG-WARN][31] ([i915#3002])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb7/igt@gem_userptr_blits@input-checking.html

  * igt@gen7_exec_parse@basic-allowed:
    - shard-tglb:         NOTRUN -> [SKIP][32] ([fdo#109289])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb5/igt@gen7_exec_parse@basic-allowed.html
    - shard-iclb:         NOTRUN -> [SKIP][33] ([fdo#109289]) +1 similar issue
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb6/igt@gen7_exec_parse@basic-allowed.html

  * igt@gen9_exec_parse@bb-start-far:
    - shard-iclb:         NOTRUN -> [SKIP][34] ([i915#2856])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb2/igt@gen9_exec_parse@bb-start-far.html
    - shard-tglb:         NOTRUN -> [SKIP][35] ([i915#2856])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@gen9_exec_parse@bb-start-far.html

  * igt@i915_pm_rc6_residency@media-rc6-accuracy:
    - shard-tglb:         NOTRUN -> [SKIP][36] ([fdo#109289] / [fdo#111719])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb7/igt@i915_pm_rc6_residency@media-rc6-accuracy.html

  * igt@i915_pm_rpm@basic-rte:
    - shard-apl:          NOTRUN -> [FAIL][37] ([i915#579])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl1/igt@i915_pm_rpm@basic-rte.html
    - shard-tglb:         NOTRUN -> [FAIL][38] ([i915#579])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@i915_pm_rpm@basic-rte.html

  * igt@i915_pm_rpm@fences:
    - shard-tglb:         NOTRUN -> [SKIP][39] ([i915#579]) +1 similar issue
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@i915_pm_rpm@fences.html

  * igt@kms_big_fb@linear-32bpp-rotate-180:
    - shard-glk:          NOTRUN -> [DMESG-WARN][40] ([i915#118] / [i915#95])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk1/igt@kms_big_fb@linear-32bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-apl:          NOTRUN -> [SKIP][41] ([fdo#109271] / [i915#3777]) +2 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl7/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@y-tiled-32bpp-rotate-0:
    - shard-glk:          [PASS][42] -> [DMESG-WARN][43] ([i915#118] / [i915#95]) +2 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-glk7/igt@kms_big_fb@y-tiled-32bpp-rotate-0.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk1/igt@kms_big_fb@y-tiled-32bpp-rotate-0.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-glk:          NOTRUN -> [SKIP][44] ([fdo#109271] / [i915#3777])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk3/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
    - shard-kbl:          NOTRUN -> [SKIP][45] ([fdo#109271] / [i915#3777])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-iclb:         NOTRUN -> [DMESG-WARN][46] ([i915#3621])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([fdo#111615])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
    - shard-iclb:         NOTRUN -> [SKIP][48] ([fdo#110723])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb2/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
    - shard-iclb:         NOTRUN -> [SKIP][49] ([fdo#109278]) +11 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb6/igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html
    - shard-tglb:         NOTRUN -> [SKIP][50] ([i915#3689]) +3 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb5/igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium@hdmi-edid-change-during-suspend:
    - shard-apl:          NOTRUN -> [SKIP][51] ([fdo#109271] / [fdo#111827]) +22 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl2/igt@kms_chamelium@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - shard-iclb:         NOTRUN -> [SKIP][52] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb6/igt@kms_chamelium@hdmi-hpd-fast.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-25:
    - shard-snb:          NOTRUN -> [SKIP][53] ([fdo#109271] / [fdo#111827]) +19 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb7/igt@kms_color_chamelium@pipe-a-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-b-ctm-limited-range:
    - shard-tglb:         NOTRUN -> [SKIP][54] ([fdo#109284] / [fdo#111827]) +3 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@kms_color_chamelium@pipe-b-ctm-limited-range.html
    - shard-glk:          NOTRUN -> [SKIP][55] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk3/igt@kms_color_chamelium@pipe-b-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-d-ctm-negative:
    - shard-iclb:         NOTRUN -> [SKIP][56] ([fdo#109278] / [fdo#109284] / [fdo#111827])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb4/igt@kms_color_chamelium@pipe-d-ctm-negative.html
    - shard-kbl:          NOTRUN -> [SKIP][57] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl6/igt@kms_color_chamelium@pipe-d-ctm-negative.html

  * igt@kms_content_protection@uevent:
    - shard-apl:          NOTRUN -> [FAIL][58] ([i915#2105])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-512x170-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][59] ([fdo#109279] / [i915#3359]) +2 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@kms_cursor_crc@pipe-a-cursor-512x170-onscreen.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-offscreen:
    - shard-iclb:         NOTRUN -> [SKIP][60] ([fdo#109278] / [fdo#109279]) +1 similar issue
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb7/igt@kms_cursor_crc@pipe-b-cursor-512x512-offscreen.html

  * igt@kms_cursor_crc@pipe-b-cursor-max-size-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][61] ([i915#3359])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb3/igt@kms_cursor_crc@pipe-b-cursor-max-size-sliding.html

  * igt@kms_cursor_crc@pipe-d-cursor-32x32-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][62] ([i915#3319])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@kms_cursor_crc@pipe-d-cursor-32x32-sliding.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge:
    - shard-snb:          NOTRUN -> [SKIP][63] ([fdo#109271]) +379 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb7/igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge.html

  * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([fdo#109274] / [fdo#109278]) +1 similar issue
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb2/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html

  * igt@kms_flip@2x-nonexisting-fb-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][65] ([fdo#109274]) +2 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb8/igt@kms_flip@2x-nonexisting-fb-interruptible.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs:
    - shard-apl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#2672])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl8/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-blt:
    - shard-tglb:         NOTRUN -> [SKIP][67] ([fdo#111825]) +13 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-pwrite:
    - shard-iclb:         NOTRUN -> [SKIP][68] ([fdo#109280]) +7 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-mmap-cpu:
    - shard-glk:          NOTRUN -> [SKIP][69] ([fdo#109271]) +35 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk3/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-pwrite:
    - shard-kbl:          NOTRUN -> [SKIP][70] ([fdo#109271]) +39 similar issues
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-pwrite.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-kbl:          [PASS][71] -> [DMESG-WARN][72] ([i915#180]) +3 similar issues
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-kbl1/igt@kms_hdr@bpc-switch-suspend.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#533]) +1 similar issue
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl7/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
    - shard-glk:          NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#533])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk1/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
    - shard-kbl:          NOTRUN -> [SKIP][75] ([fdo#109271] / [i915#533])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl4/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][76] ([fdo#108145] / [i915#265]) +2 similar issues
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl6/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][77] ([i915#265])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl2/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html

  * igt@kms_plane_multiple@atomic-pipe-c-tiling-yf:
    - shard-tglb:         NOTRUN -> [SKIP][78] ([fdo#112054])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@kms_plane_multiple@atomic-pipe-c-tiling-yf.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1:
    - shard-apl:          NOTRUN -> [SKIP][79] ([fdo#109271] / [i915#658]) +5 similar issues
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-1:
    - shard-tglb:         NOTRUN -> [SKIP][80] ([i915#2920])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb6/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-kbl:          NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#658])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-glk:          NOTRUN -> [SKIP][82] ([fdo#109271] / [i915#658])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk9/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html
    - shard-iclb:         NOTRUN -> [SKIP][83] ([i915#658])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb5/igt@kms_psr2_sf@plane-move-sf-dmg-area-1.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         NOTRUN -> [SKIP][84] ([fdo#109441])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb4/igt@kms_psr@psr2_cursor_plane_move.html
    - shard-tglb:         NOTRUN -> [FAIL][85] ([i915#132] / [i915#3467])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         [PASS][86] -> [SKIP][87] ([fdo#109441]) +2 similar issues
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb2/igt@kms_psr@psr2_cursor_render.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb7/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_setmode@basic:
    - shard-snb:          NOTRUN -> [FAIL][88] ([i915#31])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb6/igt@kms_setmode@basic.html

  * igt@nouveau_crc@pipe-a-source-outp-complete:
    - shard-tglb:         NOTRUN -> [SKIP][89] ([i915#2530])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb2/igt@nouveau_crc@pipe-a-source-outp-complete.html
    - shard-iclb:         NOTRUN -> [SKIP][90] ([i915#2530])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb5/igt@nouveau_crc@pipe-a-source-outp-complete.html

  * igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame:
    - shard-apl:          NOTRUN -> [SKIP][91] ([fdo#109271]) +303 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@nouveau_crc@pipe-b-ctx-flip-skip-current-frame.html

  * igt@prime_nv_api@nv_self_import_to_different_fd:
    - shard-tglb:         NOTRUN -> [SKIP][92] ([fdo#109291])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb7/igt@prime_nv_api@nv_self_import_to_different_fd.html

  * igt@sysfs_clients@pidname:
    - shard-iclb:         NOTRUN -> [SKIP][93] ([i915#2994])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb3/igt@sysfs_clients@pidname.html
    - shard-glk:          NOTRUN -> [SKIP][94] ([fdo#109271] / [i915#2994])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk1/igt@sysfs_clients@pidname.html
    - shard-tglb:         NOTRUN -> [SKIP][95] ([i915#2994])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-tglb7/igt@sysfs_clients@pidname.html

  * igt@sysfs_clients@sema-50:
    - shard-apl:          NOTRUN -> [SKIP][96] ([fdo#109271] / [i915#2994]) +2 similar issues
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl6/igt@sysfs_clients@sema-50.html

  
#### Possible fixes ####

  * igt@gem_eio@unwedge-stress:
    - shard-iclb:         [TIMEOUT][97] ([i915#2369] / [i915#2481] / [i915#3070]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb4/igt@gem_eio@unwedge-stress.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb8/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - shard-iclb:         [FAIL][99] ([i915#2842]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb2/igt@gem_exec_fair@basic-pace@rcs0.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb1/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_mmap_gtt@cpuset-medium-copy:
    - shard-iclb:         [FAIL][101] ([i915#2428]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb1/igt@gem_mmap_gtt@cpuset-medium-copy.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb6/igt@gem_mmap_gtt@cpuset-medium-copy.html

  * igt@gem_mmap_gtt@cpuset-medium-copy-odd:
    - shard-iclb:         [FAIL][103] ([i915#307]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb4/igt@gem_mmap_gtt@cpuset-medium-copy-odd.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb7/igt@gem_mmap_gtt@cpuset-medium-copy-odd.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
    - shard-glk:          [FAIL][105] ([i915#644]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-glk9/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk9/igt@gem_ppgtt@flink-and-close-vma-leak.html

  * igt@i915_selftest@live@hangcheck:
    - shard-snb:          [INCOMPLETE][107] ([i915#2782]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-snb7/igt@i915_selftest@live@hangcheck.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-snb6/igt@i915_selftest@live@hangcheck.html

  * igt@kms_big_fb@linear-32bpp-rotate-0:
    - shard-glk:          [DMESG-WARN][109] ([i915#118] / [i915#95]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-glk1/igt@kms_big_fb@linear-32bpp-rotate-0.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-glk7/igt@kms_big_fb@linear-32bpp-rotate-0.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-apl:          [DMESG-WARN][111] ([i915#180]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-apl3/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl6/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_psr@psr2_cursor_mmap_cpu:
    - shard-iclb:         [SKIP][113] ([fdo#109441]) -> [PASS][114] +1 similar issue
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb5/igt@kms_psr@psr2_cursor_mmap_cpu.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html

  
#### Warnings ####

  * igt@kms_psr2_sf@cursor-plane-update-sf:
    - shard-iclb:         [SKIP][115] ([i915#2920]) -> [SKIP][116] ([i915#658]) +2 similar issues
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb2/igt@kms_psr2_sf@cursor-plane-update-sf.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb8/igt@kms_psr2_sf@cursor-plane-update-sf.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4:
    - shard-iclb:         [SKIP][117] ([i915#658]) -> [SKIP][118] ([i915#2920]) +1 similar issue
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb3/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-4.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][119], [FAIL][120]) ([i915#2426] / [i915#3002] / [i915#3363]) -> ([FAIL][121], [FAIL][122], [FAIL][123], [FAIL][124], [FAIL][125], [FAIL][126], [FAIL][127]) ([fdo#109271] / [i915#1814] / [i915#2426] / [i915#3002] / [i915#3363] / [i915#602])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-kbl4/igt@runner@aborted.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-kbl4/igt@runner@aborted.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl7/igt@runner@aborted.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@runner@aborted.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@runner@aborted.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl7/igt@runner@aborted.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@runner@aborted.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl7/igt@runner@aborted.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-kbl1/igt@runner@aborted.html
    - shard-iclb:         ([FAIL][128], [FAIL][129]) ([i915#2426] / [i915#3002] / [i915#3690]) -> ([FAIL][130], [FAIL][131], [FAIL][132], [FAIL][133]) ([i915#1814] / [i915#2426] / [i915#3002] / [i915#3690])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb7/igt@runner@aborted.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-iclb1/igt@runner@aborted.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb8/igt@runner@aborted.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb1/igt@runner@aborted.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb5/igt@runner@aborted.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-iclb8/igt@runner@aborted.html
    - shard-apl:          ([FAIL][134], [FAIL][135], [FAIL][136]) ([fdo#109271] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363]) -> ([FAIL][137], [FAIL][138]) ([i915#2426] / [i915#3002] / [i915#3363])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-apl6/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-apl3/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10405/shard-apl7/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/shard-apl3/igt@runner@aborted.html

  
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111719]: https://bugs.freedesktop.org/show_bug.cgi?id=111719
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112054]: https://bugs.freedesktop.org/show_bug.cgi?id=112054
  [i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
  [i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
  [i915#132]: https://gitlab.freedesktop.org/drm/intel/issues/132
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6066/index.html


* Re: [Intel-gfx] [igt-dev] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
@ 2021-07-29 23:54     ` John Harrison
From: John Harrison @ 2021-07-29 23:54 UTC
  To: Matthew Brost, igt-dev; +Cc: intel-gfx

On 7/27/2021 11:20, Matthew Brost wrote:
> With GuC submission, contexts can get reordered (compared to submission
> order). If contexts get reordered, the sequential nature of the batches
> releasing the next batch's semaphore in timesliceN() gets broken,
> resulting in the test taking much longer than it should (e.g. every
> context needs to be timesliced to release the next batch). Corking the
> first submission until all the batches have been submitted should
> ensure submission order.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
>   1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> index f03842478..41f2591a5 100644
> --- a/tests/i915/gem_exec_schedule.c
> +++ b/tests/i915/gem_exec_schedule.c
> @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   	struct drm_i915_gem_execbuffer2 execbuf  = {
>   		.buffers_ptr = to_user_pointer(&obj),
>   		.buffer_count = 1,
> -		.flags = engine | I915_EXEC_FENCE_OUT,
> +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
>   	};
>   	uint32_t *result =
>   		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
>   	const intel_ctx_t *ctx;
>   	int fence[count];
> +	IGT_CORK_FENCE(cork);
>   
>   	/*
>   	 * Create a pair of interlocking batches, that ping pong
> @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   	igt_require(gem_scheduler_has_timeslicing(i915));
>   	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
>   
> +	/*
> +	 * With GuC submission contexts can get reordered (compared to
> +	 * submission order), if contexts get reordered the sequential
> +	 * nature of the batches releasing the next batch's semaphore gets
> +	 * broken resulting in the test taking much longer than it should (e.g.
> +	 * every context needs to be timesliced to release the next batch).
> +	 * Corking the first submission until all batches have been
> +	 * submitted should ensure submission order.
> +	 */
> +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
> +
>   	/* No coupling between requests; free to timeslice */
>   
>   	for (int i = 0; i < count; i++) {
> @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   		intel_ctx_destroy(i915, ctx);
>   
>   		fence[i] = execbuf.rsvd2 >> 32;
> +		execbuf.rsvd2 >>= 32;
This means you are passing fence_out[A] as fence_in[B]? I.e. this patch
is also changing the behaviour to make each batch dependent upon the
previous one. That change is not mentioned in the new comment. It is
also the exact opposite of the comment immediately above the loop - 'No
coupling between requests'.

John.


>   	}
>   
> +	igt_cork_unplug(&cork);
>   	gem_sync(i915, obj.handle);
>   	gem_close(i915, obj.handle);
>   
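
For reference, the rsvd2 field being read above is the i915 execbuf fence
word: with I915_EXEC_FENCE_IN or I915_EXEC_FENCE_SUBMIT the kernel takes
an input fence fd from its lower 32 bits, and with I915_EXEC_FENCE_OUT it
writes the new out-fence fd back into its upper 32 bits. Schematically,
the two lines in question do:

	fence[i] = execbuf.rsvd2 >> 32;	/* save batch i's out-fence for later */
	execbuf.rsvd2 >>= 32;		/* reuse it as batch i+1's submit fence */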



* Re: [Intel-gfx] [igt-dev] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
@ 2021-07-30  0:00       ` Matthew Brost
From: Matthew Brost @ 2021-07-30  0:00 UTC
  To: John Harrison; +Cc: igt-dev, intel-gfx

On Thu, Jul 29, 2021 at 04:54:08PM -0700, John Harrison wrote:
> On 7/27/2021 11:20, Matthew Brost wrote:
> > With GuC submission contexts can get reordered (compared to submission
> > order), if contexts get reordered the sequential nature of the batches
> > releasing the next batch's semaphore in function timesliceN() get broken
> > resulting in the test taking much longer than if should. e.g. Every
> > contexts needs to be timesliced to release the next batch. Corking the
> > first submission until all the batches have been submitted should ensure
> > submission order.
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
> >   1 file changed, 15 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> > index f03842478..41f2591a5 100644
> > --- a/tests/i915/gem_exec_schedule.c
> > +++ b/tests/i915/gem_exec_schedule.c
> > @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   	struct drm_i915_gem_execbuffer2 execbuf  = {
> >   		.buffers_ptr = to_user_pointer(&obj),
> >   		.buffer_count = 1,
> > -		.flags = engine | I915_EXEC_FENCE_OUT,
> > +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
> >   	};
> >   	uint32_t *result =
> >   		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
> >   	const intel_ctx_t *ctx;
> >   	int fence[count];
> > +	IGT_CORK_FENCE(cork);
> >   	/*
> >   	 * Create a pair of interlocking batches, that ping pong
> > @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   	igt_require(gem_scheduler_has_timeslicing(i915));
> >   	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
> > +	/*
> > +	 * With GuC submission contexts can get reordered (compared to
> > +	 * submission order), if contexts get reordered the sequential
> > +	 * nature of the batches releasing the next batch's semaphore gets
> > +	 * broken resulting in the test taking much longer than it should (e.g.
> > +	 * every context needs to be timesliced to release the next batch).
> > +	 * Corking the first submission until all batches have been
> > +	 * submitted should ensure submission order.
> > +	 */
> > +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
> > +
> >   	/* No coupling between requests; free to timeslice */
> >   	for (int i = 0; i < count; i++) {
> > @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   		intel_ctx_destroy(i915, ctx);
> >   		fence[i] = execbuf.rsvd2 >> 32;
> > +		execbuf.rsvd2 >>= 32;
> This means you are passing fence_out[A] as fence_in[B]? I.e. this patch is
> also changing the behaviour to make each batch dependent upon the previous

This is a submission fence; it just ensures they get submitted in order.
Corking the first request, plus chaining the fences, ensures all the
requests get submitted at basically the same time, rather than spread
out over execbuf IOCTL times as they would be without it.
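
For reference, rsvd2 packs both fences: the lower 32 bits carry the
input (submit) fence fd, and with I915_EXEC_FENCE_OUT set the kernel
returns the new out fence fd in the upper 32 bits. So the pattern in
the loop is roughly (a sketch, not the exact test code):

	execbuf.rsvd2 = submit_fence_fd;	/* in fence, lower 32 bits */
	gem_execbuf_wr(i915, &execbuf);		/* fills in the upper 32 bits */
	fence[i] = execbuf.rsvd2 >> 32;		/* out fence of request i */
	execbuf.rsvd2 >>= 32;			/* reuse as the next submit fence */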

> one. That change is not mentioned in the new comment. It is also the exact

Yea, I could explain this better. Will fix.

Matt

> opposite of the comment immediately above the loop - 'No coupling between
> requests'.
> 
> John.
> 
> 
> >   	}
> > +	igt_cork_unplug(&cork);
> >   	gem_sync(i915, obj.handle);
> >   	gem_close(i915, obj.handle);
> 


* Re: [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-07-27 18:20   ` [igt-dev] " Matthew Brost
@ 2021-07-30  9:58   ` Tvrtko Ursulin
  2021-07-30 18:06       ` [igt-dev] " Matthew Brost
  -1 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-07-30  9:58 UTC (permalink / raw)
  To: Matthew Brost, igt-dev; +Cc: intel-gfx


On 27/07/2021 19:20, Matthew Brost wrote:
> With GuC submission contexts can get reordered (compared to submission
> order). If contexts get reordered, the sequential nature of the batches
> releasing the next batch's semaphore in function timesliceN() gets
> broken, resulting in the test taking much longer than it should, e.g.
> every context needs to be timesliced to release the next batch. Corking
> the first submission until all the batches have been submitted should
> ensure submission order.

The explanation sounds suspect.

Consider this comment from the test itself:

	/*
	 * Create a pair of interlocking batches, that ping pong
	 * between each other, and only advance one step at a time.
	 * We require the kernel to preempt at each semaphore and
	 * switch to the other batch in order to advance.
	 */

I'd say the test does not rely on no re-ordering at all, but relies on 
context switch on an unsatisfied semaphore.

In the commit you seem to acknowledge GuC does not do that but instead 
ends up waiting for the timeslice to expire, did I get that right? If 
so, why does the GuC not do that, and can we fix it?

Regards,

Tvrtko

> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
>   1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> index f03842478..41f2591a5 100644
> --- a/tests/i915/gem_exec_schedule.c
> +++ b/tests/i915/gem_exec_schedule.c
> @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   	struct drm_i915_gem_execbuffer2 execbuf  = {
>   		.buffers_ptr = to_user_pointer(&obj),
>   		.buffer_count = 1,
> -		.flags = engine | I915_EXEC_FENCE_OUT,
> +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
>   	};
>   	uint32_t *result =
>   		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
>   	const intel_ctx_t *ctx;
>   	int fence[count];
> +	IGT_CORK_FENCE(cork);
>   
>   	/*
>   	 * Create a pair of interlocking batches, that ping pong
> @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   	igt_require(gem_scheduler_has_timeslicing(i915));
>   	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
>   
> +	/*
> +	 * With GuC submission contexts can get reordered (compared to
> +	 * submission order), if contexts get reordered the sequential
> +	 * nature of the batches releasing the next batch's semaphore gets
> +	 * broken resulting in the test taking much longer than it should (e.g.
> +	 * every context needs to be timesliced to release the next batch).
> +	 * Corking the first submission until all batches have been
> +	 * submitted should ensure submission order.
> +	 */
> +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
> +
>   	/* No coupling between requests; free to timeslice */
>   
>   	for (int i = 0; i < count; i++) {
> @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>   		intel_ctx_destroy(i915, ctx);
>   
>   		fence[i] = execbuf.rsvd2 >> 32;
> +		execbuf.rsvd2 >>= 32;
>   	}
>   
> +	igt_cork_unplug(&cork);
>   	gem_sync(i915, obj.handle);
>   	gem_close(i915, obj.handle);
>   
> 


* Re: [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-07-30  9:58   ` [Intel-gfx] " Tvrtko Ursulin
@ 2021-07-30 18:06       ` Matthew Brost
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2021-07-30 18:06 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: igt-dev, intel-gfx

On Fri, Jul 30, 2021 at 10:58:38AM +0100, Tvrtko Ursulin wrote:
> 
> On 27/07/2021 19:20, Matthew Brost wrote:
> > With GuC submission contexts can get reordered (compared to submission
> > order). If contexts get reordered, the sequential nature of the batches
> > releasing the next batch's semaphore in function timesliceN() gets
> > broken, resulting in the test taking much longer than it should, e.g.
> > every context needs to be timesliced to release the next batch. Corking
> > the first submission until all the batches have been submitted should
> > ensure submission order.
> 
> The explanation sounds suspect.
> 
> Consider this comment from the test itself:
> 
> 	/*
> 	 * Create a pair of interlocking batches, that ping pong
> 	 * between each other, and only advance one step at a time.
> 	 * We require the kernel to preempt at each semaphore and
> 	 * switch to the other batch in order to advance.
> 	 */
> 
> I'd say the test does not rely on no re-ordering at all, but relies on
> context switch on an unsatisfied semaphore.
>

Yes, let's do a simple example with 5 batches. Batch 0 releases batch
1's semaphore, batch 1 releases batch 2's semaphore, etc. If the
batches are seen in order, the test should take 40 timeslices (8
semaphores in each batch have to be released, 5 x 8 = 40).

If the batches are in the below order:
0 2 1 3 4

Now we have 72 timeslices. Now imagine 67 batches completely out of
order - the number of timeslices can explode.
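
For reference, each batch advances by spinning on a semaphore and then
releasing the next batch's one, along these lines (a sketch of the
pattern from memory, not the exact test code; step, sema_offset and
next_sema_offset are illustrative names):

	*cs++ = MI_SEMAPHORE_WAIT | MI_SEMAPHORE_POLL |
		MI_SEMAPHORE_SAD_EQ_SDD | (4 - 2);
	*cs++ = step;			/* spin until *sema == step */
	*cs++ = sema_offset;		/* semaphore address, low 32 bits */
	*cs++ = 0;			/* semaphore address, high 32 bits */

	*cs++ = MI_STORE_DWORD_IMM;	/* then release the next batch */
	*cs++ = next_sema_offset;	/* next batch's semaphore, low 32 bits */
	*cs++ = 0;			/* high 32 bits */
	*cs++ = step;			/* value the next batch is spinning on */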

> In the commit you seem to acknowledge GuC does not do that but instead ends
> up waiting for the timeslice to expire, did I get that right? If so, why

I think the GuC waits for the timeslice to expire if a semaphore is
unsatisfied, I have to double-check on that. I thought that was what
execlists were doing too, but I now see it has a convoluted algorithm
to yield the timeslice if a subsequent request comes in while the ring
is waiting on a semaphore. Let me check with the GuC team and see if
they can / are doing something similar. I was thinking the only way to
switch out on a semaphore was to clear CTX_CTRL_INHIBIT_SYN_CTX_SWITCH,
but that appears to be incorrect.

For what it's worth, after this change the run times of the test are
pretty similar for execlists & GuC on TGL.

Matt

> does the GuC not do that, and can we fix it?
> 
> Regards,
> 
> Tvrtko
> 
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
> >   1 file changed, 15 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> > index f03842478..41f2591a5 100644
> > --- a/tests/i915/gem_exec_schedule.c
> > +++ b/tests/i915/gem_exec_schedule.c
> > @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   	struct drm_i915_gem_execbuffer2 execbuf  = {
> >   		.buffers_ptr = to_user_pointer(&obj),
> >   		.buffer_count = 1,
> > -		.flags = engine | I915_EXEC_FENCE_OUT,
> > +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
> >   	};
> >   	uint32_t *result =
> >   		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
> >   	const intel_ctx_t *ctx;
> >   	int fence[count];
> > +	IGT_CORK_FENCE(cork);
> >   	/*
> >   	 * Create a pair of interlocking batches, that ping pong
> > @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   	igt_require(gem_scheduler_has_timeslicing(i915));
> >   	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
> > +	/*
> > +	 * With GuC submission contexts can get reordered (compared to
> > +	 * submission order), if contexts get reordered the sequential
> > +	 * nature of the batches releasing the next batch's semaphore gets
> > +	 * broken resulting in the test taking much longer than it should (e.g.
> > +	 * every context needs to be timesliced to release the next batch).
> > +	 * Corking the first submission until all batches have been
> > +	 * submitted should ensure submission order.
> > +	 */
> > +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
> > +
> >   	/* No coupling between requests; free to timeslice */
> >   	for (int i = 0; i < count; i++) {
> > @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> >   		intel_ctx_destroy(i915, ctx);
> >   		fence[i] = execbuf.rsvd2 >> 32;
> > +		execbuf.rsvd2 >>= 32;
> >   	}
> > +	igt_cork_unplug(&cork);
> >   	gem_sync(i915, obj.handle);
> >   	gem_close(i915, obj.handle);
> > 



* Re: [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-07-30 18:06       ` [igt-dev] " Matthew Brost
@ 2021-08-02  8:59         ` Tvrtko Ursulin
  -1 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-08-02  8:59 UTC (permalink / raw)
  To: Matthew Brost; +Cc: igt-dev, intel-gfx



On 30/07/2021 19:06, Matthew Brost wrote:
> On Fri, Jul 30, 2021 at 10:58:38AM +0100, Tvrtko Ursulin wrote:
>>
>> On 27/07/2021 19:20, Matthew Brost wrote:
>>> With GuC submission contexts can get reordered (compared to submission
>>> order). If contexts get reordered, the sequential nature of the batches
>>> releasing the next batch's semaphore in function timesliceN() gets
>>> broken, resulting in the test taking much longer than it should, e.g.
>>> every context needs to be timesliced to release the next batch. Corking
>>> the first submission until all the batches have been submitted should
>>> ensure submission order.
>>
>> The explanation sounds suspect.
>>
>> Consider this comment from the test itself:
>>
>> 	/*
>> 	 * Create a pair of interlocking batches, that ping pong
>> 	 * between each other, and only advance one step at a time.
>> 	 * We require the kernel to preempt at each semaphore and
>> 	 * switch to the other batch in order to advance.
>> 	 */
>>
>> I'd say the test does not rely on no re-ordering at all, but relies on
>> context switch on an unsatisfied semaphore.
>>
> 
> Yes, let's do a simple example with 5 batches. Batch 0 releases batch
> 1's semaphore, batch 1 releases batch 2's semaphore, etc. If the
> batches are seen in order, the test should take 40 timeslices (8
> semaphores in each batch have to be released, 5 x 8 = 40).
> 
> If the batches are in the below order:
> 0 2 1 3 4
> 
> Now we have 72 timeslices. Now imagine 67 batches completely out of
> order - the number of timeslices can explode.

Yes, that part is clear; the issue is to understand why the GuC is
waiting for the timeslice to expire..

>> In the commit you seem to acknowledge GuC does not do that but instead ends
>> up waiting for the timeslice to expire, did I get that right? If so, why
> 
> I think the GuC waits for the timeslice to expire if a semaphore is
> unsatisfied, I have to double-check on that. I thought that was what
> execlists were doing too, but I now see it has a convoluted algorithm
> to yield the timeslice if a subsequent request comes in while the ring
> is waiting on a semaphore. Let me check with the GuC team and see if
> they can / are doing something similar. I was thinking the only way to
> switch out on a semaphore was to clear CTX_CTRL_INHIBIT_SYN_CTX_SWITCH,
> but that appears to be incorrect.

.. so this will need clarifying with the firmware team.

With execlists we enable and react on GT_WAIT_SEMAPHORE_INTERRUPT. If the GuC
does not, or cannot, do that, that could be worrying, since userspace
can and does use semaphores legitimately and would be made to pay the
timeslice penalty. Well, actually that has an effect on unrelated
clients as well, not just the semaphore user.

> For what it's worth, after this change the run times of the test are
> pretty similar for execlists & GuC on TGL.

Yes, but the test was useful in this case since it found a weakness in
GuC scheduling, so it may not be the best approach to hide that.

Regards,

Tvrtko

> 
> Matt
> 
>> does the GuC not do that, and can we fix it?
>>
>> Regards,
>>
>> Tvrtko
>>
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>>    tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
>>>    1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
>>> index f03842478..41f2591a5 100644
>>> --- a/tests/i915/gem_exec_schedule.c
>>> +++ b/tests/i915/gem_exec_schedule.c
>>> @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    	struct drm_i915_gem_execbuffer2 execbuf  = {
>>>    		.buffers_ptr = to_user_pointer(&obj),
>>>    		.buffer_count = 1,
>>> -		.flags = engine | I915_EXEC_FENCE_OUT,
>>> +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
>>>    	};
>>>    	uint32_t *result =
>>>    		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
>>>    	const intel_ctx_t *ctx;
>>>    	int fence[count];
>>> +	IGT_CORK_FENCE(cork);
>>>    	/*
>>>    	 * Create a pair of interlocking batches, that ping pong
>>> @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    	igt_require(gem_scheduler_has_timeslicing(i915));
>>>    	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
>>> +	/*
>>> +	 * With GuC submission contexts can get reordered (compared to
>>> +	 * submission order), if contexts get reordered the sequential
>>> +	 * nature of the batches releasing the next batch's semaphore gets
>>> +	 * broken resulting in the test taking much longer than it should (e.g.
>>> +	 * every context needs to be timesliced to release the next batch).
>>> +	 * Corking the first submission until all batches have been
>>> +	 * submitted should ensure submission order.
>>> +	 */
>>> +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
>>> +
>>>    	/* No coupling between requests; free to timeslice */
>>>    	for (int i = 0; i < count; i++) {
>>> @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    		intel_ctx_destroy(i915, ctx);
>>>    		fence[i] = execbuf.rsvd2 >> 32;
>>> +		execbuf.rsvd2 >>= 32;
>>>    	}
>>> +	igt_cork_unplug(&cork);
>>>    	gem_sync(i915, obj.handle);
>>>    	gem_close(i915, obj.handle);
>>>



* Re: [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-08-02  8:59         ` [igt-dev] " Tvrtko Ursulin
@ 2021-08-02 20:10         ` Matthew Brost
  2021-08-03  8:54             ` [igt-dev] " Tvrtko Ursulin
  -1 siblings, 1 reply; 21+ messages in thread
From: Matthew Brost @ 2021-08-02 20:10 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: igt-dev, intel-gfx

On Mon, Aug 02, 2021 at 09:59:01AM +0100, Tvrtko Ursulin wrote:
> 
> 
> On 30/07/2021 19:06, Matthew Brost wrote:
> > On Fri, Jul 30, 2021 at 10:58:38AM +0100, Tvrtko Ursulin wrote:
> > > 
> > > On 27/07/2021 19:20, Matthew Brost wrote:
> > > > With GuC submission contexts can get reordered (compared to submission
> > > > order). If contexts get reordered, the sequential nature of the batches
> > > > releasing the next batch's semaphore in function timesliceN() gets
> > > > broken, resulting in the test taking much longer than it should, e.g.
> > > > every context needs to be timesliced to release the next batch. Corking
> > > > the first submission until all the batches have been submitted should
> > > > ensure submission order.
> > > 
> > > The explanation sounds suspect.
> > > 
> > > Consider this comment from the test itself:
> > > 
> > > 	/*
> > > 	 * Create a pair of interlocking batches, that ping pong
> > > 	 * between each other, and only advance one step at a time.
> > > 	 * We require the kernel to preempt at each semaphore and
> > > 	 * switch to the other batch in order to advance.
> > > 	 */
> > > 
> > > I'd say the test does not rely on no re-ordering at all, but relies on
> > > context switch on an unsatisfied semaphore.
> > > 
> > 
> > Yes, let's do a simple example with 5 batches. Batch 0 releases batch
> > 1's semaphore, batch 1 releases batch 2's semaphore, etc. If the
> > batches are seen in order, the test should take 40 timeslices (8
> > semaphores in each batch have to be released, 5 x 8 = 40).
> > 
> > If the batches are in the below order:
> > 0 2 1 3 4
> > 
> > Now we have 72 timeslices. Now imagine 67 batches completely out of
> > order - the number of timeslices can explode.
> 
> Yes, that part is clear; the issue is to understand why the GuC is
> waiting for the timeslice to expire..
> 
> > > In the commit you seem to acknowledge GuC does not do that but instead ends
> > > up waiting for the timeslice to expire, did I get that right? If so, why
> > 
> > I think the GuC waits for the timeslice to expire if a semaphore is
> > unsatisfied, I have to double-check on that. I thought that was what
> > execlists were doing too, but I now see it has a convoluted algorithm
> > to yield the timeslice if a subsequent request comes in while the ring
> > is waiting on a semaphore. Let me check with the GuC team and see if
> > they can / are doing something similar. I was thinking the only way to
> > switch out on a semaphore was to clear CTX_CTRL_INHIBIT_SYN_CTX_SWITCH,
> > but that appears to be incorrect.
> 
> .. so this will need clarifying with the firmware team.
> 

They do not use the GT_WAIT_SEMAPHORE_INTERRUPT. However, we can clear
CTX_CTRL_INHIBIT_SYN_CTX_SWITCH, which will result in more or less the
same behavior as execlists, but I'm doubtful that is the right
solution. More on that below.

> With execlists we enable and react on GT_WAIT_SEMAPHORE_INTERRUPT. If the GuC

Just because execlists does this doesn't mean it is the spec or is
correct. As far as I can tell, this behavior is yet another thing just
shoehorned into the execlists scheduler without a ton of thought or
input from architecture about what the scheduler should look like or
what the UMDs' actual needs are.

If we change anything related to GuC scheduling there needs to be a clear
need - again saying execlists does this is not an argument. There needs
to be an agreement with architecture, the UMD teams, the i915 team,
possibly the Windows team, and the GuC team before we make any changes.

IMO the correct solution is to use tokens. Have a uAPI interface which
distributes tokens to the UMDs; the i915 clears the context switch
inhibit bit in the LRC if the user opted into tokens, and then
semaphores switch out automatically and get rescheduled when the token
is signaled.
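
Concretely, the LRC side of that would be small, something like the
below in the context image setup (a sketch built on the existing i915
definitions; the opt-in flag itself is hypothetical):

	/* hypothetical per-context opt-in, not an existing uAPI flag */
	if (ctx->uses_semaphore_tokens)
		regs[CTX_CONTEXT_CONTROL] |=
			_MASKED_BIT_DISABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH);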

> does not, or cannot, do that, that could be worrying, since userspace
> can and does use semaphores legitimately and would be made to pay the
> timeslice penalty. Well, actually that has an effect on unrelated
> clients as well, not just the semaphore user.

Not buying this argument. Any user can submit a long running batch that
always uses its full time slice and this affects unrelated clients.

> 
> > For what it's worth, after this change the run times of the test are
> > pretty similar for execlists & GuC on TGL.
> 
> Yes, but the test was useful in this case since it found a weakness in
> GuC scheduling, so it may not be the best approach to hide that.
>

Not a weakness, just a difference.

Matt

> Regards,
> 
> Tvrtko
> 
> > 
> > Matt
> > 
> > > does the GuC not do that, and can we fix it?
> > > 
> > > Regards,
> > > 
> > > Tvrtko
> > > 
> > > > 
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > >    tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
> > > >    1 file changed, 15 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> > > > index f03842478..41f2591a5 100644
> > > > --- a/tests/i915/gem_exec_schedule.c
> > > > +++ b/tests/i915/gem_exec_schedule.c
> > > > @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> > > >    	struct drm_i915_gem_execbuffer2 execbuf  = {
> > > >    		.buffers_ptr = to_user_pointer(&obj),
> > > >    		.buffer_count = 1,
> > > > -		.flags = engine | I915_EXEC_FENCE_OUT,
> > > > +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
> > > >    	};
> > > >    	uint32_t *result =
> > > >    		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
> > > >    	const intel_ctx_t *ctx;
> > > >    	int fence[count];
> > > > +	IGT_CORK_FENCE(cork);
> > > >    	/*
> > > >    	 * Create a pair of interlocking batches, that ping pong
> > > > @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> > > >    	igt_require(gem_scheduler_has_timeslicing(i915));
> > > >    	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
> > > > +	/*
> > > > +	 * With GuC submission contexts can get reordered (compared to
> > > > +	 * submission order), if contexts get reordered the sequential
> > > > +	 * nature of the batches releasing the next batch's semaphore gets
> > > > +	 * broken resulting in the test taking much longer than it should (e.g.
> > > > +	 * every context needs to be timesliced to release the next batch).
> > > > +	 * Corking the first submission until all batches have been
> > > > +	 * submitted should ensure submission order.
> > > > +	 */
> > > > +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
> > > > +
> > > >    	/* No coupling between requests; free to timeslice */
> > > >    	for (int i = 0; i < count; i++) {
> > > > @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
> > > >    		intel_ctx_destroy(i915, ctx);
> > > >    		fence[i] = execbuf.rsvd2 >> 32;
> > > > +		execbuf.rsvd2 >>= 32;
> > > >    	}
> > > > +	igt_cork_unplug(&cork);
> > > >    	gem_sync(i915, obj.handle);
> > > >    	gem_close(i915, obj.handle);
> > > > 


* Re: [Intel-gfx] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-08-02 20:10         ` Matthew Brost
@ 2021-08-03  8:54             ` Tvrtko Ursulin
  0 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-08-03  8:54 UTC (permalink / raw)
  To: Matthew Brost; +Cc: igt-dev, intel-gfx


On 02/08/2021 21:10, Matthew Brost wrote:
> On Mon, Aug 02, 2021 at 09:59:01AM +0100, Tvrtko Ursulin wrote:
>>
>>
>> On 30/07/2021 19:06, Matthew Brost wrote:
>>> On Fri, Jul 30, 2021 at 10:58:38AM +0100, Tvrtko Ursulin wrote:
>>>>
>>>> On 27/07/2021 19:20, Matthew Brost wrote:
>>>>> With GuC submission contexts can get reordered (compared to submission
>>>>> order). If contexts get reordered, the sequential nature of the batches
>>>>> releasing the next batch's semaphore in function timesliceN() gets
>>>>> broken, resulting in the test taking much longer than it should, e.g.
>>>>> every context needs to be timesliced to release the next batch. Corking
>>>>> the first submission until all the batches have been submitted should
>>>>> ensure submission order.
>>>>
>>>> The explanation sounds suspect.
>>>>
>>>> Consider this comment from the test itself:
>>>>
>>>> 	/*
>>>> 	 * Create a pair of interlocking batches, that ping pong
>>>> 	 * between each other, and only advance one step at a time.
>>>> 	 * We require the kernel to preempt at each semaphore and
>>>> 	 * switch to the other batch in order to advance.
>>>> 	 */
>>>>
>>>> I'd say the test does not rely on no re-ordering at all, but relies on
>>>> context switch on an unsatisfied semaphore.
>>>>
>>>
>>> Yes, let's do a simple example with 5 batches. Batch 0 releases batch
>>> 1's semaphore, batch 1 releases batch 2's semaphore, etc. If the
>>> batches are seen in order, the test should take 40 timeslices (8
>>> semaphores in each batch have to be released, 5 x 8 = 40).
>>>
>>> If the batches are in the below order:
>>> 0 2 1 3 4
>>>
>>> Now we have 72 timeslices. Now imagine 67 batches completely out of
>>> order - the number of timeslices can explode.
>>
>> Yes, that part is clear; the issue is to understand why the GuC is
>> waiting for the timeslice to expire..
>>
>>>> In the commit you seem to acknowledge GuC does not do that but instead ends
>>>> up waiting for the timeslice to expire, did I get that right? If so, why
>>>
>>> I think the GuC waits for the timeslice to expire if a semaphore is
>>> unsatisfied, I have to double-check on that. I thought that was what
>>> execlists were doing too, but I now see it has a convoluted algorithm
>>> to yield the timeslice if a subsequent request comes in while the ring
>>> is waiting on a semaphore. Let me check with the GuC team and see if
>>> they can / are doing something similar. I was thinking the only way to
>>> switch out on a semaphore was to clear CTX_CTRL_INHIBIT_SYN_CTX_SWITCH,
>>> but that appears to be incorrect.
>>
>> .. so this will need clarifying with the firmware team.
>>
> 
> They do not use the GT_WAIT_SEMAPHORE_INTERRUPT. However, we can clear
> CTX_CTRL_INHIBIT_SYN_CTX_SWITCH, which will result in more or less the
> same behavior as execlists, but I'm doubtful that is the right
> solution. More on that below.
> 
>> With execlists we enable and react on GT_WAIT_SEMAPHORE_INTERRUPT. If the GuC
> 
> Just because execlists does this doesn't mean it is the spec or is
> correct. As far as I can tell, this behavior is yet another thing just
> shoehorned into the execlists scheduler without a ton of thought or
> input from architecture about what the scheduler should look like or
> what the UMDs' actual needs are.
> 
> If we change anything related to GuC scheduling there needs to be a clear
> need - again saying execlists does this is not an argument. There needs
> to be an agreement with architecture, the UMD teams, the i915 team,
> possibly the Windows team, and the GuC team before we make any changes.
> 
> IMO the correct solution is to use tokens. Have a uAPI interface which
> distributes tokens to the UMDs; the i915 clears the context switch
> inhibit bit in the LRC if the user opted into tokens, and then
> semaphores switch out automatically and get rescheduled when the token
> is signaled.

Tokens are Gen12+, right? The downside of that plan would be what to do
with earlier platforms.

>> does not, or cannot, do that, that could be worrying, since userspace
>> can and does use semaphores legitimately and would be made to pay the
>> timeslice penalty. Well, actually that has an effect on unrelated
>> clients as well, not just the semaphore user.
> 
> Not buying this argument. Any user can submit a long running batch that
> always uses its full time slice and this affects unrelated clients.

To an extent, but it's not the same if that batch is long running due
to some work it's doing, or long running because it sits there waiting
on an unsatisfied semaphore, wasting everyone's time. If nothing else,
because that might not be what the userspace expects.

But yes, you will need to figure out if UMDs benefit from this in 
practical use cases before you can rip this out.

And it will tie back to the thing about tokens and uapi you mention.
(Although I don't immediately see how a hardware "flavour of the day"
feature like tokens makes a good candidate for exposure in the uapi,
especially given their limited nature.)

>>> For what it's worth, after this change the run times of the test are
>>> pretty similar for execlists & GuC on TGL.
>>
>> Yes, but the test was useful in this case since it found a weakness in
>> GuC scheduling, so it may not be the best approach to hide that.
>>
> 
> Not a weakness, just a difference.

Okay, not a weakness; it's just much slower when userspace uses
semaphores. :)

Also, I worry the submit fence works for you in this patch not by ABI
contract but due to implementation details. Probably both in the case
of execlists and GuC. Because all it guarantees as part of its ABI
contract is that request B will not enter the backend before request A.
But the backend is really free to execute them in any order. (Assuming
no other dependencies.) So I think that's the second reason this patch
as-is is not the best choice.

Regards,

Tvrtko



* Re: [Intel-gfx] [igt-dev] [PATCH i-g-t 1/1] i915/gem_scheduler: Ensure submission order in manycontexts
  2021-07-30  0:00       ` Matthew Brost
@ 2021-08-19 23:31         ` John Harrison
  -1 siblings, 0 replies; 21+ messages in thread
From: John Harrison @ 2021-08-19 23:31 UTC (permalink / raw)
  To: Matthew Brost; +Cc: igt-dev, intel-gfx

On 7/29/2021 17:00, Matthew Brost wrote:
> On Thu, Jul 29, 2021 at 04:54:08PM -0700, John Harrison wrote:
>> On 7/27/2021 11:20, Matthew Brost wrote:
>>> With GuC submission contexts can get reordered (compared to submission
>>> order). If contexts get reordered, the sequential nature of the batches
>>> releasing the next batch's semaphore in function timesliceN() gets
>>> broken, resulting in the test taking much longer than it should, e.g.
>>> every context needs to be timesliced to release the next batch. Corking
>>> the first submission until all the batches have been submitted should
>>> ensure submission order.
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>>    tests/i915/gem_exec_schedule.c | 16 +++++++++++++++-
>>>    1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
>>> index f03842478..41f2591a5 100644
>>> --- a/tests/i915/gem_exec_schedule.c
>>> +++ b/tests/i915/gem_exec_schedule.c
>>> @@ -597,12 +597,13 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    	struct drm_i915_gem_execbuffer2 execbuf  = {
>>>    		.buffers_ptr = to_user_pointer(&obj),
>>>    		.buffer_count = 1,
>>> -		.flags = engine | I915_EXEC_FENCE_OUT,
>>> +		.flags = engine | I915_EXEC_FENCE_OUT | I915_EXEC_FENCE_SUBMIT,
>>>    	};
>>>    	uint32_t *result =
>>>    		gem_mmap__device_coherent(i915, obj.handle, 0, sz, PROT_READ);
>>>    	const intel_ctx_t *ctx;
>>>    	int fence[count];
>>> +	IGT_CORK_FENCE(cork);
>>>    	/*
>>>    	 * Create a pair of interlocking batches, that ping pong
>>> @@ -614,6 +615,17 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    	igt_require(gem_scheduler_has_timeslicing(i915));
>>>    	igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8);
>>> +	/*
>>> +	 * With GuC submission contexts can get reordered (compared to
>>> +	 * submission order), if contexts get reordered the sequential
>>> +	 * nature of the batches releasing the next batch's semaphore gets
>>> +	 * broken resulting in the test taking much longer than it should (e.g.
>>> +	 * every context needs to be timesliced to release the next batch).
>>> +	 * Corking the first submission until all batches have been
>>> +	 * submitted should ensure submission order.
>>> +	 */
>>> +	execbuf.rsvd2 = igt_cork_plug(&cork, i915);
>>> +
>>>    	/* No coupling between requests; free to timeslice */
>>>    	for (int i = 0; i < count; i++) {
>>> @@ -624,8 +636,10 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg,
>>>    		intel_ctx_destroy(i915, ctx);
>>>    		fence[i] = execbuf.rsvd2 >> 32;
>>> +		execbuf.rsvd2 >>= 32;
>> This means you are passing fence_out[A] as fence_in[B]? I.e. this patch is
>> also changing the behaviour to make each batch dependent upon the previous
> This is a submission fence; it just ensures they get submitted in order.
> Corking the first request, plus chaining the fences, ensures all the
> requests get submitted at basically the same time, rather than spread
> out over execbuf IOCTL times as they would be without it.
The input side is the submit fence, but the output side is the 
completion fence. You are chaining the out fence of the previous request 
as the submit fence of the next request.

Loop 0:
   execbuf.rsvd2 = cork
   submit()
       execbuf.rsvd2 is now the out fence in the upper 32
   fence[0] = execbuf.rsvd2 >> 32;
   execbuf.rsvd2 >>= 32;
       move new out fence to be the next in fence

Loop 1:
   execbuf.rsvd2 == fence[0]
   submit()
   fence[1] = new out fence

Loop 2:
   execbuf.rsvd2 == fence[1]
   ...


You have changed the parallel requests into a sequential chain. Each
request is now waiting for the previous request to *complete* before it
can be submitted. Only the first request is waiting on the cork.
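
What I would have expected is every request gated on the same cork
fence, something like the below (a sketch, not the posted patch;
context setup elided):

	int cork_fence = igt_cork_plug(&cork, i915);

	for (int i = 0; i < count; i++) {
		execbuf.rsvd2 = cork_fence;	/* same submit fence for all */
		gem_execbuf_wr(i915, &execbuf);
		fence[i] = execbuf.rsvd2 >> 32;	/* keep only the out fence */
	}

	igt_cork_unplug(&cork);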

John.

>> one. That change is not mentioned in the new comment. It is also the exact
> Yea, I could explain this better. Will fix.
>
> Matt
>
>> opposite of the comment immediately above the loop - 'No coupling between
>> requests'.
>>
>> John.
>>
>>
>>>    	}
>>> +	igt_cork_unplug(&cork);
>>>    	gem_sync(i915, obj.handle);
>>>    	gem_close(i915, obj.handle);



