* [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission
@ 2021-07-29  0:33 Matthew Brost
  2021-07-29  0:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context Matthew Brost
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Matthew Brost @ 2021-07-29  0:33 UTC (permalink / raw)
  To: intel-gfx, dri-devel

This should fix the below failures with GuC submission for the following tests:
gem_exec_balancer --r noheartbeat
gem_ctx_persistence --r heartbeat-close

Not going to fix:
gem_ctx_persistence --r heartbeat-many
gem_ctx_persistence --r heartbeat-stop

The above tests change the heartbeat value to 0 (off) after the
context is closed, and we have no way to detect that with GuC submission
unless we keep a list of closed but still-running contexts, which seems
like overkill for a non-real-world use case. We should likely just skip
these tests with GuC submission.
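For reference, the close-time decision the patch implements can be modeled
as below. This is a minimal standalone sketch, not the actual i915 code;
the names (engine_has_heartbeat, should_ban_on_close) are illustrative
stand-ins for the real helpers:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for an engine: only the heartbeat interval matters here.
 * An interval of 0 means the heartbeat is disabled. */
struct engine {
	unsigned long heartbeat_interval_ms;
};

static bool engine_has_heartbeat(const struct engine *e)
{
	return e->heartbeat_interval_ms != 0;
}

/* On context close, ban immediately when either a ban was requested
 * (non-persistent context) or the engine has no heartbeat that could
 * eventually detect and kill the context. */
static bool should_ban_on_close(const struct engine *e, bool ban_requested)
{
	return ban_requested || !engine_has_heartbeat(e);
}
```

The point is that the check happens at close time; once the context is
closed, a later heartbeat change is invisible to this scheme, which is
why heartbeat-many and heartbeat-stop stay unfixed.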

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Matthew Brost (1):
  drm/i915: Check if engine has heartbeat when closing a context

 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
 drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
 .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
 6 files changed, 26 insertions(+), 24 deletions(-)

-- 
2.28.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-29  0:33 [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission Matthew Brost
@ 2021-07-29  0:34 ` Matthew Brost
  2021-07-30  0:13   ` John Harrison
  2021-07-29  2:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for Fix gem_ctx_persistence failures with GuC submission Patchwork
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Matthew Brost @ 2021-07-29  0:34 UTC (permalink / raw)
  To: intel-gfx, dri-devel

If an engine associated with a context does not have a heartbeat, ban it
immediately. This is needed for GuC submission, as an idle pulse doesn't
kick the context off the hardware where it could then check for a
heartbeat and ban the context.

This patch also updates intel_engine_has_heartbeat to be a vfunc, as we
now need to call this function on execlists virtual engines too.
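The dispatch scheme can be sketched standalone as below, with simplified
stand-in types rather than the real intel_engine_cs/intel_context_ops
structures: physical engines read their heartbeat interval directly,
while virtual engines supply a has_heartbeat op that checks each sibling:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct engine;

/* Simplified stand-in for intel_context_ops: virtual engines supply
 * a has_heartbeat op, physical engines leave it NULL. */
struct context_ops {
	bool (*has_heartbeat)(const struct engine *engine);
};

struct engine {
	const struct context_ops *cops;
	unsigned long heartbeat_interval_ms; /* physical engines; 0 == off */
	const struct engine *siblings[4];    /* virtual engines */
	unsigned int num_siblings;
};

static bool engine_has_heartbeat(const struct engine *engine)
{
	/* Dispatch through the op when one is provided (virtual engine),
	 * otherwise fall back to the physical engine's interval. */
	if (engine->cops && engine->cops->has_heartbeat)
		return engine->cops->has_heartbeat(engine);
	return engine->heartbeat_interval_ms != 0;
}

/* Virtual engine op: a heartbeat on any sibling counts. */
static bool virtual_has_heartbeat(const struct engine *ve)
{
	unsigned int i;

	for (i = 0; i < ve->num_siblings; i++)
		if (ve->siblings[i]->heartbeat_interval_ms != 0)
			return true;
	return false;
}

static const struct context_ops virtual_ops = {
	.has_heartbeat = virtual_has_heartbeat,
};
```

This mirrors the shape of the diff below: the execlists and GuC backends
each install their own has_heartbeat op on their virtual context ops.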

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
 drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
 .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
 .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
 6 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 9c3672bac0e2..b8e01c5ba9e5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -1090,8 +1090,9 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban)
 	 */
 	for_each_gem_engine(ce, engines, it) {
 		struct intel_engine_cs *engine;
+		bool local_ban = ban || !intel_engine_has_heartbeat(ce->engine);
 
-		if (ban && intel_context_ban(ce, NULL))
+		if (local_ban && intel_context_ban(ce, NULL))
 			continue;
 
 		/*
@@ -1104,7 +1105,7 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban)
 		engine = active_engine(ce);
 
 		/* First attempt to gracefully cancel the context */
-		if (engine && !__cancel_engine(engine) && ban)
+		if (engine && !__cancel_engine(engine) && local_ban)
 			/*
 			 * If we are unable to send a preemptive pulse to bump
 			 * the context from the GPU, we have to resort to a full
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index e54351a170e2..65f2eb2a78e4 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -55,6 +55,8 @@ struct intel_context_ops {
 	void (*reset)(struct intel_context *ce);
 	void (*destroy)(struct kref *kref);
 
+	bool (*has_heartbeat)(const struct intel_engine_cs *engine);
+
 	/* virtual engine/context interface */
 	struct intel_context *(*create_virtual)(struct intel_engine_cs **engine,
 						unsigned int count);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index c2a5640ae055..1b11a808acc4 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -283,28 +283,11 @@ struct intel_context *
 intel_engine_create_virtual(struct intel_engine_cs **siblings,
 			    unsigned int count);
 
-static inline bool
-intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine)
-{
-	/*
-	 * For non-GuC submission we expect the back-end to look at the
-	 * heartbeat status of the actual physical engine that the work
-	 * has been (or is being) scheduled on, so we should only reach
-	 * here with GuC submission enabled.
-	 */
-	GEM_BUG_ON(!intel_engine_uses_guc(engine));
-
-	return intel_guc_virtual_engine_has_heartbeat(engine);
-}
-
 static inline bool
 intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
 {
-	if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
-		return false;
-
-	if (intel_engine_is_virtual(engine))
-		return intel_virtual_engine_has_heartbeat(engine);
+	if (engine->cops->has_heartbeat)
+		return engine->cops->has_heartbeat(engine);
 	else
 		return READ_ONCE(engine->props.heartbeat_interval_ms);
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index de5f9c86b9a4..18005b5546b6 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs *engine, unsigned int sibling)
 	return ve->siblings[sibling];
 }
 
+static bool virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
+{
+	struct intel_engine_cs *engine;
+	intel_engine_mask_t tmp, mask = ve->mask;
+
+	for_each_engine_masked(engine, ve->gt, mask, tmp)
+		if (READ_ONCE(engine->props.heartbeat_interval_ms))
+			return true;
+
+	return false;
+}
+
 static const struct intel_context_ops virtual_context_ops = {
 	.flags = COPS_HAS_INFLIGHT,
 
@@ -3634,6 +3646,8 @@ static const struct intel_context_ops virtual_context_ops = {
 	.enter = virtual_context_enter,
 	.exit = virtual_context_exit,
 
+	.has_heartbeat = virtual_engine_has_heartbeat,
+
 	.destroy = virtual_context_destroy,
 
 	.get_sibling = virtual_get_sibling,
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 89ff0e4b4bc7..ae70bff3605f 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct intel_context *ce)
 	return lrc_alloc(ce, engine);
 }
 
+static bool guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve);
+
 static const struct intel_context_ops virtual_guc_context_ops = {
 	.alloc = guc_virtual_context_alloc,
 
@@ -2183,6 +2185,8 @@ static const struct intel_context_ops virtual_guc_context_ops = {
 	.enter = guc_virtual_context_enter,
 	.exit = guc_virtual_context_exit,
 
+	.has_heartbeat = guc_virtual_engine_has_heartbeat,
+
 	.sched_disable = guc_context_sched_disable,
 
 	.destroy = guc_context_destroy,
@@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count)
 	return ERR_PTR(err);
 }
 
-bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
+static bool guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
 {
 	struct intel_engine_cs *engine;
 	intel_engine_mask_t tmp, mask = ve->mask;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
index c7ef44fa0c36..c2afc3b88fd8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
@@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
 				    struct i915_request *hung_rq,
 				    struct drm_printer *m);
 
-bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve);
-
 int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
 				   atomic_t *wait_var,
 				   bool interruptible,
-- 
2.28.0


* [Intel-gfx] ✓ Fi.CI.BAT: success for Fix gem_ctx_persistence failures with GuC submission
  2021-07-29  0:33 [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission Matthew Brost
  2021-07-29  0:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context Matthew Brost
@ 2021-07-29  2:08 ` Patchwork
  2021-07-29  7:30 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  2021-08-10  6:38 ` [Intel-gfx] [PATCH 0/1] " Daniel Vetter
  3 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-07-29  2:08 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-gfx



== Series Details ==

Series: Fix gem_ctx_persistence failures with GuC submission
URL   : https://patchwork.freedesktop.org/series/93149/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10415 -> Patchwork_20733
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/index.html

Known issues
------------

  Here are the changes found in Patchwork_20733 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@execlists:
    - fi-bsw-nick:        [PASS][1] -> [INCOMPLETE][2] ([i915#2782] / [i915#2940])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/fi-bsw-nick/igt@i915_selftest@live@execlists.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/fi-bsw-nick/igt@i915_selftest@live@execlists.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-kbl-7500u:       [PASS][3] -> [FAIL][4] ([i915#1372])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/fi-kbl-7500u/igt@kms_chamelium@dp-crc-fast.html

  * igt@runner@aborted:
    - fi-bsw-nick:        NOTRUN -> [FAIL][5] ([fdo#109271] / [i915#1436])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/fi-bsw-nick/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@i915_module_load@reload:
    - fi-kbl-soraka:      [DMESG-WARN][6] ([i915#1982]) -> [PASS][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/fi-kbl-soraka/igt@i915_module_load@reload.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/fi-kbl-soraka/igt@i915_module_load@reload.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2782]: https://gitlab.freedesktop.org/drm/intel/issues/2782
  [i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
  [i915#3303]: https://gitlab.freedesktop.org/drm/intel/issues/3303


Participating hosts (43 -> 35)
------------------------------

  Missing    (8): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan bat-adlp-4 fi-ctg-p8600 fi-bdw-samus fi-tgl-y bat-jsl-1 


Build changes
-------------

  * Linux: CI_DRM_10415 -> Patchwork_20733

  CI-20190529: 20190529
  CI_DRM_10415: 457209baa84d04e17ce648a12733a32809717494 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6155: 4b51398dcd7559012b85776e7353d516ff1e6ce6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20733: e420bdd35a038b24172c0d3fcf725d8a04c9f946 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

e420bdd35a03 drm/i915: Check if engine has heartbeat when closing a context

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/index.html



* [Intel-gfx] ✗ Fi.CI.IGT: failure for Fix gem_ctx_persistence failures with GuC submission
  2021-07-29  0:33 [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission Matthew Brost
  2021-07-29  0:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context Matthew Brost
  2021-07-29  2:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for Fix gem_ctx_persistence failures with GuC submission Patchwork
@ 2021-07-29  7:30 ` Patchwork
  2021-08-10  6:38 ` [Intel-gfx] [PATCH 0/1] " Daniel Vetter
  3 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-07-29  7:30 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-gfx



== Series Details ==

Series: Fix gem_ctx_persistence failures with GuC submission
URL   : https://patchwork.freedesktop.org/series/93149/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_10415_full -> Patchwork_20733_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_20733_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_20733_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_20733_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@mock@requests:
    - shard-kbl:          [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-kbl7/igt@i915_selftest@mock@requests.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl3/igt@i915_selftest@mock@requests.html
    - shard-tglb:         [PASS][3] -> [INCOMPLETE][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-tglb6/igt@i915_selftest@mock@requests.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb3/igt@i915_selftest@mock@requests.html
    - shard-apl:          [PASS][5] -> [INCOMPLETE][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-apl6/igt@i915_selftest@mock@requests.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl8/igt@i915_selftest@mock@requests.html
    - shard-glk:          [PASS][7] -> [INCOMPLETE][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk3/igt@i915_selftest@mock@requests.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@i915_selftest@mock@requests.html
    - shard-snb:          NOTRUN -> [INCOMPLETE][9]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb6/igt@i915_selftest@mock@requests.html
    - shard-iclb:         [PASS][10] -> [INCOMPLETE][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb4/igt@i915_selftest@mock@requests.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb6/igt@i915_selftest@mock@requests.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_selftest@mock@requests:
    - {shard-rkl}:        [PASS][12] -> [INCOMPLETE][13]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-2/igt@i915_selftest@mock@requests.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-1/igt@i915_selftest@mock@requests.html

  
Known issues
------------

  Here are the changes found in Patchwork_20733_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@feature_discovery@display-3x:
    - shard-glk:          NOTRUN -> [SKIP][14] ([fdo#109271]) +39 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@feature_discovery@display-3x.html

  * igt@gem_create@create-massive:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][15] ([i915#3002])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl7/igt@gem_create@create-massive.html

  * igt@gem_ctx_persistence@smoketest:
    - shard-snb:          NOTRUN -> [SKIP][16] ([fdo#109271] / [i915#1099]) +4 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb2/igt@gem_ctx_persistence@smoketest.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [PASS][17] -> [FAIL][18] ([i915#2842]) +1 similar issue
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb7/igt@gem_exec_fair@basic-none-share@rcs0.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb4/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-glk:          [PASS][19] -> [FAIL][20] ([i915#2842])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk9/igt@gem_exec_fair@basic-none@rcs0.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk4/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-apl:          NOTRUN -> [FAIL][21] ([i915#2842])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl6/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-iclb:         NOTRUN -> [FAIL][22] ([i915#2842])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace@bcs0:
    - shard-tglb:         [PASS][23] -> [FAIL][24] ([i915#2842]) +1 similar issue
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-tglb5/igt@gem_exec_fair@basic-pace@bcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb2/igt@gem_exec_fair@basic-pace@bcs0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-iclb:         NOTRUN -> [SKIP][25] ([i915#2190])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@gem_huc_copy@huc-copy.html

  * igt@gem_pread@exhaustion:
    - shard-snb:          NOTRUN -> [WARN][26] ([i915#2658])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb7/igt@gem_pread@exhaustion.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-kbl:          NOTRUN -> [WARN][27] ([i915#2658]) +1 similar issue
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl4/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_render_copy@linear-to-vebox-y-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([i915#768])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb3/igt@gem_render_copy@linear-to-vebox-y-tiled.html

  * igt@gem_userptr_blits@create-destroy-unsync:
    - shard-iclb:         NOTRUN -> [SKIP][29] ([i915#3297]) +1 similar issue
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@gem_userptr_blits@create-destroy-unsync.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-skl:          [PASS][30] -> [DMESG-WARN][31] ([i915#1436] / [i915#716])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl9/igt@gen9_exec_parse@allowed-single.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl10/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_pm_backlight@fade_with_suspend:
    - shard-skl:          [PASS][32] -> [INCOMPLETE][33] ([i915#198]) +2 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl7/igt@i915_pm_backlight@fade_with_suspend.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl3/igt@i915_pm_backlight@fade_with_suspend.html

  * igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp:
    - shard-apl:          NOTRUN -> [SKIP][34] ([fdo#109271] / [i915#1937])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl7/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp.html

  * igt@i915_pm_rpm@basic-rte:
    - shard-glk:          NOTRUN -> [FAIL][35] ([i915#579])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@i915_pm_rpm@basic-rte.html

  * igt@i915_pm_rpm@gem-execbuf:
    - shard-iclb:         NOTRUN -> [SKIP][36] ([i915#579]) +1 similar issue
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@i915_pm_rpm@gem-execbuf.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-180:
    - shard-glk:          NOTRUN -> [DMESG-WARN][37] ([i915#118] / [i915#95])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@kms_big_fb@x-tiled-32bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][38] ([fdo#110725] / [fdo#111614]) +1 similar issue
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_big_fb@x-tiled-32bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][39] ([fdo#109271] / [i915#3777])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-apl:          NOTRUN -> [SKIP][40] ([fdo#109271] / [i915#3777]) +2 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-tglb:         NOTRUN -> [SKIP][41] ([fdo#111615])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
    - shard-kbl:          NOTRUN -> [SKIP][42] ([fdo#109271]) +198 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl2/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html

  * igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_ccs:
    - shard-snb:          NOTRUN -> [SKIP][43] ([fdo#109271]) +383 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb6/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_ccs.html

  * igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][44] ([i915#3689])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs.html

  * igt@kms_ccs@pipe-b-crc-primary-basic-y_tiled_gen12_rc_ccs_cc:
    - shard-skl:          NOTRUN -> [SKIP][45] ([fdo#109271]) +4 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl2/igt@kms_ccs@pipe-b-crc-primary-basic-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc:
    - shard-iclb:         NOTRUN -> [SKIP][46] ([fdo#109278]) +12 similar issues
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_chamelium@dp-audio-edid:
    - shard-iclb:         NOTRUN -> [SKIP][47] ([fdo#109284] / [fdo#111827])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_chamelium@dp-audio-edid.html

  * igt@kms_chamelium@dp-crc-multiple:
    - shard-tglb:         NOTRUN -> [SKIP][48] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_chamelium@dp-crc-multiple.html

  * igt@kms_chamelium@hdmi-crc-nonplanar-formats:
    - shard-glk:          NOTRUN -> [SKIP][49] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@kms_chamelium@hdmi-crc-nonplanar-formats.html

  * igt@kms_chamelium@hdmi-mode-timings:
    - shard-snb:          NOTRUN -> [SKIP][50] ([fdo#109271] / [fdo#111827]) +19 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb6/igt@kms_chamelium@hdmi-mode-timings.html

  * igt@kms_chamelium@vga-hpd-for-each-pipe:
    - shard-kbl:          NOTRUN -> [SKIP][51] ([fdo#109271] / [fdo#111827]) +13 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl2/igt@kms_chamelium@vga-hpd-for-each-pipe.html

  * igt@kms_color_chamelium@pipe-a-ctm-0-5:
    - shard-apl:          NOTRUN -> [SKIP][52] ([fdo#109271] / [fdo#111827]) +26 similar issues
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl8/igt@kms_color_chamelium@pipe-a-ctm-0-5.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-apl:          NOTRUN -> [TIMEOUT][53] ([i915#1319])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl6/igt@kms_content_protection@atomic-dpms.html
    - shard-kbl:          NOTRUN -> [TIMEOUT][54] ([i915#1319])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl4/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@dp-mst-type-1:
    - shard-tglb:         NOTRUN -> [SKIP][55] ([i915#3116])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_content_protection@dp-mst-type-1.html

  * igt@kms_content_protection@uevent:
    - shard-apl:          NOTRUN -> [FAIL][56] ([i915#2105])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl8/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-c-cursor-512x170-random:
    - shard-iclb:         NOTRUN -> [SKIP][57] ([fdo#109278] / [fdo#109279])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_cursor_crc@pipe-c-cursor-512x170-random.html

  * igt@kms_cursor_crc@pipe-c-cursor-max-size-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][58] ([i915#3359]) +4 similar issues
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_cursor_crc@pipe-c-cursor-max-size-sliding.html

  * igt@kms_cursor_crc@pipe-d-cursor-512x170-offscreen:
    - shard-tglb:         NOTRUN -> [SKIP][59] ([fdo#109279] / [i915#3359])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_cursor_crc@pipe-d-cursor-512x170-offscreen.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [PASS][60] -> [FAIL][61] ([i915#2346])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl3/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_dp_tiled_display@basic-test-pattern:
    - shard-tglb:         NOTRUN -> [SKIP][62] ([i915#426])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_dp_tiled_display@basic-test-pattern.html

  * igt@kms_flip@2x-dpms-vs-vblank-race:
    - shard-iclb:         NOTRUN -> [SKIP][63] ([fdo#109274])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb3/igt@kms_flip@2x-dpms-vs-vblank-race.html

  * igt@kms_flip@2x-plain-flip-ts-check-interruptible@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][64] -> [FAIL][65] ([i915#2122])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk2/igt@kms_flip@2x-plain-flip-ts-check-interruptible@ab-hdmi-a1-hdmi-a2.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk8/igt@kms_flip@2x-plain-flip-ts-check-interruptible@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@flip-vs-expired-vblank@c-edp1:
    - shard-skl:          [PASS][66] -> [FAIL][67] ([i915#79])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl5/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl7/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html

  * igt@kms_flip@flip-vs-suspend@a-dp1:
    - shard-apl:          [PASS][68] -> [DMESG-WARN][69] ([i915#180])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-apl1/igt@kms_flip@flip-vs-suspend@a-dp1.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl1/igt@kms_flip@flip-vs-suspend@a-dp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1:
    - shard-skl:          [PASS][70] -> [FAIL][71] ([i915#2122]) +1 similar issue
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl6/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl10/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-edp1.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-mmap-wc:
    - shard-tglb:         NOTRUN -> [SKIP][72] ([fdo#111825]) +1 similar issue
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-indfb-draw-render:
    - shard-iclb:         NOTRUN -> [SKIP][73] ([fdo#109280]) +7 similar issues
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-indfb-draw-render.html

  * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence:
    - shard-kbl:          NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#533])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl1/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-7efc:
    - shard-apl:          NOTRUN -> [FAIL][75] ([fdo#108145] / [i915#265]) +2 similar issues
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl7/igt@kms_plane_alpha_blend@pipe-b-alpha-7efc.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-glk:          NOTRUN -> [FAIL][76] ([i915#265])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
    - shard-kbl:          NOTRUN -> [FAIL][77] ([i915#265])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl6/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max:
    - shard-kbl:          NOTRUN -> [FAIL][78] ([fdo#108145] / [i915#265])
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl4/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [PASS][79] -> [FAIL][80] ([fdo#108145] / [i915#265])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl1/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl4/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-vs-premult-vs-constant:
    - shard-iclb:         [PASS][81] -> [SKIP][82] ([fdo#109278]) +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb4/igt@kms_plane_alpha_blend@pipe-c-coverage-vs-premult-vs-constant.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb2/igt@kms_plane_alpha_blend@pipe-c-coverage-vs-premult-vs-constant.html

  * igt@kms_plane_lowres@pipe-a-tiling-x:
    - shard-iclb:         NOTRUN -> [SKIP][83] ([i915#3536])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_plane_lowres@pipe-a-tiling-x.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1:
    - shard-glk:          NOTRUN -> [SKIP][84] ([fdo#109271] / [i915#658])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5:
    - shard-iclb:         NOTRUN -> [SKIP][85] ([i915#658])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4:
    - shard-apl:          NOTRUN -> [SKIP][86] ([fdo#109271] / [i915#658]) +7 similar issues
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl6/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5:
    - shard-kbl:          NOTRUN -> [SKIP][87] ([fdo#109271] / [i915#658]) +3 similar issues
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl4/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-5.html

  * igt@kms_psr@psr2_cursor_blt:
    - shard-iclb:         NOTRUN -> [SKIP][88] ([fdo#109441])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb3/igt@kms_psr@psr2_cursor_blt.html
    - shard-tglb:         NOTRUN -> [FAIL][89] ([i915#132] / [i915#3467])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@kms_psr@psr2_cursor_blt.html

  * igt@kms_psr@psr2_suspend:
    - shard-iclb:         [PASS][90] -> [SKIP][91] ([fdo#109441]) +2 similar issues
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb2/igt@kms_psr@psr2_suspend.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb7/igt@kms_psr@psr2_suspend.html

  * igt@kms_setmode@basic:
    - shard-snb:          NOTRUN -> [FAIL][92] ([i915#31])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-snb2/igt@kms_setmode@basic.html

  * igt@kms_vblank@pipe-d-ts-continuation-idle:
    - shard-apl:          NOTRUN -> [SKIP][93] ([fdo#109271]) +362 similar issues
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl7/igt@kms_vblank@pipe-d-ts-continuation-idle.html

  * igt@kms_vblank@pipe-d-wait-idle:
    - shard-apl:          NOTRUN -> [SKIP][94] ([fdo#109271] / [i915#533]) +3 similar issues
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl6/igt@kms_vblank@pipe-d-wait-idle.html

  * igt@kms_writeback@writeback-fb-id:
    - shard-apl:          NOTRUN -> [SKIP][95] ([fdo#109271] / [i915#2437])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl2/igt@kms_writeback@writeback-fb-id.html

  * igt@prime_nv_api@i915_nv_double_export:
    - shard-iclb:         NOTRUN -> [SKIP][96] ([fdo#109291]) +1 similar issue
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@prime_nv_api@i915_nv_double_export.html

  * igt@prime_nv_pcopy@test3_3:
    - shard-tglb:         NOTRUN -> [SKIP][97] ([fdo#109291])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@prime_nv_pcopy@test3_3.html

  * igt@sysfs_clients@fair-7:
    - shard-apl:          NOTRUN -> [SKIP][98] ([fdo#109271] / [i915#2994]) +7 similar issues
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-apl1/igt@sysfs_clients@fair-7.html

  * igt@sysfs_clients@sema-25:
    - shard-glk:          NOTRUN -> [SKIP][99] ([fdo#109271] / [i915#2994])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk6/igt@sysfs_clients@sema-25.html

  * igt@sysfs_clients@sema-50:
    - shard-iclb:         NOTRUN -> [SKIP][100] ([i915#2994])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@sysfs_clients@sema-50.html

  * igt@sysfs_clients@split-50:
    - shard-kbl:          NOTRUN -> [SKIP][101] ([fdo#109271] / [i915#2994]) +2 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl2/igt@sysfs_clients@split-50.html

  
#### Possible fixes ####

  * igt@fbdev@write:
    - {shard-rkl}:        [SKIP][102] ([i915#2582]) -> [PASS][103]
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@fbdev@write.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@fbdev@write.html

  * igt@gem_ctx_persistence@many-contexts:
    - shard-tglb:         [FAIL][104] ([i915#2410]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-tglb7/igt@gem_ctx_persistence@many-contexts.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb1/igt@gem_ctx_persistence@many-contexts.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [FAIL][106] ([i915#2842]) -> [PASS][107] +1 similar issue
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk7/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk9/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - {shard-rkl}:        [FAIL][108] ([i915#2842]) -> [PASS][109] +3 similar issues
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@gem_exec_fair@basic-pace@rcs0.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-5/igt@gem_exec_fair@basic-pace@rcs0.html
    - shard-tglb:         [FAIL][110] ([i915#2842]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-tglb5/igt@gem_exec_fair@basic-pace@rcs0.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-tglb2/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs0:
    - shard-kbl:          [SKIP][112] ([fdo#109271]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-kbl1/igt@gem_exec_fair@basic-pace@vcs0.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-kbl3/igt@gem_exec_fair@basic-pace@vcs0.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-iclb:         [FAIL][114] ([i915#2842]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb3/igt@gem_exec_fair@basic-pace@vecs0.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb8/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_whisper@basic-fds:
    - shard-glk:          [DMESG-WARN][116] ([i915#118] / [i915#95]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk8/igt@gem_exec_whisper@basic-fds.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk1/igt@gem_exec_whisper@basic-fds.html

  * igt@gem_mmap_gtt@cpuset-big-copy:
    - shard-glk:          [FAIL][118] ([i915#1888] / [i915#307]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-glk2/igt@gem_mmap_gtt@cpuset-big-copy.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-glk8/igt@gem_mmap_gtt@cpuset-big-copy.html

  * igt@gem_workarounds@suspend-resume-context:
    - {shard-rkl}:        [FAIL][120] ([fdo#103375]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-6/igt@gem_workarounds@suspend-resume-context.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-5/igt@gem_workarounds@suspend-resume-context.html

  * igt@i915_pm_dc@dc5-psr:
    - {shard-rkl}:        [SKIP][122] ([i915#658]) -> [PASS][123]
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@i915_pm_dc@dc5-psr.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@i915_pm_dc@dc5-psr.html

  * igt@i915_suspend@debugfs-reader:
    - shard-iclb:         [INCOMPLETE][124] ([i915#1185]) -> [PASS][125]
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-iclb3/igt@i915_suspend@debugfs-reader.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-iclb1/igt@i915_suspend@debugfs-reader.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - {shard-rkl}:        [SKIP][126] ([i915#3721]) -> [PASS][127] +2 similar issues
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc:
    - {shard-rkl}:        [FAIL][128] ([i915#3678]) -> [PASS][129] +1 similar issue
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_color@pipe-b-ctm-0-75:
    - {shard-rkl}:        [SKIP][130] ([i915#1149] / [i915#1849]) -> [PASS][131] +2 similar issues
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_color@pipe-b-ctm-0-75.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@kms_color@pipe-b-ctm-0-75.html

  * igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen:
    - {shard-rkl}:        [SKIP][132] ([fdo#112022]) -> [PASS][133] +6 similar issues
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@kms_cursor_crc@pipe-c-cursor-256x85-offscreen.html

  * igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
    - {shard-rkl}:        [SKIP][134] ([fdo#111825]) -> [PASS][135] +3 similar issues
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-rkl-6/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-skl:          [FAIL][136] ([i915#2346] / [i915#533]) -> [PASS][137]
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-skl4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_draw_crc@draw-method-rgb565-blt-ytiled:
    - {shard-rkl}:        [SKIP][138] ([fdo#111314]) -> [PASS][139] +3 similar issues
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10415/shard-rkl-1/igt@kms_draw_crc@draw-method-rgb565-blt-ytiled.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/shar

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20733/index.html

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-29  0:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context Matthew Brost
@ 2021-07-30  0:13   ` John Harrison
  2021-07-30  9:49     ` Tvrtko Ursulin
  0 siblings, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-07-30  0:13 UTC (permalink / raw)
  To: Matthew Brost, intel-gfx, dri-devel

On 7/28/2021 17:34, Matthew Brost wrote:
> If an engine associated with a context does not have a heartbeat, ban it
> immediately. This is needed for GuC submission as an idle pulse doesn't
> kick the context off the hardware where it then can check for a
> heartbeat and ban the context.
It's worse than this. If the engine in question is an individual 
physical engine then sending a pulse (with sufficiently high priority) 
will pre-empt the engine and kick the context off. However, the GuC 
scheduler does not have hacks in it to check the state of the heartbeat 
or whether a context is actually a zombie or not. Thus, the context will 
get resubmitted to the hardware after the pulse completes and 
effectively nothing will have happened.

I would assume that the DRM scheduler which we are meant to be switching 
to for execlist as well as GuC submission is also unlikely to have hacks 
for zombie contexts and tests for whether the i915 specific heartbeat 
has been disabled since the context became a zombie. So when that switch 
happens, this test will also fail in execlist mode as well as GuC mode.

The choices I see here are to simply remove persistence completely (it 
> is basically a bug that became UAPI because it wasn't caught soon 
enough!) or to implement it in a way that does not require hacks in the 
back end scheduler. Apparently, the DRM scheduler is expected to allow 
zombie contexts to persist until the DRM file handle is closed. So 
presumably we will have to go with option two.

That means flagging a context as being a zombie when it is closed but 
still active. The driver would then add it to a zombie list owned by the 
DRM client object. When that client object is closed, i915 would go 
through the list and genuinely kill all the contexts. No back end 
scheduler hacks required and no intimate knowledge of the i915 heartbeat 
mechanism required either.
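The proposed flow can be sketched as a minimal user-space model (all names here — ctx, drm_client, zombie_link, ctx_close — are illustrative stand-ins, not the real i915 API; the real implementation would of course live in i915_gem_context.c and use kernel list primitives):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the zombie-list scheme described above. */

struct ctx {
	bool active;		/* still has work on the GPU */
	bool killed;
	struct ctx *zombie_link;	/* next entry in the client's zombie list */
};

struct drm_client {
	struct ctx *zombies;	/* contexts closed while still active */
};

/* Context close: if the context is still active, park it on the
 * client's zombie list instead of killing it immediately. */
static void ctx_close(struct drm_client *client, struct ctx *c)
{
	if (c->active) {
		c->zombie_link = client->zombies;
		client->zombies = c;
	} else {
		c->killed = true;
	}
}

/* DRM file-handle close: now genuinely kill everything that persisted.
 * No back-end scheduler involvement and no heartbeat knowledge needed. */
static void client_close(struct drm_client *client)
{
	struct ctx *c;

	for (c = client->zombies; c; c = c->zombie_link)
		c->killed = true;
	client->zombies = NULL;
}
```

The point of the sketch is only the ownership transfer: persistence is bounded by the lifetime of the client object rather than by scheduler hacks.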

John.


>
> This patch also updates intel_engine_has_heartbeat to be a vfunc as we
> now need to call this function on execlists virtual engines too.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>   6 files changed, 26 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 9c3672bac0e2..b8e01c5ba9e5 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -1090,8 +1090,9 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban)
>   	 */
>   	for_each_gem_engine(ce, engines, it) {
>   		struct intel_engine_cs *engine;
> +		bool local_ban = ban || !intel_engine_has_heartbeat(ce->engine);
>   
> -		if (ban && intel_context_ban(ce, NULL))
> +		if (local_ban && intel_context_ban(ce, NULL))
>   			continue;
>   
>   		/*
> @@ -1104,7 +1105,7 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban)
>   		engine = active_engine(ce);
>   
>   		/* First attempt to gracefully cancel the context */
> -		if (engine && !__cancel_engine(engine) && ban)
> +		if (engine && !__cancel_engine(engine) && local_ban)
>   			/*
>   			 * If we are unable to send a preemptive pulse to bump
>   			 * the context from the GPU, we have to resort to a full
> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> index e54351a170e2..65f2eb2a78e4 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> @@ -55,6 +55,8 @@ struct intel_context_ops {
>   	void (*reset)(struct intel_context *ce);
>   	void (*destroy)(struct kref *kref);
>   
> +	bool (*has_heartbeat)(const struct intel_engine_cs *engine);
> +
>   	/* virtual engine/context interface */
>   	struct intel_context *(*create_virtual)(struct intel_engine_cs **engine,
>   						unsigned int count);
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> index c2a5640ae055..1b11a808acc4 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> @@ -283,28 +283,11 @@ struct intel_context *
>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
>   			    unsigned int count);
>   
> -static inline bool
> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine)
> -{
> -	/*
> -	 * For non-GuC submission we expect the back-end to look at the
> -	 * heartbeat status of the actual physical engine that the work
> -	 * has been (or is being) scheduled on, so we should only reach
> -	 * here with GuC submission enabled.
> -	 */
> -	GEM_BUG_ON(!intel_engine_uses_guc(engine));
> -
> -	return intel_guc_virtual_engine_has_heartbeat(engine);
> -}
> -
>   static inline bool
>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>   {
> -	if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
> -		return false;
> -
> -	if (intel_engine_is_virtual(engine))
> -		return intel_virtual_engine_has_heartbeat(engine);
> +	if (engine->cops->has_heartbeat)
> +		return engine->cops->has_heartbeat(engine);
>   	else
>   		return READ_ONCE(engine->props.heartbeat_interval_ms);
>   }
> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> index de5f9c86b9a4..18005b5546b6 100644
> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs *engine, unsigned int sibling)
>   	return ve->siblings[sibling];
>   }
>   
> +static bool virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
> +{
> +	struct intel_engine_cs *engine;
> +	intel_engine_mask_t tmp, mask = ve->mask;
> +
> +	for_each_engine_masked(engine, ve->gt, mask, tmp)
> +		if (READ_ONCE(engine->props.heartbeat_interval_ms))
> +			return true;
> +
> +	return false;
> +}
> +
>   static const struct intel_context_ops virtual_context_ops = {
>   	.flags = COPS_HAS_INFLIGHT,
>   
> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops virtual_context_ops = {
>   	.enter = virtual_context_enter,
>   	.exit = virtual_context_exit,
>   
> +	.has_heartbeat = virtual_engine_has_heartbeat,
> +
>   	.destroy = virtual_context_destroy,
>   
>   	.get_sibling = virtual_get_sibling,
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 89ff0e4b4bc7..ae70bff3605f 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct intel_context *ce)
>   	return lrc_alloc(ce, engine);
>   }
>   
> +static bool guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve);
> +
>   static const struct intel_context_ops virtual_guc_context_ops = {
>   	.alloc = guc_virtual_context_alloc,
>   
> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops virtual_guc_context_ops = {
>   	.enter = guc_virtual_context_enter,
>   	.exit = guc_virtual_context_exit,
>   
> +	.has_heartbeat = guc_virtual_engine_has_heartbeat,
> +
>   	.sched_disable = guc_context_sched_disable,
>   
>   	.destroy = guc_context_destroy,
> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count)
>   	return ERR_PTR(err);
>   }
>   
> -bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
> +static bool guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve)
>   {
>   	struct intel_engine_cs *engine;
>   	intel_engine_mask_t tmp, mask = ve->mask;
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> index c7ef44fa0c36..c2afc3b88fd8 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct intel_engine_cs *engine,
>   				    struct i915_request *hung_rq,
>   				    struct drm_printer *m);
>   
> -bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve);
> -
>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>   				   atomic_t *wait_var,
>   				   bool interruptible,


* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-30  0:13   ` John Harrison
@ 2021-07-30  9:49     ` Tvrtko Ursulin
  2021-07-30 18:13       ` John Harrison
  2021-07-30 18:13       ` Matthew Brost
  0 siblings, 2 replies; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-07-30  9:49 UTC (permalink / raw)
  To: John Harrison, Matthew Brost, intel-gfx, dri-devel


On 30/07/2021 01:13, John Harrison wrote:
> On 7/28/2021 17:34, Matthew Brost wrote:
>> If an engine associated with a context does not have a heartbeat, ban it
>> immediately. This is needed for GuC submission as an idle pulse doesn't
>> kick the context off the hardware where it then can check for a
>> heartbeat and ban the context.

A pulse, that is a request with I915_PRIORITY_BARRIER, does not preempt a 
running normal-priority context?

Why does it matter then whether or not heartbeats are enabled - when 
heartbeat just ends up sending the same engine pulse (eventually, with 
escalating priority)?

> It's worse than this. If the engine in question is an individual 
> physical engine then sending a pulse (with sufficiently high priority) 
> will pre-empt the engine and kick the context off. However, the GuC 

Why is it different for physical vs virtual? Aren't both just 
schedulable contexts with different engine masks as far as GuC is 
concerned? Oh, is it a matter of needing to send pulses to all engines 
which comprise a virtual one?

> scheduler does not have hacks in it to check the state of the heartbeat 
> or whether a context is actually a zombie or not. Thus, the context will 
> get resubmitted to the hardware after the pulse completes and 
> effectively nothing will have happened.
> 
> I would assume that the DRM scheduler which we are meant to be switching 
> to for execlist as well as GuC submission is also unlikely to have hacks 
> for zombie contexts and tests for whether the i915 specific heartbeat 
> has been disabled since the context became a zombie. So when that switch 
> happens, this test will also fail in execlist mode as well as GuC mode.
> 
> The choices I see here are to simply remove persistence completely (it 
> > is basically a bug that became UAPI because it wasn't caught soon 
> enough!) or to implement it in a way that does not require hacks in the 
> back end scheduler. Apparently, the DRM scheduler is expected to allow 
> zombie contexts to persist until the DRM file handle is closed. So 
> presumably we will have to go with option two.
> 
> That means flagging a context as being a zombie when it is closed but 
> still active. The driver would then add it to a zombie list owned by the 
> DRM client object. When that client object is closed, i915 would go 
> through the list and genuinely kill all the contexts. No back end 
> scheduler hacks required and no intimate knowledge of the i915 heartbeat 
> mechanism required either.
> 
> John.
> 
> 
>>
>> This patch also updates intel_engine_has_heartbeat to be a vfunc as we
>> now need to call this function on execlists virtual engines too.
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>   6 files changed, 26 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct i915_gem_engines 
>> *engines, bool ban)
>>        */
>>       for_each_gem_engine(ce, engines, it) {
>>           struct intel_engine_cs *engine;
>> +        bool local_ban = ban || !intel_engine_has_heartbeat(ce->engine);

In any case (pending me understanding what's really going on there), why 
would this check not be in kill_context, which currently does this:

	bool ban = (!i915_gem_context_is_persistent(ctx) ||
		    !ctx->i915->params.enable_hangcheck);
...
		kill_engines(pos, ban);

So whether to ban decision would be consolidated to one place.

In fact, the decision on whether to allow persistence is tied to 
enable_hangcheck, which also drives heartbeat emission. So perhaps one 
part of the correct fix is to extend the above (kill_context) ban 
criteria to include heartbeat values anyway. Otherwise, isn't it a simple 
miss that this check fails to account for heartbeat disablement via sysfs?
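The consolidated ban criteria being suggested could look roughly like the
sketch below. The types are simplified stand-ins and
ctx_engines_have_heartbeat() is a hypothetical helper (it would check
heartbeat_interval_ms across the context's engines); the real decision
lives in kill_context() in i915_gem_context.c:

```c
#include <assert.h>
#include <stdbool.h>

struct params { bool enable_hangcheck; };
struct i915 { struct params params; };

struct ctx {
	bool persistent;
	bool has_heartbeat;	/* per-engine heartbeat check result */
	struct i915 *i915;
};

/* Stand-in for a helper that would walk the context's engines and
 * return true if any of them has a non-zero heartbeat interval. */
static bool ctx_engines_have_heartbeat(const struct ctx *c)
{
	return c->has_heartbeat;
}

/* Ban when the context is non-persistent, hangcheck is disabled, or
 * no engine has a heartbeat left to eventually evict the context —
 * i.e. the heartbeat term is folded into the existing two criteria. */
static bool should_ban(const struct ctx *c)
{
	return !c->persistent ||
	       !c->i915->params.enable_hangcheck ||
	       !ctx_engines_have_heartbeat(c);
}
```

With this shape, the per-engine check in kill_engines() becomes unnecessary and the whole ban decision sits in one place.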

Regards,

Tvrtko

>> -        if (ban && intel_context_ban(ce, NULL))
>> +        if (local_ban && intel_context_ban(ce, NULL))
>>               continue;
>>           /*
>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct i915_gem_engines 
>> *engines, bool ban)
>>           engine = active_engine(ce);
>>           /* First attempt to gracefully cancel the context */
>> -        if (engine && !__cancel_engine(engine) && ban)
>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>               /*
>>                * If we are unable to send a preemptive pulse to bump
>>                * the context from the GPU, we have to resort to a full
>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>> index e54351a170e2..65f2eb2a78e4 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>       void (*reset)(struct intel_context *ce);
>>       void (*destroy)(struct kref *kref);
>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>> +
>>       /* virtual engine/context interface */
>>       struct intel_context *(*create_virtual)(struct intel_engine_cs 
>> **engine,
>>                           unsigned int count);
>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>> index c2a5640ae055..1b11a808acc4 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>> @@ -283,28 +283,11 @@ struct intel_context *
>>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>                   unsigned int count);
>> -static inline bool
>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine)
>> -{
>> -    /*
>> -     * For non-GuC submission we expect the back-end to look at the
>> -     * heartbeat status of the actual physical engine that the work
>> -     * has been (or is being) scheduled on, so we should only reach
>> -     * here with GuC submission enabled.
>> -     */
>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>> -
>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>> -}
>> -
>>   static inline bool
>>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>   {
>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>> -        return false;
>> -
>> -    if (intel_engine_is_virtual(engine))
>> -        return intel_virtual_engine_has_heartbeat(engine);
>> +    if (engine->cops->has_heartbeat)
>> +        return engine->cops->has_heartbeat(engine);
>>       else
>>           return READ_ONCE(engine->props.heartbeat_interval_ms);
>>   }
>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>> index de5f9c86b9a4..18005b5546b6 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs 
>> *engine, unsigned int sibling)
>>       return ve->siblings[sibling];
>>   }
>> +static bool virtual_engine_has_heartbeat(const struct intel_engine_cs 
>> *ve)
>> +{
>> +    struct intel_engine_cs *engine;
>> +    intel_engine_mask_t tmp, mask = ve->mask;
>> +
>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>> +            return true;
>> +
>> +    return false;
>> +}
>> +
>>   static const struct intel_context_ops virtual_context_ops = {
>>       .flags = COPS_HAS_INFLIGHT,
>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops 
>> virtual_context_ops = {
>>       .enter = virtual_context_enter,
>>       .exit = virtual_context_exit,
>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>> +
>>       .destroy = virtual_context_destroy,
>>       .get_sibling = virtual_get_sibling,
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index 89ff0e4b4bc7..ae70bff3605f 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct 
>> intel_context *ce)
>>       return lrc_alloc(ce, engine);
>>   }
>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>> intel_engine_cs *ve);
>> +
>>   static const struct intel_context_ops virtual_guc_context_ops = {
>>       .alloc = guc_virtual_context_alloc,
>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops 
>> virtual_guc_context_ops = {
>>       .enter = guc_virtual_context_enter,
>>       .exit = guc_virtual_context_exit,
>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>> +
>>       .sched_disable = guc_context_sched_disable,
>>       .destroy = guc_context_destroy,
>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs 
>> **siblings, unsigned int count)
>>       return ERR_PTR(err);
>>   }
>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>> intel_engine_cs *ve)
>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>> intel_engine_cs *ve)
>>   {
>>       struct intel_engine_cs *engine;
>>       intel_engine_mask_t tmp, mask = ve->mask;
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> index c7ef44fa0c36..c2afc3b88fd8 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct 
>> intel_engine_cs *engine,
>>                       struct i915_request *hung_rq,
>>                       struct drm_printer *m);
>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>> intel_engine_cs *ve);
>> -
>>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>                      atomic_t *wait_var,
>>                      bool interruptible,
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-30  9:49     ` Tvrtko Ursulin
@ 2021-07-30 18:13       ` John Harrison
  2021-08-02  9:40         ` Tvrtko Ursulin
  2021-07-30 18:13       ` Matthew Brost
  1 sibling, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-07-30 18:13 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On 7/30/2021 02:49, Tvrtko Ursulin wrote:
> On 30/07/2021 01:13, John Harrison wrote:
>> On 7/28/2021 17:34, Matthew Brost wrote:
>>> If an engine associated with a context does not have a heartbeat, 
>>> ban it
>>> immediately. This is needed for GuC submission as a idle pulse doesn't
>>> kick the context off the hardware where it then can check for a
>>> heartbeat and ban the context.
>
> Pulse, that is a request with I915_PRIORITY_BARRIER, does not preempt 
> a running normal priority context?
>
> Why does it matter then whether or not heartbeats are enabled - when 
> heartbeat just ends up sending the same engine pulse (eventually, with 
> raising priority)?
The point is that the pulse is pointless. See the rest of my comments 
below, specifically "the context will get resubmitted to the hardware 
after the pulse completes". To re-iterate...

Yes, it preempts the context. Yes, it does so whether heartbeats are 
enabled or not. But so what? Who cares? You have preempted a context. It 
is no longer running on the hardware. BUT IT IS STILL A VALID CONTEXT. 
The backend scheduler will just resubmit it to the hardware as soon as 
the pulse completes. The only reason this works at all is because of the 
horrid hack in the execlist scheduler's back end implementation (in 
__execlists_schedule_in):
         if (unlikely(intel_context_is_closed(ce) &&
                      !intel_engine_has_heartbeat(engine)))
                 intel_context_set_banned(ce);

The actual back end scheduler is saying "Is this a zombie context? Is 
the heartbeat disabled? Then ban it". No other scheduler backend is 
going to have knowledge of zombie context status or of the heartbeat 
status. Nor are they going to call back into the higher levels of the 
i915 driver to trigger a ban operation. Certainly a hardware implemented 
scheduler is not going to be looking at private i915 driver information 
to decide whether to submit a context or whether to tell the OS to kill 
it off instead.

For persistence to work with a hardware scheduler (or a non-Intel 
specific scheduler such as the DRM one), the handling of zombie 
contexts, banning, etc. *must* be done entirely in the front end. It 
cannot rely on any backend hacks. That means you can't rely on any fancy 
behaviour of pulses.

If you want to ban a context then you must explicitly ban that context. 
If you want to ban it at some later point then you need to track it at 
the top level as a zombie and then explicitly ban that zombie at 
whatever later point.
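[Editorial sketch] The front-end handling argued for here can be modelled stand-alone (plain user-space C; every struct and function name below is invented for illustration, not real i915 code): a context closed while still active is parked on a per-client zombie list instead of being banned at once, and every parked zombie is explicitly banned when the client's DRM file handle closes.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct zctx {
	bool active;       /* still has work in flight on the GPU */
	bool banned;
	struct zctx *next; /* link on the owning client's zombie list */
};

struct zclient {
	struct zctx *zombies; /* contexts closed while still running */
};

/* Context close: ban idle contexts at once, park active ones. */
static void zctx_close(struct zclient *c, struct zctx *ce)
{
	if (!ce->active) {
		ce->banned = true;
		return;
	}
	ce->next = c->zombies;
	c->zombies = ce;
}

/* File-handle close: genuinely kill every parked zombie. */
static void zclient_close(struct zclient *c)
{
	struct zctx *ce;

	while ((ce = c->zombies)) {
		c->zombies = ce->next;
		ce->banned = true;
	}
}
```

Note that no backend is consulted at any point: the whole decision lives in the front end, which is what would make it workable for a GuC or DRM-scheduler backend.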


>
>> It's worse than this. If the engine in question is an individual 
>> physical engine then sending a pulse (with sufficiently high 
>> priority) will pre-empt the engine and kick the context off. However, 
>> the GuC 
>
> Why it is different for physical vs virtual, aren't both just 
> schedulable contexts with different engine masks for what GuC is 
> concerned? Oh, is it a matter of needing to send pulses to all engines 
> which comprise a virtual one?
It isn't different. It is totally broken for both. It is potentially 
more broken for virtual engines because of the question of which engine 
to pulse. But as stated above, the pulse is pointless anyway so the 
which engine question doesn't even matter.

John.


>
>> scheduler does not have hacks in it to check the state of the 
>> heartbeat or whether a context is actually a zombie or not. Thus, the 
>> context will get resubmitted to the hardware after the pulse 
>> completes and effectively nothing will have happened.
>>
>> I would assume that the DRM scheduler which we are meant to be 
>> switching to for execlist as well as GuC submission is also unlikely 
>> to have hacks for zombie contexts and tests for whether the i915 
>> specific heartbeat has been disabled since the context became a 
>> zombie. So when that switch happens, this test will also fail in 
>> execlist mode as well as GuC mode.
>>
>> The choices I see here are to simply remove persistence completely 
>> (it is a basically a bug that became UAPI because it wasn't caught 
>> soon enough!) or to implement it in a way that does not require hacks 
>> in the back end scheduler. Apparently, the DRM scheduler is expected 
>> to allow zombie contexts to persist until the DRM file handle is 
>> closed. So presumably we will have to go with option two.
>>
>> That means flagging a context as being a zombie when it is closed but 
>> still active. The driver would then add it to a zombie list owned by 
>> the DRM client object. When that client object is closed, i915 would 
>> go through the list and genuinely kill all the contexts. No back end 
>> scheduler hacks required and no intimate knowledge of the i915 
>> heartbeat mechanism required either.
>>
>> John.
>>
>>
>>>
>>> This patch also updates intel_engine_has_heartbeat to be a vfunc as we
>>> now need to call this function on execlists virtual engines too.
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 
>>> ++-----------------
>>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>>   6 files changed, 26 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct 
>>> i915_gem_engines *engines, bool ban)
>>>        */
>>>       for_each_gem_engine(ce, engines, it) {
>>>           struct intel_engine_cs *engine;
>>> +        bool local_ban = ban || 
>>> !intel_engine_has_heartbeat(ce->engine);
>
> In any case (pending me understanding what's really going on there), 
> why would this check not be in kill_context with currently does this:
>
>     bool ban = (!i915_gem_context_is_persistent(ctx) ||
>             !ctx->i915->params.enable_hangcheck);
> ...
>         kill_engines(pos, ban);
>
> So whether to ban decision would be consolidated to one place.
>
> In fact, decision on whether to allow persistent is tied to 
> enable_hangcheck, which also drives heartbeat emission. So perhaps one 
> part of the correct fix is to extend the above (kill_context) ban 
> criteria to include heartbeat values anyway. Otherwise isn't it a 
> simple miss that this check fails to account for heartbeat disablement 
> via sysfs?
>
> Regards,
>
> Tvrtko
>
>>> -        if (ban && intel_context_ban(ce, NULL))
>>> +        if (local_ban && intel_context_ban(ce, NULL))
>>>               continue;
>>>           /*
>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct 
>>> i915_gem_engines *engines, bool ban)
>>>           engine = active_engine(ce);
>>>           /* First attempt to gracefully cancel the context */
>>> -        if (engine && !__cancel_engine(engine) && ban)
>>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>>               /*
>>>                * If we are unable to send a preemptive pulse to bump
>>>                * the context from the GPU, we have to resort to a full
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>> index e54351a170e2..65f2eb2a78e4 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>>       void (*reset)(struct intel_context *ce);
>>>       void (*destroy)(struct kref *kref);
>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>>> +
>>>       /* virtual engine/context interface */
>>>       struct intel_context *(*create_virtual)(struct intel_engine_cs 
>>> **engine,
>>>                           unsigned int count);
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>>> index c2a5640ae055..1b11a808acc4 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>>> @@ -283,28 +283,11 @@ struct intel_context *
>>>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>>                   unsigned int count);
>>> -static inline bool
>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs 
>>> *engine)
>>> -{
>>> -    /*
>>> -     * For non-GuC submission we expect the back-end to look at the
>>> -     * heartbeat status of the actual physical engine that the work
>>> -     * has been (or is being) scheduled on, so we should only reach
>>> -     * here with GuC submission enabled.
>>> -     */
>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>>> -
>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>>> -}
>>> -
>>>   static inline bool
>>>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>>   {
>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>>> -        return false;
>>> -
>>> -    if (intel_engine_is_virtual(engine))
>>> -        return intel_virtual_engine_has_heartbeat(engine);
>>> +    if (engine->cops->has_heartbeat)
>>> +        return engine->cops->has_heartbeat(engine);
>>>       else
>>>           return READ_ONCE(engine->props.heartbeat_interval_ms);
>>>   }
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> index de5f9c86b9a4..18005b5546b6 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs 
>>> *engine, unsigned int sibling)
>>>       return ve->siblings[sibling];
>>>   }
>>> +static bool virtual_engine_has_heartbeat(const struct 
>>> intel_engine_cs *ve)
>>> +{
>>> +    struct intel_engine_cs *engine;
>>> +    intel_engine_mask_t tmp, mask = ve->mask;
>>> +
>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>>> +            return true;
>>> +
>>> +    return false;
>>> +}
>>> +
>>>   static const struct intel_context_ops virtual_context_ops = {
>>>       .flags = COPS_HAS_INFLIGHT,
>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops 
>>> virtual_context_ops = {
>>>       .enter = virtual_context_enter,
>>>       .exit = virtual_context_exit,
>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>>> +
>>>       .destroy = virtual_context_destroy,
>>>       .get_sibling = virtual_get_sibling,
>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> index 89ff0e4b4bc7..ae70bff3605f 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct 
>>> intel_context *ce)
>>>       return lrc_alloc(ce, engine);
>>>   }
>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>> intel_engine_cs *ve);
>>> +
>>>   static const struct intel_context_ops virtual_guc_context_ops = {
>>>       .alloc = guc_virtual_context_alloc,
>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops 
>>> virtual_guc_context_ops = {
>>>       .enter = guc_virtual_context_enter,
>>>       .exit = guc_virtual_context_exit,
>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>>> +
>>>       .sched_disable = guc_context_sched_disable,
>>>       .destroy = guc_context_destroy,
>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs 
>>> **siblings, unsigned int count)
>>>       return ERR_PTR(err);
>>>   }
>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>> intel_engine_cs *ve)
>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>> intel_engine_cs *ve)
>>>   {
>>>       struct intel_engine_cs *engine;
>>>       intel_engine_mask_t tmp, mask = ve->mask;
>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>> index c7ef44fa0c36..c2afc3b88fd8 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct 
>>> intel_engine_cs *engine,
>>>                       struct i915_request *hung_rq,
>>>                       struct drm_printer *m);
>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>> intel_engine_cs *ve);
>>> -
>>>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>>                      atomic_t *wait_var,
>>>                      bool interruptible,
>>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-30  9:49     ` Tvrtko Ursulin
  2021-07-30 18:13       ` John Harrison
@ 2021-07-30 18:13       ` Matthew Brost
  1 sibling, 0 replies; 18+ messages in thread
From: Matthew Brost @ 2021-07-30 18:13 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: John Harrison, intel-gfx, dri-devel

On Fri, Jul 30, 2021 at 10:49:01AM +0100, Tvrtko Ursulin wrote:
> 
> On 30/07/2021 01:13, John Harrison wrote:
> > On 7/28/2021 17:34, Matthew Brost wrote:
> > > If an engine associated with a context does not have a heartbeat, ban it
> > > immediately. This is needed for GuC submission as a idle pulse doesn't
> > > kick the context off the hardware where it then can check for a
> > > heartbeat and ban the context.
> 
> Pulse, that is a request with I915_PRIORITY_BARRIER, does not preempt a
> running normal priority context?
> 

Yes, in both execlists and GuC submission the context gets preempted.
With execlists the i915 sees the preempt CSB, while with GuC submission
the GuC sees it.

> Why does it matter then whether or not heartbeats are enabled - when
> heartbeat just ends up sending the same engine pulse (eventually, with
> raising priority)?
>

With execlists, when the request gets resubmitted there is a check
whether the context is closed and the heartbeat is disabled. If both are
true, the context gets banned. See __execlists_schedule_in.

With the GuC, since it owns the CSB / resubmission, the heartbeat /
closed check that would ban the context doesn't exist.
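[Editorial sketch] The asymmetry described here fits in a few lines (a toy model with invented names, not i915 code): the execlists path re-runs the closed/heartbeat check on every resubmission, while a GuC-style path resubmits unconditionally, so only an explicit ban ever sticks.

```c
#include <assert.h>
#include <stdbool.h>

struct toy_ctx {
	bool closed;    /* intel_context_is_closed() */
	bool banned;
	bool heartbeat; /* heartbeat_interval_ms != 0 */
};

/*
 * Execlists-style resubmission: mirrors the check quoted from
 * __execlists_schedule_in. Returns true if the context runs again.
 */
static bool execlists_resubmit(struct toy_ctx *ce)
{
	if (ce->closed && !ce->heartbeat)
		ce->banned = true;
	return !ce->banned;
}

/*
 * GuC-style resubmission: the firmware owns the CSB, so i915 gets no
 * hook here to inspect closed/heartbeat state.
 */
static bool guc_resubmit(struct toy_ctx *ce)
{
	return !ce->banned;
}
```

A closed context with the heartbeat off is banned on the execlists path but simply runs again on the GuC path, which is exactly the failure mode of the pulse-based approach.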

> > It's worse than this. If the engine in question is an individual
> > physical engine then sending a pulse (with sufficiently high priority)
> > will pre-empt the engine and kick the context off. However, the GuC
> 
> Why it is different for physical vs virtual, aren't both just schedulable
> contexts with different engine masks for what GuC is concerned? Oh, is it a
> matter of needing to send pulses to all engines which comprise a virtual
> one?

Yes. The whole idle pulse thing is kinda junk. It really makes an
assumption that the backend is execlists. We likely have a bit more work
here.

> 
> > scheduler does not have hacks in it to check the state of the heartbeat
> > or whether a context is actually a zombie or not. Thus, the context will
> > get resubmitted to the hardware after the pulse completes and
> > effectively nothing will have happened.
> > 
> > I would assume that the DRM scheduler which we are meant to be switching
> > to for execlist as well as GuC submission is also unlikely to have hacks
> > for zombie contexts and tests for whether the i915 specific heartbeat
> > has been disabled since the context became a zombie. So when that switch
> > happens, this test will also fail in execlist mode as well as GuC mode.
> > 
> > The choices I see here are to simply remove persistence completely (it
> > is a basically a bug that became UAPI because it wasn't caught soon
> > enough!) or to implement it in a way that does not require hacks in the
> > back end scheduler. Apparently, the DRM scheduler is expected to allow
> > zombie contexts to persist until the DRM file handle is closed. So
> > presumably we will have to go with option two.
> > 
> > That means flagging a context as being a zombie when it is closed but
> > still active. The driver would then add it to a zombie list owned by the
> > DRM client object. When that client object is closed, i915 would go
> > through the list and genuinely kill all the contexts. No back end
> > scheduler hacks required and no intimate knowledge of the i915 heartbeat
> > mechanism required either.
> > 
> > John.
> > 
> > 
> > > 
> > > This patch also updates intel_engine_has_heartbeat to be a vfunc as we
> > > now need to call this function on execlists virtual engines too.
> > > 
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > >   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
> > >   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
> > >   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
> > >   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
> > >   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
> > >   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
> > >   6 files changed, 26 insertions(+), 24 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > > b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > > index 9c3672bac0e2..b8e01c5ba9e5 100644
> > > --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > > @@ -1090,8 +1090,9 @@ static void kill_engines(struct
> > > i915_gem_engines *engines, bool ban)
> > >        */
> > >       for_each_gem_engine(ce, engines, it) {
> > >           struct intel_engine_cs *engine;
> > > +        bool local_ban = ban || !intel_engine_has_heartbeat(ce->engine);
> 
> In any case (pending me understanding what's really going on there), why
> would this check not be in kill_context with currently does this:
> 
> 	bool ban = (!i915_gem_context_is_persistent(ctx) ||
> 		    !ctx->i915->params.enable_hangcheck);
> ...

That is a gem_context-level check, while the other check is per
intel_context. We don't have the intel_context there.

> 		kill_engines(pos, ban);
> 
> So whether to ban decision would be consolidated to one place.
> 
> In fact, decision on whether to allow persistent is tied to
> enable_hangcheck, which also drives heartbeat emission. So perhaps one part
> of the correct fix is to extend the above (kill_context) ban criteria to
> include heartbeat values anyway. Otherwise isn't it a simple miss that this
> check fails to account for heartbeat disablement via sysfs?
> 

Execlists has that check in its resubmission path, which doesn't
exist for the GuC (explained above). This code just moves the check to
a place where it works with GuC submission.
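[Editorial sketch] The decision being moved reduces to a single predicate (illustrative only; parameter names are invented): the existing kill_context criteria plus the per-engine heartbeat test the patch adds in kill_engines.

```c
#include <assert.h>
#include <stdbool.h>

/* Existing gem_context-level criteria from kill_context. */
static bool ctx_level_ban(bool persistent, bool enable_hangcheck)
{
	return !persistent || !enable_hangcheck;
}

/* Per-engine extension from the patch: no heartbeat => ban now. */
static bool engine_level_ban(bool ctx_ban, bool has_heartbeat)
{
	return ctx_ban || !has_heartbeat;
}
```

The second predicate is evaluated per intel_context, which is why it lives in kill_engines rather than being folded into kill_context.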

Matt

> Regards,
> 
> Tvrtko
> 
> > > -        if (ban && intel_context_ban(ce, NULL))
> > > +        if (local_ban && intel_context_ban(ce, NULL))
> > >               continue;
> > >           /*
> > > @@ -1104,7 +1105,7 @@ static void kill_engines(struct
> > > i915_gem_engines *engines, bool ban)
> > >           engine = active_engine(ce);
> > >           /* First attempt to gracefully cancel the context */
> > > -        if (engine && !__cancel_engine(engine) && ban)
> > > +        if (engine && !__cancel_engine(engine) && local_ban)
> > >               /*
> > >                * If we are unable to send a preemptive pulse to bump
> > >                * the context from the GPU, we have to resort to a full
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h
> > > b/drivers/gpu/drm/i915/gt/intel_context_types.h
> > > index e54351a170e2..65f2eb2a78e4 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> > > +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> > > @@ -55,6 +55,8 @@ struct intel_context_ops {
> > >       void (*reset)(struct intel_context *ce);
> > >       void (*destroy)(struct kref *kref);
> > > +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
> > > +
> > >       /* virtual engine/context interface */
> > >       struct intel_context *(*create_virtual)(struct intel_engine_cs
> > > **engine,
> > >                           unsigned int count);
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h
> > > b/drivers/gpu/drm/i915/gt/intel_engine.h
> > > index c2a5640ae055..1b11a808acc4 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> > > +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> > > @@ -283,28 +283,11 @@ struct intel_context *
> > >   intel_engine_create_virtual(struct intel_engine_cs **siblings,
> > >                   unsigned int count);
> > > -static inline bool
> > > -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine)
> > > -{
> > > -    /*
> > > -     * For non-GuC submission we expect the back-end to look at the
> > > -     * heartbeat status of the actual physical engine that the work
> > > -     * has been (or is being) scheduled on, so we should only reach
> > > -     * here with GuC submission enabled.
> > > -     */
> > > -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
> > > -
> > > -    return intel_guc_virtual_engine_has_heartbeat(engine);
> > > -}
> > > -
> > >   static inline bool
> > >   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
> > >   {
> > > -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
> > > -        return false;
> > > -
> > > -    if (intel_engine_is_virtual(engine))
> > > -        return intel_virtual_engine_has_heartbeat(engine);
> > > +    if (engine->cops->has_heartbeat)
> > > +        return engine->cops->has_heartbeat(engine);
> > >       else
> > >           return READ_ONCE(engine->props.heartbeat_interval_ms);
> > >   }
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > index de5f9c86b9a4..18005b5546b6 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs
> > > *engine, unsigned int sibling)
> > >       return ve->siblings[sibling];
> > >   }
> > > +static bool virtual_engine_has_heartbeat(const struct
> > > intel_engine_cs *ve)
> > > +{
> > > +    struct intel_engine_cs *engine;
> > > +    intel_engine_mask_t tmp, mask = ve->mask;
> > > +
> > > +    for_each_engine_masked(engine, ve->gt, mask, tmp)
> > > +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
> > > +            return true;
> > > +
> > > +    return false;
> > > +}
> > > +
> > >   static const struct intel_context_ops virtual_context_ops = {
> > >       .flags = COPS_HAS_INFLIGHT,
> > > @@ -3634,6 +3646,8 @@ static const struct intel_context_ops
> > > virtual_context_ops = {
> > >       .enter = virtual_context_enter,
> > >       .exit = virtual_context_exit,
> > > +    .has_heartbeat = virtual_engine_has_heartbeat,
> > > +
> > >       .destroy = virtual_context_destroy,
> > >       .get_sibling = virtual_get_sibling,
> > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > index 89ff0e4b4bc7..ae70bff3605f 100644
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct
> > > intel_context *ce)
> > >       return lrc_alloc(ce, engine);
> > >   }
> > > +static bool guc_virtual_engine_has_heartbeat(const struct
> > > intel_engine_cs *ve);
> > > +
> > >   static const struct intel_context_ops virtual_guc_context_ops = {
> > >       .alloc = guc_virtual_context_alloc,
> > > @@ -2183,6 +2185,8 @@ static const struct intel_context_ops
> > > virtual_guc_context_ops = {
> > >       .enter = guc_virtual_context_enter,
> > >       .exit = guc_virtual_context_exit,
> > > +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
> > > +
> > >       .sched_disable = guc_context_sched_disable,
> > >       .destroy = guc_context_destroy,
> > > @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs
> > > **siblings, unsigned int count)
> > >       return ERR_PTR(err);
> > >   }
> > > -bool intel_guc_virtual_engine_has_heartbeat(const struct
> > > intel_engine_cs *ve)
> > > +static bool guc_virtual_engine_has_heartbeat(const struct
> > > intel_engine_cs *ve)
> > >   {
> > >       struct intel_engine_cs *engine;
> > >       intel_engine_mask_t tmp, mask = ve->mask;
> > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> > > b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> > > index c7ef44fa0c36..c2afc3b88fd8 100644
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> > > @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct
> > > intel_engine_cs *engine,
> > >                       struct i915_request *hung_rq,
> > >                       struct drm_printer *m);
> > > -bool intel_guc_virtual_engine_has_heartbeat(const struct
> > > intel_engine_cs *ve);
> > > -
> > >   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
> > >                      atomic_t *wait_var,
> > >                      bool interruptible,
> > 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-07-30 18:13       ` John Harrison
@ 2021-08-02  9:40         ` Tvrtko Ursulin
  2021-08-06 18:00           ` John Harrison
  0 siblings, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-08-02  9:40 UTC (permalink / raw)
  To: John Harrison, Matthew Brost, intel-gfx, dri-devel


On 30/07/2021 19:13, John Harrison wrote:
> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
>> On 30/07/2021 01:13, John Harrison wrote:
>>> On 7/28/2021 17:34, Matthew Brost wrote:
>>>> If an engine associated with a context does not have a heartbeat, 
>>>> ban it
>>>> immediately. This is needed for GuC submission as a idle pulse doesn't
>>>> kick the context off the hardware where it then can check for a
>>>> heartbeat and ban the context.
>>
>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not preempt 
>> a running normal priority context?
>>
>> Why does it matter then whether or not heartbeats are enabled - when 
>> heartbeat just ends up sending the same engine pulse (eventually, with 
>> raising priority)?
> The point is that the pulse is pointless. See the rest of my comments 
> below, specifically "the context will get resubmitted to the hardware 
> after the pulse completes". To re-iterate...
> 
> Yes, it preempts the context. Yes, it does so whether heartbeats are 
> enabled or not. But so what? Who cares? You have preempted a context. It 
> is no longer running on the hardware. BUT IT IS STILL A VALID CONTEXT. 

It is valid, yes, and it may even be the current ABI, so another question 
is whether it is okay to change that.

> The backend scheduler will just resubmit it to the hardware as soon as 
> the pulse completes. The only reason this works at all is because of the 
> horrid hack in the execlist scheduler's back end implementation (in 
> __execlists_schedule_in):
>          if (unlikely(intel_context_is_closed(ce) &&
>                       !intel_engine_has_heartbeat(engine)))
>                  intel_context_set_banned(ce);

Right, so is the above code still needed with this patch, when the ban is 
immediately applied at the higher level?

> The actual back end scheduler is saying "Is this a zombie context? Is 
> the heartbeat disabled? Then ban it". No other scheduler backend is 
> going to have knowledge of zombie context status or of the heartbeat 
> status. Nor are they going to call back into the higher levels of the 
> i915 driver to trigger a ban operation. Certainly a hardware implemented 
> scheduler is not going to be looking at private i915 driver information 
> to decide whether to submit a context or whether to tell the OS to kill 
> it off instead.
> 
> For persistence to work with a hardware scheduler (or a non-Intel 
> specific scheduler such as the DRM one), the handling of zombie 
> contexts, banning, etc. *must* be done entirely in the front end. It 
> cannot rely on any backend hacks. That means you can't rely on any fancy 
> behaviour of pulses.
> 
> If you want to ban a context then you must explicitly ban that context. 
> If you want to ban it at some later point then you need to track it at 
> the top level as a zombie and then explicitly ban that zombie at 
> whatever later point.

I am still trying to understand it all. If I go by the commit message:

"""
This is needed for GuC submission as a idle pulse doesn't
kick the context off the hardware where it then can check for a
heartbeat and ban the context.
"""

That did not explain things for me; the sentence does not appear to make 
sense. Now it seems "kick off the hardware" is meant as revoke and not 
just preempt, which is fine, but perhaps it needs to be written more 
explicitly. The part about checking for a heartbeat after the idle pulse 
does not compute for me, though: it is the heartbeat which emits idle 
pulses, not the idle pulse that emits heartbeats.

But anyway, I can buy the handling at the front end story completely. It 
makes sense. We just need to agree that a) it is okay to change the ABI 
and b) remove the backend check from execlists if it is not needed any 
longer.

And if ABI change is okay then commit message needs to talk about it 
loudly and clearly.

Or perhaps there is no ABI change? I am not really clear on how setting 
the banned status propagates to the GuC backend. I mean, at which 
point does i915 end up passing that info to the firmware?

Regards,

Tvrtko

> 
> 
>>
>>> It's worse than this. If the engine in question is an individual 
>>> physical engine then sending a pulse (with sufficiently high 
>>> priority) will pre-empt the engine and kick the context off. However, 
>>> the GuC 
>>
>> Why is it different for physical vs virtual? Aren't both just 
>> schedulable contexts with different engine masks as far as the GuC is 
>> concerned? Oh, is it a matter of needing to send pulses to all engines 
>> which comprise a virtual one?
> It isn't different. It is totally broken for both. It is potentially 
> more broken for virtual engines because of the question of which engine 
> to pulse. But as stated above, the pulse is pointless anyway so the 
> which engine question doesn't even matter.
> 
> John.
> 
> 
>>
>>> scheduler does not have hacks in it to check the state of the 
>>> heartbeat or whether a context is actually a zombie or not. Thus, the 
>>> context will get resubmitted to the hardware after the pulse 
>>> completes and effectively nothing will have happened.
>>>
>>> I would assume that the DRM scheduler which we are meant to be 
>>> switching to for execlist as well as GuC submission is also unlikely 
>>> to have hacks for zombie contexts and tests for whether the i915 
>>> specific heartbeat has been disabled since the context became a 
>>> zombie. So when that switch happens, this test will also fail in 
>>> execlist mode as well as GuC mode.
>>>
>>> The choices I see here are to simply remove persistence completely 
>>> (it is basically a bug that became UAPI because it wasn't caught 
>>> soon enough!) or to implement it in a way that does not require hacks 
>>> in the back end scheduler. Apparently, the DRM scheduler is expected 
>>> to allow zombie contexts to persist until the DRM file handle is 
>>> closed. So presumably we will have to go with option two.
>>>
>>> That means flagging a context as being a zombie when it is closed but 
>>> still active. The driver would then add it to a zombie list owned by 
>>> the DRM client object. When that client object is closed, i915 would 
>>> go through the list and genuinely kill all the contexts. No back end 
>>> scheduler hacks required and no intimate knowledge of the i915 
>>> heartbeat mechanism required either.
>>>
>>> John.
>>>
>>>
>>>>
>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc as we
>>>> now need to call this function on execlists virtual engines too.
>>>>
>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>> ---
>>>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 
>>>> ++-----------------
>>>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>>>   6 files changed, 26 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct 
>>>> i915_gem_engines *engines, bool ban)
>>>>        */
>>>>       for_each_gem_engine(ce, engines, it) {
>>>>           struct intel_engine_cs *engine;
>>>> +        bool local_ban = ban || 
>>>> !intel_engine_has_heartbeat(ce->engine);
>>
>> In any case (pending me understanding what's really going on there), 
>> why would this check not be in kill_context, which currently does this:
>>
>>     bool ban = (!i915_gem_context_is_persistent(ctx) ||
>>             !ctx->i915->params.enable_hangcheck);
>> ...
>>         kill_engines(pos, ban);
>>
>> So whether to ban decision would be consolidated to one place.
>>
>> In fact, the decision on whether to allow persistence is tied to 
>> enable_hangcheck, which also drives heartbeat emission. So perhaps one 
>> part of the correct fix is to extend the above (kill_context) ban 
>> criteria to include heartbeat values anyway. Otherwise isn't it a 
>> simple miss that this check fails to account for heartbeat disablement 
>> via sysfs?
>>
>> Regards,
>>
>> Tvrtko
>>
>>>> -        if (ban && intel_context_ban(ce, NULL))
>>>> +        if (local_ban && intel_context_ban(ce, NULL))
>>>>               continue;
>>>>           /*
>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct 
>>>> i915_gem_engines *engines, bool ban)
>>>>           engine = active_engine(ce);
>>>>           /* First attempt to gracefully cancel the context */
>>>> -        if (engine && !__cancel_engine(engine) && ban)
>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>>>               /*
>>>>                * If we are unable to send a preemptive pulse to bump
>>>>                * the context from the GPU, we have to resort to a full
>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>> index e54351a170e2..65f2eb2a78e4 100644
>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>>>       void (*reset)(struct intel_context *ce);
>>>>       void (*destroy)(struct kref *kref);
>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>>>> +
>>>>       /* virtual engine/context interface */
>>>>       struct intel_context *(*create_virtual)(struct intel_engine_cs 
>>>> **engine,
>>>>                           unsigned int count);
>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>> index c2a5640ae055..1b11a808acc4 100644
>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>> @@ -283,28 +283,11 @@ struct intel_context *
>>>>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>>>                   unsigned int count);
>>>> -static inline bool
>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs 
>>>> *engine)
>>>> -{
>>>> -    /*
>>>> -     * For non-GuC submission we expect the back-end to look at the
>>>> -     * heartbeat status of the actual physical engine that the work
>>>> -     * has been (or is being) scheduled on, so we should only reach
>>>> -     * here with GuC submission enabled.
>>>> -     */
>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>>>> -
>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>>>> -}
>>>> -
>>>>   static inline bool
>>>>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>>>   {
>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>>>> -        return false;
>>>> -
>>>> -    if (intel_engine_is_virtual(engine))
>>>> -        return intel_virtual_engine_has_heartbeat(engine);
>>>> +    if (engine->cops->has_heartbeat)
>>>> +        return engine->cops->has_heartbeat(engine);
>>>>       else
>>>>           return READ_ONCE(engine->props.heartbeat_interval_ms);
>>>>   }
>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>> index de5f9c86b9a4..18005b5546b6 100644
>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs 
>>>> *engine, unsigned int sibling)
>>>>       return ve->siblings[sibling];
>>>>   }
>>>> +static bool virtual_engine_has_heartbeat(const struct 
>>>> intel_engine_cs *ve)
>>>> +{
>>>> +    struct intel_engine_cs *engine;
>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
>>>> +
>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>>>> +            return true;
>>>> +
>>>> +    return false;
>>>> +}
>>>> +
>>>>   static const struct intel_context_ops virtual_context_ops = {
>>>>       .flags = COPS_HAS_INFLIGHT,
>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops 
>>>> virtual_context_ops = {
>>>>       .enter = virtual_context_enter,
>>>>       .exit = virtual_context_exit,
>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>>>> +
>>>>       .destroy = virtual_context_destroy,
>>>>       .get_sibling = virtual_get_sibling,
>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct 
>>>> intel_context *ce)
>>>>       return lrc_alloc(ce, engine);
>>>>   }
>>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>>> intel_engine_cs *ve);
>>>> +
>>>>   static const struct intel_context_ops virtual_guc_context_ops = {
>>>>       .alloc = guc_virtual_context_alloc,
>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops 
>>>> virtual_guc_context_ops = {
>>>>       .enter = guc_virtual_context_enter,
>>>>       .exit = guc_virtual_context_exit,
>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>>>> +
>>>>       .sched_disable = guc_context_sched_disable,
>>>>       .destroy = guc_context_destroy,
>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs 
>>>> **siblings, unsigned int count)
>>>>       return ERR_PTR(err);
>>>>   }
>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>>> intel_engine_cs *ve)
>>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>>> intel_engine_cs *ve)
>>>>   {
>>>>       struct intel_engine_cs *engine;
>>>>       intel_engine_mask_t tmp, mask = ve->mask;
>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct 
>>>> intel_engine_cs *engine,
>>>>                       struct i915_request *hung_rq,
>>>>                       struct drm_printer *m);
>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>>> intel_engine_cs *ve);
>>>> -
>>>>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>>>                      atomic_t *wait_var,
>>>>                      bool interruptible,
>>>
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-02  9:40         ` Tvrtko Ursulin
@ 2021-08-06 18:00           ` John Harrison
  2021-08-06 19:46             ` Daniel Vetter
  0 siblings, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-08-06 18:00 UTC (permalink / raw)
  To: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On 8/2/2021 02:40, Tvrtko Ursulin wrote:
> On 30/07/2021 19:13, John Harrison wrote:
>> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
>>> On 30/07/2021 01:13, John Harrison wrote:
>>>> On 7/28/2021 17:34, Matthew Brost wrote:
>>>>> If an engine associated with a context does not have a heartbeat, 
>>>>> ban it
>>>>> immediately. This is needed for GuC submission as a idle pulse 
>>>>> doesn't
>>>>> kick the context off the hardware where it then can check for a
>>>>> heartbeat and ban the context.
>>>
>>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not 
>>> preempt a running normal priority context?
>>>
>>> Why does it matter then whether or not heartbeats are enabled - when 
>>> heartbeat just ends up sending the same engine pulse (eventually, 
>>> with raising priority)?
>> The point is that the pulse is pointless. See the rest of my comments 
>> below, specifically "the context will get resubmitted to the hardware 
>> after the pulse completes". To re-iterate...
>>
>> Yes, it preempts the context. Yes, it does so whether heartbeats are 
>> enabled or not. But so what? Who cares? You have preempted a context. 
>> It is no longer running on the hardware. BUT IT IS STILL A VALID 
>> CONTEXT. 
>
> It is valid yes, and it even may be the current ABI so another 
> question is whether it is okay to change that.
>
>> The backend scheduler will just resubmit it to the hardware as soon 
>> as the pulse completes. The only reason this works at all is because 
>> of the horrid hack in the execlist scheduler's back end 
>> implementation (in __execlists_schedule_in):
>>          if (unlikely(intel_context_is_closed(ce) &&
>>                       !intel_engine_has_heartbeat(engine)))
>>                  intel_context_set_banned(ce);
>
> Right, is the above code then needed with this patch - when ban is 
> immediately applied on the higher level?
>
>> The actual back end scheduler is saying "Is this a zombie context? Is 
>> the heartbeat disabled? Then ban it". No other scheduler backend is 
>> going to have knowledge of zombie context status or of the heartbeat 
>> status. Nor are they going to call back into the higher levels of the 
>> i915 driver to trigger a ban operation. Certainly a hardware 
>> implemented scheduler is not going to be looking at private i915 
>> driver information to decide whether to submit a context or whether 
>> to tell the OS to kill it off instead.
>>
>> For persistence to work with a hardware scheduler (or a non-Intel 
>> specific scheduler such as the DRM one), the handling of zombie 
>> contexts, banning, etc. *must* be done entirely in the front end. It 
>> cannot rely on any backend hacks. That means you can't rely on any 
>> fancy behaviour of pulses.
>>
>> If you want to ban a context then you must explicitly ban that 
>> context. If you want to ban it at some later point then you need to 
>> track it at the top level as a zombie and then explicitly ban that 
>> zombie at whatever later point.
>
> I am still trying to understand it all. If I go by the commit message:
>
> """
> This is needed for GuC submission as a idle pulse doesn't
> kick the context off the hardware where it then can check for a
> heartbeat and ban the context.
> """
>
> That did not explain things for me; the sentence does not appear to 
> make sense. Now, it seems "kick off the hardware" is meant as revoke, 
> not just preempt. Which is fine, it perhaps just needs to be written 
> more explicitly. But the part about checking for a heartbeat after an 
> idle pulse does not compute for me. It is the heartbeat which emits 
> idle pulses, not the idle pulse which emits heartbeats.
I am in agreement that the commit message is confusing and does not 
explain either the problem or the solution.


>
>
> But anyway, I can buy the handling at the front end story completely. 
> It makes sense. We just need to agree that a) it is okay to change the 
> ABI and b) remove the backend check from execlists if it is not needed 
> any longer.
>
> And if ABI change is okay then commit message needs to talk about it 
> loudly and clearly.
I don't think we have a choice. The current ABI is not and cannot ever 
be compatible with any scheduler external to i915. It cannot be 
implemented with a hardware scheduler such as the GuC and it cannot be 
implemented with an external software scheduler such as the DRM one.

My view is that any implementation involving knowledge of the heartbeat 
is fundamentally broken.

According to Daniel Vetter, the DRM ABI on this subject is that an 
actively executing context should persist until the DRM file handle is 
closed. That seems like a much more plausible and simple ABI than one 
that says 'if the heartbeat is running then a context will persist 
forever, if the heartbeat is not running then it will be killed 
immediately, if the heart was running but then stops running then the 
context will be killed on the next context switch, ...'. And if I 
understand it correctly, the current ABI allows a badly written user app 
to cause a denial of service by leaving contexts permanently running an 
infinite loop on the hardware even after the app has been killed! How 
can that ever be considered a good idea?

Therefore, the context close implementation should be to add an active 
context to a zombie list. If a context is in zombie state and its last 
request completes then the context can be immediately killed at that 
point. Otherwise, on DRM handle close, we go through the zombie list and 
immediately kill all contexts.
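
A minimal sketch of that zombie-list scheme (again with stand-in types, 
not the real i915/DRM structures, so the field and function names here 
are illustrative assumptions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the real context and DRM client objects. */
struct mock_context {
	bool active;                /* still has requests in flight */
	bool killed;
	struct mock_context *next;  /* link on the client's zombie list */
};

struct mock_client {
	struct mock_context *zombies;
};

/* Context closed by userspace: kill now if idle, else track as zombie. */
static void context_close(struct mock_client *client, struct mock_context *ce)
{
	if (!ce->active) {
		ce->killed = true;
		return;
	}
	ce->next = client->zombies;
	client->zombies = ce;
}

/* DRM file handle closed: genuinely kill everything still tracked. */
static void client_close(struct mock_client *client)
{
	struct mock_context *ce;

	for (ce = client->zombies; ce; ce = ce->next)
		ce->killed = true;
	client->zombies = NULL;
}
```

The point of the sketch is that no scheduler backend is consulted at 
all: the front end alone decides when a closed-but-active context 
finally dies.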

Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or 
pulses. Also no opportunity for rogue (or just badly written) user 
processes to leave zombie contexts running on the hardware forever and 
causing a denial of service attack. If the host process is killed, all 
of its GPU processes are also killed irrespective of what dodgy context 
flags they may or may not have set.

John.


>
> Or perhaps there is no ABI change? I am not really clear on how 
> setting the banned status propagates to the GuC backend. I mean, at 
> which point does i915 end up passing that info to the firmware?
>
> Regards,
>
> Tvrtko
>
>>
>>
>>>
>>>> It's worse than this. If the engine in question is an individual 
>>>> physical engine then sending a pulse (with sufficiently high 
>>>> priority) will pre-empt the engine and kick the context off. 
>>>> However, the GuC 
>>>
>>> Why is it different for physical vs virtual? Aren't both just 
>>> schedulable contexts with different engine masks as far as the GuC is 
>>> concerned? Oh, is it a matter of needing to send pulses to all 
>>> engines which comprise a virtual one?
>> It isn't different. It is totally broken for both. It is potentially 
>> more broken for virtual engines because of the question of which 
>> engine to pulse. But as stated above, the pulse is pointless anyway 
>> so the which engine question doesn't even matter.
>>
>> John.
>>
>>
>>>
>>>> scheduler does not have hacks in it to check the state of the 
>>>> heartbeat or whether a context is actually a zombie or not. Thus, 
>>>> the context will get resubmitted to the hardware after the pulse 
>>>> completes and effectively nothing will have happened.
>>>>
>>>> I would assume that the DRM scheduler which we are meant to be 
>>>> switching to for execlist as well as GuC submission is also 
>>>> unlikely to have hacks for zombie contexts and tests for whether 
>>>> the i915 specific heartbeat has been disabled since the context 
>>>> became a zombie. So when that switch happens, this test will also 
>>>> fail in execlist mode as well as GuC mode.
>>>>
>>>> The choices I see here are to simply remove persistence completely 
>>>> (it is basically a bug that became UAPI because it wasn't caught 
>>>> soon enough!) or to implement it in a way that does not require 
>>>> hacks in the back end scheduler. Apparently, the DRM scheduler is 
>>>> expected to allow zombie contexts to persist until the DRM file 
>>>> handle is closed. So presumably we will have to go with option two.
>>>>
>>>> That means flagging a context as being a zombie when it is closed 
>>>> but still active. The driver would then add it to a zombie list 
>>>> owned by the DRM client object. When that client object is closed, 
>>>> i915 would go through the list and genuinely kill all the contexts. 
>>>> No back end scheduler hacks required and no intimate knowledge of 
>>>> the i915 heartbeat mechanism required either.
>>>>
>>>> John.
>>>>
>>>>
>>>>>
>>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc 
>>>>> as we
>>>>> now need to call this function on execlists virtual engines too.
>>>>>
>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>> ---
>>>>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>>>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>>>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 
>>>>> ++-----------------
>>>>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>>>>   6 files changed, 26 insertions(+), 24 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c 
>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct 
>>>>> i915_gem_engines *engines, bool ban)
>>>>>        */
>>>>>       for_each_gem_engine(ce, engines, it) {
>>>>>           struct intel_engine_cs *engine;
>>>>> +        bool local_ban = ban || 
>>>>> !intel_engine_has_heartbeat(ce->engine);
>>>
>>> In any case (pending me understanding what's really going on there), 
>>> why would this check not be in kill_context, which currently does this:
>>>
>>>     bool ban = (!i915_gem_context_is_persistent(ctx) ||
>>>             !ctx->i915->params.enable_hangcheck);
>>> ...
>>>         kill_engines(pos, ban);
>>>
>>> So whether to ban decision would be consolidated to one place.
>>>
>>> In fact, the decision on whether to allow persistence is tied to 
>>> enable_hangcheck, which also drives heartbeat emission. So perhaps 
>>> one part of the correct fix is to extend the above (kill_context) 
>>> ban criteria to include heartbeat values anyway. Otherwise isn't it a 
>>> simple miss that this check fails to account for heartbeat disablement 
>>> via sysfs?
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>> -        if (ban && intel_context_ban(ce, NULL))
>>>>> +        if (local_ban && intel_context_ban(ce, NULL))
>>>>>               continue;
>>>>>           /*
>>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct 
>>>>> i915_gem_engines *engines, bool ban)
>>>>>           engine = active_engine(ce);
>>>>>           /* First attempt to gracefully cancel the context */
>>>>> -        if (engine && !__cancel_engine(engine) && ban)
>>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>>>>               /*
>>>>>                * If we are unable to send a preemptive pulse to bump
>>>>>                * the context from the GPU, we have to resort to a 
>>>>> full
>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h 
>>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>> index e54351a170e2..65f2eb2a78e4 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>>>>       void (*reset)(struct intel_context *ce);
>>>>>       void (*destroy)(struct kref *kref);
>>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>>>>> +
>>>>>       /* virtual engine/context interface */
>>>>>       struct intel_context *(*create_virtual)(struct 
>>>>> intel_engine_cs **engine,
>>>>>                           unsigned int count);
>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h 
>>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>> index c2a5640ae055..1b11a808acc4 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>> @@ -283,28 +283,11 @@ struct intel_context *
>>>>>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>>>>                   unsigned int count);
>>>>> -static inline bool
>>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs 
>>>>> *engine)
>>>>> -{
>>>>> -    /*
>>>>> -     * For non-GuC submission we expect the back-end to look at the
>>>>> -     * heartbeat status of the actual physical engine that the work
>>>>> -     * has been (or is being) scheduled on, so we should only reach
>>>>> -     * here with GuC submission enabled.
>>>>> -     */
>>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>>>>> -
>>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>>>>> -}
>>>>> -
>>>>>   static inline bool
>>>>>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>>>>   {
>>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>>>>> -        return false;
>>>>> -
>>>>> -    if (intel_engine_is_virtual(engine))
>>>>> -        return intel_virtual_engine_has_heartbeat(engine);
>>>>> +    if (engine->cops->has_heartbeat)
>>>>> +        return engine->cops->has_heartbeat(engine);
>>>>>       else
>>>>>           return READ_ONCE(engine->props.heartbeat_interval_ms);
>>>>>   }
>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c 
>>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>> index de5f9c86b9a4..18005b5546b6 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs 
>>>>> *engine, unsigned int sibling)
>>>>>       return ve->siblings[sibling];
>>>>>   }
>>>>> +static bool virtual_engine_has_heartbeat(const struct 
>>>>> intel_engine_cs *ve)
>>>>> +{
>>>>> +    struct intel_engine_cs *engine;
>>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
>>>>> +
>>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>>>>> +            return true;
>>>>> +
>>>>> +    return false;
>>>>> +}
>>>>> +
>>>>>   static const struct intel_context_ops virtual_context_ops = {
>>>>>       .flags = COPS_HAS_INFLIGHT,
>>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops 
>>>>> virtual_context_ops = {
>>>>>       .enter = virtual_context_enter,
>>>>>       .exit = virtual_context_exit,
>>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>>>>> +
>>>>>       .destroy = virtual_context_destroy,
>>>>>       .get_sibling = virtual_get_sibling,
>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct 
>>>>> intel_context *ce)
>>>>>       return lrc_alloc(ce, engine);
>>>>>   }
>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>>>> intel_engine_cs *ve);
>>>>> +
>>>>>   static const struct intel_context_ops virtual_guc_context_ops = {
>>>>>       .alloc = guc_virtual_context_alloc,
>>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops 
>>>>> virtual_guc_context_ops = {
>>>>>       .enter = guc_virtual_context_enter,
>>>>>       .exit = guc_virtual_context_exit,
>>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>>>>> +
>>>>>       .sched_disable = guc_context_sched_disable,
>>>>>       .destroy = guc_context_destroy,
>>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs 
>>>>> **siblings, unsigned int count)
>>>>>       return ERR_PTR(err);
>>>>>   }
>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>>>> intel_engine_cs *ve)
>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct 
>>>>> intel_engine_cs *ve)
>>>>>   {
>>>>>       struct intel_engine_cs *engine;
>>>>>       intel_engine_mask_t tmp, mask = ve->mask;
>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h 
>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct 
>>>>> intel_engine_cs *engine,
>>>>>                       struct i915_request *hung_rq,
>>>>>                       struct drm_printer *m);
>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct 
>>>>> intel_engine_cs *ve);
>>>>> -
>>>>>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>>>>                      atomic_t *wait_var,
>>>>>                      bool interruptible,
>>>>
>>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-06 18:00           ` John Harrison
@ 2021-08-06 19:46             ` Daniel Vetter
  2021-08-09 23:12               ` John Harrison
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Vetter @ 2021-08-06 19:46 UTC (permalink / raw)
  To: John Harrison; +Cc: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

Seen this fly by and figured I'd drop a few thoughts in here. At the
likely cost of looking a bit out of whack :-)

On Fri, Aug 6, 2021 at 8:01 PM John Harrison <john.c.harrison@intel.com> wrote:
> On 8/2/2021 02:40, Tvrtko Ursulin wrote:
> > On 30/07/2021 19:13, John Harrison wrote:
> >> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
> >>> On 30/07/2021 01:13, John Harrison wrote:
> >>>> On 7/28/2021 17:34, Matthew Brost wrote:
> >>>>> If an engine associated with a context does not have a heartbeat,
> >>>>> ban it
> >>>>> immediately. This is needed for GuC submission as a idle pulse
> >>>>> doesn't
> >>>>> kick the context off the hardware where it then can check for a
> >>>>> heartbeat and ban the context.
> >>>
> >>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not
> >>> preempt a running normal priority context?
> >>>
> >>> Why does it matter then whether or not heartbeats are enabled - when
> >>> heartbeat just ends up sending the same engine pulse (eventually,
> >>> with raising priority)?
> >> The point is that the pulse is pointless. See the rest of my comments
> >> below, specifically "the context will get resubmitted to the hardware
> >> after the pulse completes". To re-iterate...
> >>
> >> Yes, it preempts the context. Yes, it does so whether heartbeats are
> >> enabled or not. But so what? Who cares? You have preempted a context.
> >> It is no longer running on the hardware. BUT IT IS STILL A VALID
> >> CONTEXT.
> >
> > It is valid yes, and it even may be the current ABI so another
> > question is whether it is okay to change that.
> >
> >> The backend scheduler will just resubmit it to the hardware as soon
> >> as the pulse completes. The only reason this works at all is because
> >> of the horrid hack in the execlist scheduler's back end
> >> implementation (in __execlists_schedule_in):
> >>          if (unlikely(intel_context_is_closed(ce) &&
> >>                       !intel_engine_has_heartbeat(engine)))
> >>                  intel_context_set_banned(ce);
> >
> > Right, is the above code then needed with this patch - when ban is
> > immediately applied on the higher level?
> >
> >> The actual back end scheduler is saying "Is this a zombie context? Is
> >> the heartbeat disabled? Then ban it". No other scheduler backend is
> >> going to have knowledge of zombie context status or of the heartbeat
> >> status. Nor are they going to call back into the higher levels of the
> >> i915 driver to trigger a ban operation. Certainly a hardware
> >> implemented scheduler is not going to be looking at private i915
> >> driver information to decide whether to submit a context or whether
> >> to tell the OS to kill it off instead.
> >>
> >> For persistence to work with a hardware scheduler (or a non-Intel
> >> specific scheduler such as the DRM one), the handling of zombie
> >> contexts, banning, etc. *must* be done entirely in the front end. It
> >> cannot rely on any backend hacks. That means you can't rely on any
> >> fancy behaviour of pulses.
> >>
> >> If you want to ban a context then you must explicitly ban that
> >> context. If you want to ban it at some later point then you need to
> >> track it at the top level as a zombie and then explicitly ban that
> >> zombie at whatever later point.
> >
> > I am still trying to understand it all. If I go by the commit message:
> >
> > """
> > This is needed for GuC submission as a idle pulse doesn't
> > kick the context off the hardware where it then can check for a
> > heartbeat and ban the context.
> > """
> >
> > That did not explain things for me. The sentence does not appear to
> > make sense. Now, it seems "kick off the hardware" is meant as revoke
> > and not just preempt. Which is fine, but it perhaps needs to be
> > written more explicitly. The part about checking for a heartbeat
> > after an idle pulse does not compute for me, though. It is the
> > heartbeat which emits idle pulses, not the idle pulse which emits
> > heartbeats.
> I am in agreement that the commit message is confusing and does not
> explain either the problem or the solution.
>
>
> >
> >
> > But anyway, I can buy the handling at the front end story completely.
> > It makes sense. We just need to agree that a) it is okay to change the
> > ABI and b) remove the backend check from execlists if it is not needed
> > any longer.
> >
> > And if ABI change is okay then commit message needs to talk about it
> > loudly and clearly.
> I don't think we have a choice. The current ABI is not and cannot ever
> be compatible with any scheduler external to i915. It cannot be
> implemented with a hardware scheduler such as the GuC and it cannot be
> implemented with an external software scheduler such as the DRM one.

So generally on linux we implement helper libraries, which means
massive flexibility everywhere.

https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html

So it shouldn't be an insurmountable problem to make this happen even
with drm/scheduler, we can patch it up.

Whether that's justified is another question.

> My view is that any implementation involving knowledge of the heartbeat
> is fundamentally broken.
>
> According to Daniel Vetter, the DRM ABI on this subject is that an
> actively executing context should persist until the DRM file handle is
> closed. That seems like a much more plausible and simple ABI than one

DRM ABI is maybe a bit of an overkill statement. It's more "what other
drivers do", but it's generally a good idea to not ignore that :-)

> that says 'if the heartbeat is running then a context will persist
> forever, if the heartbeat is not running then it will be killed
> immediately, if the heart was running but then stops running then the
> context will be killed on the next context switch, ...'. And if I
> understand it correctly, the current ABI allows a badly written user app
> to cause a denial of service by leaving contexts permanently running an
> infinite loop on the hardware even after the app has been killed! How
> can that ever be considered a good idea?

We're not going to support changing all these settings at runtime.
There's just no point in trying to make that work race-free; it
either adds complexity to the code for no reason, or it adds overhead
to the code for no reason.

Yes I know existing customers and all that, but
- they can change this stuff, and when they change it while anything is
in-flight they get to keep the pieces. These options taint the kernel
for a reason (and if they don't, that should be fixed)
- quite a few of the heartbeat and compute support changes we merged a
while ago hang by design when trying to smash them into drm rules.
We're not going to fix that, and we should not use any existing such
assumptions as justification for code changes.

Wrt infinitely running: Right now nothing is allowed to run forever,
because hangcheck will step in and kill that job. Once we add compute
mode ctx flag we'll require killing on process exit to stop escape.

> Therefore, the context close implementation should be to add an active
> context to a zombie list. If a context is in zombie state and its last
> request completes then the context can be immediately killed at that
> point. Otherwise, on DRM handle close, we go through the zombie list and
> immediately kill all contexts.
>
> Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or
> pulses. Also no opportunity for rogue (or just badly written) user
> processes to leave zombie contexts running on the hardware forever and
> causing a denial of service attack. If the host process is killed, all
> of its GPU processes are also killed irrespective of what dodgy context
> flags they may or may not have set.

Uh, the intel_context state machine is already a bit too complex, and
the implementation lacks a bunch of barriers, at least from the
cursory look I've given it thus far.

So if we really need to make that more complex with more states then I
think someone needs to come up with an actual clean design, with
proper state transitions and all the barriers (or really, a design
which doesn't need barriers). This is going to be work.
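To make "clean design" a little more concrete: the minimum I'd expect is a single explicit state enum with an exhaustive transition table, something along these lines. Purely an illustrative sketch — none of these names exist in i915, and a real version would still need to pick a locking scheme:

```c
#include <stdbool.h>

/*
 * Illustrative sketch only, not real i915 code. The point is that
 * every legal state transition for a closed-but-active ("zombie")
 * context is enumerated in exactly one place, rather than being
 * implied by scattered flags and backend hacks.
 */
enum ctx_state {
	CTX_ACTIVE,	/* open, may have requests in flight */
	CTX_ZOMBIE,	/* closed while active, persisting for now */
	CTX_BANNED,	/* revoked, outstanding requests cancelled */
	CTX_DEAD,	/* fully retired, final state */
};

static bool ctx_transition_valid(enum ctx_state from, enum ctx_state to)
{
	switch (from) {
	case CTX_ACTIVE:	/* closed -> zombie, or banned outright */
		return to == CTX_ZOMBIE || to == CTX_BANNED;
	case CTX_ZOMBIE:	/* last request retired, or file closed */
		return to == CTX_BANNED || to == CTX_DEAD;
	case CTX_BANNED:
		return to == CTX_DEAD;
	case CTX_DEAD:
		return false;
	}
	return false;
}
```

With a table like this, "no barriers needed" falls out naturally if transitions are only ever made from one well-defined context (e.g. under the context lock, or from a single worker).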
-Daniel

>
> John.
>
>
> >
> > Or perhaps there is no ABI change? I am not really clear on how
> > setting the banned status propagates to the GuC backend. I mean, at
> > which point does i915 end up passing that info to the firmware?
> >
> > Regards,
> >
> > Tvrtko
> >
> >>
> >>
> >>>
> >>>> It's worse than this. If the engine in question is an individual
> >>>> physical engine then sending a pulse (with sufficiently high
> >>>> priority) will pre-empt the engine and kick the context off.
> >>>> However, the GuC
> >>>
> >>> Why is it different for physical vs virtual? Aren't both just
> >>> schedulable contexts with different engine masks as far as the GuC
> >>> is concerned? Oh, is it a matter of needing to send pulses to all
> >>> engines which comprise a virtual one?
> >> It isn't different. It is totally broken for both. It is potentially
> >> more broken for virtual engines because of the question of which
> >> engine to pulse. But as stated above, the pulse is pointless anyway
> >> so the which engine question doesn't even matter.
> >>
> >> John.
> >>
> >>
> >>>
> >>>> scheduler does not have hacks in it to check the state of the
> >>>> heartbeat or whether a context is actually a zombie or not. Thus,
> >>>> the context will get resubmitted to the hardware after the pulse
> >>>> completes and effectively nothing will have happened.
> >>>>
> >>>> I would assume that the DRM scheduler which we are meant to be
> >>>> switching to for execlist as well as GuC submission is also
> >>>> unlikely to have hacks for zombie contexts and tests for whether
> >>>> the i915 specific heartbeat has been disabled since the context
> >>>> became a zombie. So when that switch happens, this test will also
> >>>> fail in execlist mode as well as GuC mode.
> >>>>
> >>>> The choices I see here are to simply remove persistence completely
> >>>> (it is basically a bug that became UAPI because it wasn't caught
> >>>> soon enough!) or to implement it in a way that does not require
> >>>> hacks in the back end scheduler. Apparently, the DRM scheduler is
> >>>> expected to allow zombie contexts to persist until the DRM file
> >>>> handle is closed. So presumably we will have to go with option two.
> >>>>
> >>>> That means flagging a context as being a zombie when it is closed
> >>>> but still active. The driver would then add it to a zombie list
> >>>> owned by the DRM client object. When that client object is closed,
> >>>> i915 would go through the list and genuinely kill all the contexts.
> >>>> No back end scheduler hacks required and no intimate knowledge of
> >>>> the i915 heartbeat mechanism required either.
> >>>>
> >>>> John.
> >>>>
> >>>>
> >>>>>
> >>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc
> >>>>> as we
> >>>>> now need to call this function on execlists virtual engines too.
> >>>>>
> >>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >>>>> ---
> >>>>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
> >>>>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
> >>>>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21
> >>>>> ++-----------------
> >>>>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
> >>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
> >>>>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
> >>>>>   6 files changed, 26 insertions(+), 24 deletions(-)
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
> >>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct
> >>>>> i915_gem_engines *engines, bool ban)
> >>>>>        */
> >>>>>       for_each_gem_engine(ce, engines, it) {
> >>>>>           struct intel_engine_cs *engine;
> >>>>> +        bool local_ban = ban ||
> >>>>> !intel_engine_has_heartbeat(ce->engine);
> >>>
> >>> In any case (pending me understanding what's really going on there),
> >>> why would this check not be in kill_context, which currently does this:
> >>>
> >>>     bool ban = (!i915_gem_context_is_persistent(ctx) ||
> >>>             !ctx->i915->params.enable_hangcheck);
> >>> ...
> >>>         kill_engines(pos, ban);
> >>>
> >>> So whether to ban decision would be consolidated to one place.
> >>>
> >>> In fact, the decision on whether to allow persistence is tied to
> >>> enable_hangcheck, which also drives heartbeat emission. So perhaps
> >>> one part of the correct fix is to extend the above (kill_context)
> >>> ban criteria to include the heartbeat values anyway. Otherwise,
> >>> isn't it a simple miss that this check fails to account for
> >>> heartbeat disablement via sysfs?
> >>>
> >>> Regards,
> >>>
> >>> Tvrtko
> >>>
> >>>>> -        if (ban && intel_context_ban(ce, NULL))
> >>>>> +        if (local_ban && intel_context_ban(ce, NULL))
> >>>>>               continue;
> >>>>>           /*
> >>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct
> >>>>> i915_gem_engines *engines, bool ban)
> >>>>>           engine = active_engine(ce);
> >>>>>           /* First attempt to gracefully cancel the context */
> >>>>> -        if (engine && !__cancel_engine(engine) && ban)
> >>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
> >>>>>               /*
> >>>>>                * If we are unable to send a preemptive pulse to bump
> >>>>>                * the context from the GPU, we have to resort to a
> >>>>> full
> >>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>> index e54351a170e2..65f2eb2a78e4 100644
> >>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
> >>>>>       void (*reset)(struct intel_context *ce);
> >>>>>       void (*destroy)(struct kref *kref);
> >>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
> >>>>> +
> >>>>>       /* virtual engine/context interface */
> >>>>>       struct intel_context *(*create_virtual)(struct
> >>>>> intel_engine_cs **engine,
> >>>>>                           unsigned int count);
> >>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>> index c2a5640ae055..1b11a808acc4 100644
> >>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>> @@ -283,28 +283,11 @@ struct intel_context *
> >>>>>   intel_engine_create_virtual(struct intel_engine_cs **siblings,
> >>>>>                   unsigned int count);
> >>>>> -static inline bool
> >>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs
> >>>>> *engine)
> >>>>> -{
> >>>>> -    /*
> >>>>> -     * For non-GuC submission we expect the back-end to look at the
> >>>>> -     * heartbeat status of the actual physical engine that the work
> >>>>> -     * has been (or is being) scheduled on, so we should only reach
> >>>>> -     * here with GuC submission enabled.
> >>>>> -     */
> >>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
> >>>>> -
> >>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
> >>>>> -}
> >>>>> -
> >>>>>   static inline bool
> >>>>>   intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
> >>>>>   {
> >>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
> >>>>> -        return false;
> >>>>> -
> >>>>> -    if (intel_engine_is_virtual(engine))
> >>>>> -        return intel_virtual_engine_has_heartbeat(engine);
> >>>>> +    if (engine->cops->has_heartbeat)
> >>>>> +        return engine->cops->has_heartbeat(engine);
> >>>>>       else
> >>>>>           return READ_ONCE(engine->props.heartbeat_interval_ms);
> >>>>>   }
> >>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>> index de5f9c86b9a4..18005b5546b6 100644
> >>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs
> >>>>> *engine, unsigned int sibling)
> >>>>>       return ve->siblings[sibling];
> >>>>>   }
> >>>>> +static bool virtual_engine_has_heartbeat(const struct
> >>>>> intel_engine_cs *ve)
> >>>>> +{
> >>>>> +    struct intel_engine_cs *engine;
> >>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
> >>>>> +
> >>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
> >>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
> >>>>> +            return true;
> >>>>> +
> >>>>> +    return false;
> >>>>> +}
> >>>>> +
> >>>>>   static const struct intel_context_ops virtual_context_ops = {
> >>>>>       .flags = COPS_HAS_INFLIGHT,
> >>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops
> >>>>> virtual_context_ops = {
> >>>>>       .enter = virtual_context_enter,
> >>>>>       .exit = virtual_context_exit,
> >>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
> >>>>> +
> >>>>>       .destroy = virtual_context_destroy,
> >>>>>       .get_sibling = virtual_get_sibling,
> >>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
> >>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct
> >>>>> intel_context *ce)
> >>>>>       return lrc_alloc(ce, engine);
> >>>>>   }
> >>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
> >>>>> intel_engine_cs *ve);
> >>>>> +
> >>>>>   static const struct intel_context_ops virtual_guc_context_ops = {
> >>>>>       .alloc = guc_virtual_context_alloc,
> >>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops
> >>>>> virtual_guc_context_ops = {
> >>>>>       .enter = guc_virtual_context_enter,
> >>>>>       .exit = guc_virtual_context_exit,
> >>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
> >>>>> +
> >>>>>       .sched_disable = guc_context_sched_disable,
> >>>>>       .destroy = guc_context_destroy,
> >>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs
> >>>>> **siblings, unsigned int count)
> >>>>>       return ERR_PTR(err);
> >>>>>   }
> >>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
> >>>>> intel_engine_cs *ve)
> >>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
> >>>>> intel_engine_cs *ve)
> >>>>>   {
> >>>>>       struct intel_engine_cs *engine;
> >>>>>       intel_engine_mask_t tmp, mask = ve->mask;
> >>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
> >>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct
> >>>>> intel_engine_cs *engine,
> >>>>>                       struct i915_request *hung_rq,
> >>>>>                       struct drm_printer *m);
> >>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
> >>>>> intel_engine_cs *ve);
> >>>>> -
> >>>>>   int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
> >>>>>                      atomic_t *wait_var,
> >>>>>                      bool interruptible,
> >>>>
> >>>> _______________________________________________
> >>>> Intel-gfx mailing list
> >>>> Intel-gfx@lists.freedesktop.org
> >>>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> >>
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-06 19:46             ` Daniel Vetter
@ 2021-08-09 23:12               ` John Harrison
  2021-08-10  6:36                 ` Daniel Vetter
  0 siblings, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-08-09 23:12 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On 8/6/2021 12:46, Daniel Vetter wrote:
> Seen this fly by and figured I'd drop a few thoughts in here. At the
> likely cost of looking a bit out of whack :-)
>
> On Fri, Aug 6, 2021 at 8:01 PM John Harrison <john.c.harrison@intel.com> wrote:
>> On 8/2/2021 02:40, Tvrtko Ursulin wrote:
>>> On 30/07/2021 19:13, John Harrison wrote:
>>>> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
>>>>> On 30/07/2021 01:13, John Harrison wrote:
>>>>>> On 7/28/2021 17:34, Matthew Brost wrote:
>>>>>>> If an engine associated with a context does not have a heartbeat,
>>>>>>> ban it
>>>>>>> immediately. This is needed for GuC submission as a idle pulse
>>>>>>> doesn't
>>>>>>> kick the context off the hardware where it then can check for a
>>>>>>> heartbeat and ban the context.
>>>>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not
>>>>> preempt a running normal priority context?
>>>>>
>>>>> Why does it matter then whether or not heartbeats are enabled - when
>>>>> heartbeat just ends up sending the same engine pulse (eventually,
>>>>> with raising priority)?
>>>> The point is that the pulse is pointless. See the rest of my comments
>>>> below, specifically "the context will get resubmitted to the hardware
>>>> after the pulse completes". To re-iterate...
>>>>
>>>> Yes, it preempts the context. Yes, it does so whether heartbeats are
>>>> enabled or not. But so what? Who cares? You have preempted a context.
>>>> It is no longer running on the hardware. BUT IT IS STILL A VALID
>>>> CONTEXT.
>>> It is valid yes, and it even may be the current ABI so another
>>> question is whether it is okay to change that.
>>>
>>>> The backend scheduler will just resubmit it to the hardware as soon
>>>> as the pulse completes. The only reason this works at all is because
>>>> of the horrid hack in the execlist scheduler's back end
>>>> implementation (in __execlists_schedule_in):
>>>>           if (unlikely(intel_context_is_closed(ce) &&
>>>>                        !intel_engine_has_heartbeat(engine)))
>>>>                   intel_context_set_banned(ce);
>>> Right, is the above code then needed with this patch - when ban is
>>> immediately applied on the higher level?
>>>
>>>> The actual back end scheduler is saying "Is this a zombie context? Is
>>>> the heartbeat disabled? Then ban it". No other scheduler backend is
>>>> going to have knowledge of zombie context status or of the heartbeat
>>>> status. Nor are they going to call back into the higher levels of the
>>>> i915 driver to trigger a ban operation. Certainly a hardware
>>>> implemented scheduler is not going to be looking at private i915
>>>> driver information to decide whether to submit a context or whether
>>>> to tell the OS to kill it off instead.
>>>>
>>>> For persistence to work with a hardware scheduler (or a non-Intel
>>>> specific scheduler such as the DRM one), the handling of zombie
>>>> contexts, banning, etc. *must* be done entirely in the front end. It
>>>> cannot rely on any backend hacks. That means you can't rely on any
>>>> fancy behaviour of pulses.
>>>>
>>>> If you want to ban a context then you must explicitly ban that
>>>> context. If you want to ban it at some later point then you need to
>>>> track it at the top level as a zombie and then explicitly ban that
>>>> zombie at whatever later point.
>>> I am still trying to understand it all. If I go by the commit message:
>>>
>>> """
>>> This is needed for GuC submission as a idle pulse doesn't
>>> kick the context off the hardware where it then can check for a
>>> heartbeat and ban the context.
>>> """
>>>
>>> That did not explain things for me. The sentence does not appear to
>>> make sense. Now, it seems "kick off the hardware" is meant as revoke
>>> and not just preempt. Which is fine, but it perhaps needs to be
>>> written more explicitly. The part about checking for a heartbeat
>>> after an idle pulse does not compute for me, though. It is the
>>> heartbeat which emits idle pulses, not the idle pulse which emits
>>> heartbeats.
>> I am in agreement that the commit message is confusing and does not
>> explain either the problem or the solution.
>>
>>
>>>
>>> But anyway, I can buy the handling at the front end story completely.
>>> It makes sense. We just need to agree that a) it is okay to change the
>>> ABI and b) remove the backend check from execlists if it is not needed
>>> any longer.
>>>
>>> And if ABI change is okay then commit message needs to talk about it
>>> loudly and clearly.
>> I don't think we have a choice. The current ABI is not and cannot ever
>> be compatible with any scheduler external to i915. It cannot be
>> implemented with a hardware scheduler such as the GuC and it cannot be
>> implemented with an external software scheduler such as the DRM one.
> So generally on linux we implement helper libraries, which means
> massive flexibility everywhere.
>
> https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html
>
> So it shouldn't be an insurmountable problem to make this happen even
> with drm/scheduler, we can patch it up.
>
> Whether that's justified is another question.
Helper libraries won't work with a hardware scheduler.

>
>> My view is that any implementation involving knowledge of the heartbeat
>> is fundamentally broken.
>>
>> According to Daniel Vetter, the DRM ABI on this subject is that an
>> actively executing context should persist until the DRM file handle is
>> closed. That seems like a much more plausible and simple ABI than one
> DRM ABI is maybe a bit of an overkill statement. It's more "what other
> drivers do", but it's generally a good idea to not ignore that :-)
>
>> that says 'if the heartbeat is running then a context will persist
>> forever, if the heartbeat is not running then it will be killed
>> immediately, if the heart was running but then stops running then the
>> context will be killed on the next context switch, ...'. And if I
>> understand it correctly, the current ABI allows a badly written user app
>> to cause a denial of service by leaving contexts permanently running an
>> infinite loop on the hardware even after the app has been killed! How
>> can that ever be considered a good idea?
> We're not going to support changing all these settings at runtime.
> There's just no point in trying to make that work race-free; it
> either adds complexity to the code for no reason, or it adds overhead
> to the code for no reason.
>
> Yes I know existing customers and all that, but
> - they can change this stuff, and when they change it while anything is
> in-flight they get to keep the pieces. These options taint the kernel
> for a reason (and if they don't, that should be fixed)
> - quite a few of the heartbeat and compute support changes we merged a
> while ago hang by design when trying to smash them into drm rules.
> We're not going to fix that, and we should not use any existing such
> assumptions as justification for code changes.
>
> Wrt infinitely running: Right now nothing is allowed to run forever,
> because hangcheck will step in and kill that job. Once we add compute
> mode ctx flag we'll require killing on process exit to stop escape.
If the infinite loop is pre-emptible then the heartbeat won't kill it 
off. It will just run forever. Okay, it won't be a huge denial of 
service because other work can pre-empt and run. However, you are down 
one timeslice execution slot at that priority level. You have also 
permanently lost whatever memory is allocated and in use by that workload.


>
>> Therefore, the context close implementation should be to add an active
>> context to a zombie list. If a context is in zombie state and its last
>> request completes then the context can be immediately killed at that
>> point. Otherwise, on DRM handle close, we go through the zombie list and
>> immediately kill all contexts.
>>
>> Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or
>> pulses. Also no opportunity for rogue (or just badly written) user
>> processes to leave zombie contexts running on the hardware forever and
>> causing a denial of service attack. If the host process is killed, all
>> of its GPU processes are also killed irrespective of what dodgy context
>> flags they may or may not have set.
> Uh, the intel_context state machine is already a bit too complex, and
> the implementation lacks a bunch of barriers, at least from the
> cursory look I've given it thus far.
>
> So if we really need to make that more complex with more states then I
> think someone needs to come up with an actual clean design, with
> proper state transitions and all the barriers (or really, a design
> which doesn't need barriers). This is going to be work.
> -Daniel
Personally, I would rather just drop the whole persistence/zombie idea 
completely. If you close your context then you should expect that 
context to be destroyed and any outstanding workloads killed off. If you 
wanted the results then you should have waited for them.

If we do have to support some level of persistence then it doesn't seem 
like tracking closed contexts should be especially complex. Not sure why 
it would need special barriers either.
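To illustrate why I don't think the tracking is complex — here is a userspace-style sketch of the zombie bookkeeping described above. All names are hypothetical, nothing here is real i915 code, and the real thing would of course need a lock around the list:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical names throughout; this is a sketch, not i915 code. */
struct ctx {
	bool active;       /* still has requests in flight */
	bool killed;
	struct ctx *next;  /* link on the client's zombie list */
};

struct client {
	struct ctx *zombies;
};

/* Context close: an idle context dies now, an active one becomes a zombie. */
static void ctx_close(struct client *c, struct ctx *ctx)
{
	if (!ctx->active) {
		ctx->killed = true;
		return;
	}
	ctx->next = c->zombies;
	c->zombies = ctx;
}

/* Last request retired on a zombie context: kill it immediately. */
static void ctx_last_request_done(struct ctx *ctx)
{
	ctx->active = false;
	ctx->killed = true;
}

/* DRM file handle close: no persistence beyond this point. */
static void client_close(struct client *c)
{
	for (struct ctx *ctx = c->zombies; ctx; ctx = ctx->next)
		ctx->killed = true;
	c->zombies = NULL;
}
```

No backend scheduler involvement, no heartbeat knowledge, and the only shared state is the list itself.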

John.

>> John.
>>
>>
>>> Or perhaps there is no ABI change? I am not really clear on how
>>> setting the banned status propagates to the GuC backend. I mean, at
>>> which point does i915 end up passing that info to the firmware?
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>
>>>>>> It's worse than this. If the engine in question is an individual
>>>>>> physical engine then sending a pulse (with sufficiently high
>>>>>> priority) will pre-empt the engine and kick the context off.
>>>>>> However, the GuC
>>>>> Why is it different for physical vs virtual? Aren't both just
>>>>> schedulable contexts with different engine masks as far as the GuC
>>>>> is concerned? Oh, is it a matter of needing to send pulses to all
>>>>> engines which comprise a virtual one?
>>>> It isn't different. It is totally broken for both. It is potentially
>>>> more broken for virtual engines because of the question of which
>>>> engine to pulse. But as stated above, the pulse is pointless anyway
>>>> so the which engine question doesn't even matter.
>>>>
>>>> John.
>>>>
>>>>
>>>>>> scheduler does not have hacks in it to check the state of the
>>>>>> heartbeat or whether a context is actually a zombie or not. Thus,
>>>>>> the context will get resubmitted to the hardware after the pulse
>>>>>> completes and effectively nothing will have happened.
>>>>>>
>>>>>> I would assume that the DRM scheduler which we are meant to be
>>>>>> switching to for execlist as well as GuC submission is also
>>>>>> unlikely to have hacks for zombie contexts and tests for whether
>>>>>> the i915 specific heartbeat has been disabled since the context
>>>>>> became a zombie. So when that switch happens, this test will also
>>>>>> fail in execlist mode as well as GuC mode.
>>>>>>
>>>>>> The choices I see here are to simply remove persistence completely
>>>>>> (it is basically a bug that became UAPI because it wasn't caught
>>>>>> soon enough!) or to implement it in a way that does not require
>>>>>> hacks in the back end scheduler. Apparently, the DRM scheduler is
>>>>>> expected to allow zombie contexts to persist until the DRM file
>>>>>> handle is closed. So presumably we will have to go with option two.
>>>>>>
>>>>>> That means flagging a context as being a zombie when it is closed
>>>>>> but still active. The driver would then add it to a zombie list
>>>>>> owned by the DRM client object. When that client object is closed,
>>>>>> i915 would go through the list and genuinely kill all the contexts.
>>>>>> No back end scheduler hacks required and no intimate knowledge of
>>>>>> the i915 heartbeat mechanism required either.
>>>>>>
>>>>>> John.
>>>>>>
>>>>>>
>>>>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc
>>>>>>> as we
>>>>>>> now need to call this function on execlists virtual engines too.
>>>>>>>
>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>>> ---
>>>>>>>    drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>>>>>>    drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>>>>>>    drivers/gpu/drm/i915/gt/intel_engine.h        | 21
>>>>>>> ++-----------------
>>>>>>>    .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>>>>>>    .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>>>>>>    .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>>>>>>    6 files changed, 26 insertions(+), 24 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct
>>>>>>> i915_gem_engines *engines, bool ban)
>>>>>>>         */
>>>>>>>        for_each_gem_engine(ce, engines, it) {
>>>>>>>            struct intel_engine_cs *engine;
>>>>>>> +        bool local_ban = ban ||
>>>>>>> !intel_engine_has_heartbeat(ce->engine);
>>>>> In any case (pending me understanding what's really going on there),
>>>>> why would this check not be in kill_context, which currently does this:
>>>>>
>>>>>      bool ban = (!i915_gem_context_is_persistent(ctx) ||
>>>>>              !ctx->i915->params.enable_hangcheck);
>>>>> ...
>>>>>          kill_engines(pos, ban);
>>>>>
>>>>> So whether to ban decision would be consolidated to one place.
>>>>>
>>>>> In fact, the decision on whether to allow persistence is tied to
>>>>> enable_hangcheck, which also drives heartbeat emission. So perhaps
>>>>> one part of the correct fix is to extend the above (kill_context)
>>>>> ban criteria to include the heartbeat values anyway. Otherwise,
>>>>> isn't it a simple miss that this check fails to account for
>>>>> heartbeat disablement via sysfs?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>>> -        if (ban && intel_context_ban(ce, NULL))
>>>>>>> +        if (local_ban && intel_context_ban(ce, NULL))
>>>>>>>                continue;
>>>>>>>            /*
>>>>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct
>>>>>>> i915_gem_engines *engines, bool ban)
>>>>>>>            engine = active_engine(ce);
>>>>>>>            /* First attempt to gracefully cancel the context */
>>>>>>> -        if (engine && !__cancel_engine(engine) && ban)
>>>>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>>>>>>                /*
>>>>>>>                 * If we are unable to send a preemptive pulse to bump
>>>>>>>                 * the context from the GPU, we have to resort to a
>>>>>>> full
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>> index e54351a170e2..65f2eb2a78e4 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>>>>>>        void (*reset)(struct intel_context *ce);
>>>>>>>        void (*destroy)(struct kref *kref);
>>>>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>>>>>>> +
>>>>>>>        /* virtual engine/context interface */
>>>>>>>        struct intel_context *(*create_virtual)(struct
>>>>>>> intel_engine_cs **engine,
>>>>>>>                            unsigned int count);
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>> index c2a5640ae055..1b11a808acc4 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>> @@ -283,28 +283,11 @@ struct intel_context *
>>>>>>>    intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>>>>>>                    unsigned int count);
>>>>>>> -static inline bool
>>>>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs
>>>>>>> *engine)
>>>>>>> -{
>>>>>>> -    /*
>>>>>>> -     * For non-GuC submission we expect the back-end to look at the
>>>>>>> -     * heartbeat status of the actual physical engine that the work
>>>>>>> -     * has been (or is being) scheduled on, so we should only reach
>>>>>>> -     * here with GuC submission enabled.
>>>>>>> -     */
>>>>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>>>>>>> -
>>>>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>>>>>>> -}
>>>>>>> -
>>>>>>>    static inline bool
>>>>>>>    intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>>>>>>    {
>>>>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>>>>>>> -        return false;
>>>>>>> -
>>>>>>> -    if (intel_engine_is_virtual(engine))
>>>>>>> -        return intel_virtual_engine_has_heartbeat(engine);
>>>>>>> +    if (engine->cops->has_heartbeat)
>>>>>>> +        return engine->cops->has_heartbeat(engine);
>>>>>>>        else
>>>>>>>            return READ_ONCE(engine->props.heartbeat_interval_ms);
>>>>>>>    }
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>> index de5f9c86b9a4..18005b5546b6 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs
>>>>>>> *engine, unsigned int sibling)
>>>>>>>        return ve->siblings[sibling];
>>>>>>>    }
>>>>>>> +static bool virtual_engine_has_heartbeat(const struct
>>>>>>> intel_engine_cs *ve)
>>>>>>> +{
>>>>>>> +    struct intel_engine_cs *engine;
>>>>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
>>>>>>> +
>>>>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>>>>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>>>>>>> +            return true;
>>>>>>> +
>>>>>>> +    return false;
>>>>>>> +}
>>>>>>> +
>>>>>>>    static const struct intel_context_ops virtual_context_ops = {
>>>>>>>        .flags = COPS_HAS_INFLIGHT,
>>>>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops
>>>>>>> virtual_context_ops = {
>>>>>>>        .enter = virtual_context_enter,
>>>>>>>        .exit = virtual_context_exit,
>>>>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>>>>>>> +
>>>>>>>        .destroy = virtual_context_destroy,
>>>>>>>        .get_sibling = virtual_get_sibling,
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct
>>>>>>> intel_context *ce)
>>>>>>>        return lrc_alloc(ce, engine);
>>>>>>>    }
>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
>>>>>>> intel_engine_cs *ve);
>>>>>>> +
>>>>>>>    static const struct intel_context_ops virtual_guc_context_ops = {
>>>>>>>        .alloc = guc_virtual_context_alloc,
>>>>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops
>>>>>>> virtual_guc_context_ops = {
>>>>>>>        .enter = guc_virtual_context_enter,
>>>>>>>        .exit = guc_virtual_context_exit,
>>>>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>>>>>>> +
>>>>>>>        .sched_disable = guc_context_sched_disable,
>>>>>>>        .destroy = guc_context_destroy,
>>>>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs
>>>>>>> **siblings, unsigned int count)
>>>>>>>        return ERR_PTR(err);
>>>>>>>    }
>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
>>>>>>> intel_engine_cs *ve)
>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
>>>>>>> intel_engine_cs *ve)
>>>>>>>    {
>>>>>>>        struct intel_engine_cs *engine;
>>>>>>>        intel_engine_mask_t tmp, mask = ve->mask;
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct
>>>>>>> intel_engine_cs *engine,
>>>>>>>                        struct i915_request *hung_rq,
>>>>>>>                        struct drm_printer *m);
>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
>>>>>>> intel_engine_cs *ve);
>>>>>>> -
>>>>>>>    int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>>>>>>                       atomic_t *wait_var,
>>>>>>>                       bool interruptible,
>>>>>> _______________________________________________
>>>>>> Intel-gfx mailing list
>>>>>> Intel-gfx@lists.freedesktop.org
>>>>>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-09 23:12               ` John Harrison
@ 2021-08-10  6:36                 ` Daniel Vetter
  2021-08-18  0:28                   ` John Harrison
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Vetter @ 2021-08-10  6:36 UTC (permalink / raw)
  To: John Harrison
  Cc: Daniel Vetter, Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On Mon, Aug 09, 2021 at 04:12:52PM -0700, John Harrison wrote:
> On 8/6/2021 12:46, Daniel Vetter wrote:
> > Seen this fly by and figured I'd drop a few thoughts in here. At the
> > likely cost of looking a bit out of whack :-)
> > 
> > On Fri, Aug 6, 2021 at 8:01 PM John Harrison <john.c.harrison@intel.com> wrote:
> > > On 8/2/2021 02:40, Tvrtko Ursulin wrote:
> > > > On 30/07/2021 19:13, John Harrison wrote:
> > > > > On 7/30/2021 02:49, Tvrtko Ursulin wrote:
> > > > > > On 30/07/2021 01:13, John Harrison wrote:
> > > > > > > On 7/28/2021 17:34, Matthew Brost wrote:
> > > > > > > > If an engine associated with a context does not have a heartbeat,
> > > > > > > > ban it
> > > > > > > > immediately. This is needed for GuC submission as an idle pulse
> > > > > > > > doesn't
> > > > > > > > kick the context off the hardware where it then can check for a
> > > > > > > > heartbeat and ban the context.
> > > > > > Pulse, that is a request with I915_PRIORITY_BARRIER, does not
> > > > > > preempt a running normal priority context?
> > > > > > 
> > > > > > Why does it matter then whether or not heartbeats are enabled - when
> > > > > > heartbeat just ends up sending the same engine pulse (eventually,
> > > > > > with raising priority)?
> > > > > The point is that the pulse is pointless. See the rest of my comments
> > > > > below, specifically "the context will get resubmitted to the hardware
> > > > > after the pulse completes". To re-iterate...
> > > > > 
> > > > > Yes, it preempts the context. Yes, it does so whether heartbeats are
> > > > > enabled or not. But so what? Who cares? You have preempted a context.
> > > > > It is no longer running on the hardware. BUT IT IS STILL A VALID
> > > > > CONTEXT.
> > > > It is valid yes, and it even may be the current ABI so another
> > > > question is whether it is okay to change that.
> > > > 
> > > > > The backend scheduler will just resubmit it to the hardware as soon
> > > > > as the pulse completes. The only reason this works at all is because
> > > > > of the horrid hack in the execlist scheduler's back end
> > > > > implementation (in __execlists_schedule_in):
> > > > >           if (unlikely(intel_context_is_closed(ce) &&
> > > > >                        !intel_engine_has_heartbeat(engine)))
> > > > >                   intel_context_set_banned(ce);
> > > > Right, is the above code then needed with this patch - when ban is
> > > > immediately applied on the higher level?
> > > > 
> > > > > The actual back end scheduler is saying "Is this a zombie context? Is
> > > > > the heartbeat disabled? Then ban it". No other scheduler backend is
> > > > > going to have knowledge of zombie context status or of the heartbeat
> > > > > status. Nor are they going to call back into the higher levels of the
> > > > > i915 driver to trigger a ban operation. Certainly a hardware
> > > > > implemented scheduler is not going to be looking at private i915
> > > > > driver information to decide whether to submit a context or whether
> > > > > to tell the OS to kill it off instead.
> > > > > 
> > > > > For persistence to work with a hardware scheduler (or a non-Intel
> > > > > specific scheduler such as the DRM one), the handling of zombie
> > > > > contexts, banning, etc. *must* be done entirely in the front end. It
> > > > > cannot rely on any backend hacks. That means you can't rely on any
> > > > > fancy behaviour of pulses.
> > > > > 
> > > > > If you want to ban a context then you must explicitly ban that
> > > > > context. If you want to ban it at some later point then you need to
> > > > > track it at the top level as a zombie and then explicitly ban that
> > > > > zombie at whatever later point.
> > > > I am still trying to understand it all. If I go by the commit message:
> > > > 
> > > > """
> > > > This is needed for GuC submission as an idle pulse doesn't
> > > > kick the context off the hardware where it then can check for a
> > > > heartbeat and ban the context.
> > > > """
> > > > 
> > > > That did not explain things for me. The sentence does not appear to make
> > > > sense. Now, it seems "kick off the hardware" is meant as revoke and
> > > > not just preempt. Which is fine, perhaps just needs to be written more
> > > > explicitly. But the part of checking for heartbeat after idle pulse
> > > > does not compute for me. It is the heartbeat which emits idle pulses,
> > > > not idle pulse emitting heartbeats.
> > > I am in agreement that the commit message is confusing and does not
> > > explain either the problem or the solution.
> > > 
> > > 
> > > > 
> > > > But anyway, I can buy the handling at the front end story completely.
> > > > It makes sense. We just need to agree that a) it is okay to change the
> > > > ABI and b) remove the backend check from execlists if it is not needed
> > > > any longer.
> > > > 
> > > > And if ABI change is okay then commit message needs to talk about it
> > > > loudly and clearly.
> > > I don't think we have a choice. The current ABI is not and cannot ever
> > > be compatible with any scheduler external to i915. It cannot be
> > > implemented with a hardware scheduler such as the GuC and it cannot be
> > > implemented with an external software scheduler such as the DRM one.
> > So generally on linux we implement helper libraries, which means
> > massive flexibility everywhere.
> > 
> > https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html
> > 
> > So it shouldn't be an insurmountable problem to make this happen even
> > with drm/scheduler, we can patch it up.
> > 
> > Whether that's justified is another question.
> Helper libraries won't work with a hardware scheduler.

Hm I guess I misunderstood then what exactly the hold-up is. This entire
discussion feels at least a bit like "heartbeat is unchangeable and guc
must fit", which is pretty much the midlayer mistake. We need to figure
out an implementation of the uapi's goals that works with GuC, instead
of assuming that the current heartbeat is the only possible way to
achieve them.

Or I'm just very confused about what the problem is.

> > > My view is that any implementation involving knowledge of the heartbeat
> > > is fundamentally broken.
> > > 
> > > According to Daniel Vetter, the DRM ABI on this subject is that an
> > > actively executing context should persist until the DRM file handle is
> > > closed. That seems like a much more plausible and simple ABI than one
> > DRM ABI is maybe a bit of an overkill statement. It's more "what other
> > drivers do", but it's generally a good idea to not ignore that :-)
> > 
> > > that says 'if the heartbeat is running then a context will persist
> > > forever, if the heartbeat is not running then it will be killed
> > > immediately, if the heart was running but then stops running then the
> > > context will be killed on the next context switch, ...'. And if I
> > > understand it correctly, the current ABI allows a badly written user app
> > > to cause a denial of service by leaving contexts permanently running an
> > > infinite loop on the hardware even after the app has been killed! How
> > > can that ever be considered a good idea?
> > We're not going to support changing all these settings at runtime.
> > There's just no point in trying to make that work race-free, it
> > either adds complexity to the code for no reason, or it adds overhead
> > to the code for no reason.
> > 
> > Yes I know existing customers and all that, but
> > - they can change this stuff, and when they change it while anything is
> > in-flight they get to keep the pieces. These options taint the kernel
> > for a reason (and if they don't, that should be fixed)
> > - quite a few of the heartbeat and compute support changes we've merged
> > a while ago hang by design when trying to smash them into drm rules.
> > We're not going to fix that, and we should not use any existing such
> > assumptions as justification for code changes.
> > 
> > Wrt infinitely running: Right now nothing is allowed to run forever,
> > because hangcheck will step in and kill that job. Once we add compute
> > mode ctx flag we'll require killing on process exit to stop escape.
> If the infinite loop is pre-emptible then the heartbeat won't kill it off.
> It will just run forever. Okay, it won't be a huge denial of service because
> other work can pre-empt and run. However, you are down one timeslice
> execution slot at that priority level. You have also permanently lost
> whatever memory is allocated and in use by that workload.

Ok I think I'm definitely lost.

Right now, in upstream, you can't run forever without regularly calling
execbuf to stuff new work in. So it will die out, it won't be persistent
for very long.

> > > Therefore, the context close implementation should be to add an active
> > > context to a zombie list. If a context is in zombie state and its last
> > > request completes then the context can be immediately killed at that
> > > point. Otherwise, on DRM handle close, we go through the zombie list and
> > > immediately kill all contexts.
> > > 
> > > Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or
> > > pulses. Also no opportunity for rogue (or just badly written) user
> > > processes to leave zombie contexts running on the hardware forever and
> > > causing a denial of service attack. If the host process is killed, all
> > > of its GPU processes are also killed irrespective of what dodgy context
> > > flags they may or may not have set.
> > Uh, the intel_context state machine is already a bit too complex, and
> > the implementation lacks a bunch of barriers at least from the cursor
> > look I've given it thus far.
> > 
> > So if we really need to make that more complex with more states then I
> > think someone needs to come up with an actual clean design, with
> > proper state transitions and all the barriers (or really, a design
> > which doesn't need barriers). This is going to be work.
> > -Daniel
> Personally, I would rather just drop the whole persistence/zombie idea
> completely. If you close your context then you should expect that context to
> be destroyed and any outstanding workloads killed off. If you wanted the
> results then you should have waited for them.
> 
> If we do have to support some level of persistence then it doesn't seem like
> tracking closed contexts should be especially complex. Not sure why it would
> need special barriers either.

Frankly I think I'm lost, and I think the confusion (for me at least)
starts with what the current uapi is.

Can someone please document that, with kerneldoc in the uapi header
ideally? Once we have that defined I think we can have an actual
discussion about what exactly this should look like with GuC (and also
eventually with drm/scheduler), and which parts of the uapi are just
artifacts of the current implementation, and which parts actually matter.

Otherwise I think we're just spinning wheels a bit much here.
-Daniel
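The kind of kerneldoc Daniel is asking for might look something like the sketch below. The wording is purely illustrative and does not claim to describe the actual uapi -- pinning that down is exactly what the thread is trying to do:

```c
/* Hypothetical sketch only -- not real i915_drm.h text. */

/**
 * DOC: context persistence
 *
 * By default a context is persistent: requests already submitted when
 * the context is closed are allowed to complete. Closing the DRM file
 * handle revokes all of the client's contexts, so nothing outlives the
 * owning process.
 *
 * Persistence can be disabled per context via
 * I915_CONTEXT_PARAM_PERSISTENCE, in which case closing the context
 * immediately cancels its outstanding work.
 *
 * Interactions with module parameters (enable_hangcheck) and the sysfs
 * heartbeat controls are currently implementation-defined and under
 * discussion.
 */
```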

> 
> John.
> 
> > > John.
> > > 
> > > 
> > > > Or perhaps there is no ABI change? I am not really clear how does
> > > > setting banned status propagate to the GuC backend. I mean at which
> > > > point does i915 ends up passing that info to the firmware?
> > > > 
> > > > Regards,
> > > > 
> > > > Tvrtko
> > > > 
> > > > > 
> > > > > > > It's worse than this. If the engine in question is an individual
> > > > > > > physical engine then sending a pulse (with sufficiently high
> > > > > > > priority) will pre-empt the engine and kick the context off.
> > > > > > > However, the GuC
> > > > > > Why it is different for physical vs virtual, aren't both just
> > > > > > schedulable contexts with different engine masks for what GuC is
> > > > > > concerned? Oh, is it a matter of needing to send pulses to all
> > > > > > engines which comprise a virtual one?
> > > > > It isn't different. It is totally broken for both. It is potentially
> > > > > more broken for virtual engines because of the question of which
> > > > > engine to pulse. But as stated above, the pulse is pointless anyway
> > > > > so the which engine question doesn't even matter.
> > > > > 
> > > > > John.
> > > > > 
> > > > > 
> > > > > > > scheduler does not have hacks in it to check the state of the
> > > > > > > heartbeat or whether a context is actually a zombie or not. Thus,
> > > > > > > the context will get resubmitted to the hardware after the pulse
> > > > > > > completes and effectively nothing will have happened.
> > > > > > > 
> > > > > > > I would assume that the DRM scheduler which we are meant to be
> > > > > > > switching to for execlist as well as GuC submission is also
> > > > > > > unlikely to have hacks for zombie contexts and tests for whether
> > > > > > > the i915 specific heartbeat has been disabled since the context
> > > > > > > became a zombie. So when that switch happens, this test will also
> > > > > > > fail in execlist mode as well as GuC mode.
> > > > > > > 
> > > > > > > The choices I see here are to simply remove persistence completely
> > > > > > > (it is basically a bug that became UAPI because it wasn't caught
> > > > > > > soon enough!) or to implement it in a way that does not require
> > > > > > > hacks in the back end scheduler. Apparently, the DRM scheduler is
> > > > > > > expected to allow zombie contexts to persist until the DRM file
> > > > > > > handle is closed. So presumably we will have to go with option two.
> > > > > > > 
> > > > > > > That means flagging a context as being a zombie when it is closed
> > > > > > > but still active. The driver would then add it to a zombie list
> > > > > > > owned by the DRM client object. When that client object is closed,
> > > > > > > i915 would go through the list and genuinely kill all the contexts.
> > > > > > > No back end scheduler hacks required and no intimate knowledge of
> > > > > > > the i915 heartbeat mechanism required either.
> > > > > > > 
> > > > > > > John.
> > > > > > > 
> > > > > > > 
> > > > > > > > This patch also updates intel_engine_has_heartbeat to be a vfunc
> > > > > > > > as we
> > > > > > > > now need to call this function on execlists virtual engines too.
> > > > > > > > 
> > > > > > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > > > > > ---
> > > > > > > >    drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
> > > > > > > >    drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
> > > > > > > >    drivers/gpu/drm/i915/gt/intel_engine.h        | 21
> > > > > > > > ++-----------------
> > > > > > > >    .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
> > > > > > > >    .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
> > > > > > > >    .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
> > > > > > > >    6 files changed, 26 insertions(+), 24 deletions(-)
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission
  2021-07-29  0:33 [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission Matthew Brost
                   ` (2 preceding siblings ...)
  2021-07-29  7:30 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
@ 2021-08-10  6:38 ` Daniel Vetter
  2021-08-18  0:08   ` John Harrison
  3 siblings, 1 reply; 18+ messages in thread
From: Daniel Vetter @ 2021-08-10  6:38 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-gfx, dri-devel

On Wed, Jul 28, 2021 at 05:33:59PM -0700, Matthew Brost wrote:
> Should fix below failures with GuC submission for the following tests:
> gem_exec_balancer --r noheartbeat
> gem_ctx_persistence --r heartbeat-close
> 
> Not going to fix:
> gem_ctx_persistence --r heartbeat-many
> gem_ctx_persistence --r heartbeat-stop

After looking at that big thread and being very confused: Are we fixing an
actual use-case here, or is this another case of blindly following IGT
tests just because they exist?

I'm leaning towards stalling on this: first document what exactly the
actual intention behind all this is, and then fix up the tests
to match (if needed). And only then fix up GuC to match whatever we
actually want to do.
-Daniel

> 
> As the above tests change the heartbeat value to 0 (off) after the
> context is closed and we have no way to detect that with GuC submission
> unless we keep a list of closed but running contexts which seems like
> overkill for a non-real world use case. We likely should just skip these
> tests with GuC submission.
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> 
> Matthew Brost (1):
>   drm/i915: Check if engine has heartbeat when closing a context
> 
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>  drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>  drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
>  .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>  6 files changed, 26 insertions(+), 24 deletions(-)
> 
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission
  2021-08-10  6:38 ` [Intel-gfx] [PATCH 0/1] " Daniel Vetter
@ 2021-08-18  0:08   ` John Harrison
  2021-08-18  9:49     ` Daniel Vetter
  0 siblings, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-08-18  0:08 UTC (permalink / raw)
  To: Daniel Vetter, Matthew Brost; +Cc: intel-gfx, dri-devel

On 8/9/2021 23:38, Daniel Vetter wrote:
> On Wed, Jul 28, 2021 at 05:33:59PM -0700, Matthew Brost wrote:
>> Should fix below failures with GuC submission for the following tests:
>> gem_exec_balancer --r noheartbeat
>> gem_ctx_persistence --r heartbeat-close
>>
>> Not going to fix:
>> gem_ctx_persistence --r heartbeat-many
>> gem_ctx_persistence --r heartbeat-stop
> After looking at that big thread and being very confused: Are we fixing an
> actual use-case here, or is this another case of blindly following IGT
> tests just because they exist?
My understanding is that this is established behaviour and therefore 
must be maintained because the UAPI (whether documented or not) is 
inviolate. Therefore IGTs have been written to validate this past 
behaviour and now we must conform to the IGTs in order to keep the 
existing behaviour unchanged.

Whether anybody actually makes use of this behaviour or not is another 
matter entirely. I am certainly not aware of any vital use case. Others 
might have more recollection. I do know that we tell the UMD teams to 
explicitly disable persistence on every context they create.

>
> I'm leaning towards stalling on this: first document what exactly the
> actual intention behind all this is, and then fix up the tests
I'm not sure there ever was an 'intention'. The rumour I heard way back 
when was that persistence was a bug on earlier platforms (or possibly we 
didn't have hardware support for doing engine resets?). But once the bug 
was realised (or the hardware support was added), it was too late to 
change the default behaviour because existing kernel behaviour must 
never change on pain of painful things. Thus the persistence flag was 
added so that people could opt out of the broken, leaky behaviour and 
have their contexts clean up properly.

Feel free to document what you believe the behaviour should be from a 
software architect's point of view. Any documentation I produce is 
basically going to be created by reverse engineering the existing code. 
That is the only 'spec' that I am aware of and as I keep saying, I 
personally think it is a totally broken concept that should just be removed.

> to match (if needed). And only then fix up GuC to match whatever we
> actually want to do.
I also still maintain there is no 'fix up the GuC'. This is not 
behaviour we should be adding to a hardware scheduler. It is behaviour 
that should be implemented at the front end, not the back end. If we 
absolutely need to do this then we need to do it solely at the context 
management level, not at the back end submission level. And the solution 
should work by default on any submission back end.
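The front-end-only approach being argued for here can be sketched as a small
userspace model (all struct and function names below are invented for
illustration, not actual i915 code): at context-close time the front end
decides, from the persistence flag and heartbeat state alone, whether to ban,
so no submission backend needs special-case knowledge.

```c
#include <stdbool.h>

/* Hypothetical model of a front-end-only ban decision. */
struct engine_model {
	unsigned int heartbeat_interval_ms;	/* 0 => heartbeat disabled */
};

struct context_model {
	bool persistent;	/* userspace asked for persistence */
	bool banned;
	const struct engine_model *engine;
};

static bool engine_model_has_heartbeat(const struct engine_model *e)
{
	return e->heartbeat_interval_ms != 0;
}

/*
 * Decide entirely in the front end, at context-close time, whether the
 * context must be banned: either persistence was not requested, or there
 * is no heartbeat left that could ever clean it up later.
 */
static void context_model_close(struct context_model *ce)
{
	if (!ce->persistent || !engine_model_has_heartbeat(ce->engine))
		ce->banned = true;
}
```

Because the decision is made once, at close, it works the same whether the
backend is execlists, GuC, or a future DRM scheduler.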

John.


> -Daniel
>
>> As the above tests change the heartbeat value to 0 (off) after the
>> context is closed and we have no way to detect that with GuC submission
>> unless we keep a list of closed but running contexts which seems like
>> overkill for a non-real world use case. We likely should just skip these
>> tests with GuC submission.
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>
>> Matthew Brost (1):
>>    drm/i915: Check if engine has heartbeat when closing a context
>>
>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
>>   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>   6 files changed, 26 insertions(+), 24 deletions(-)
>>
>> -- 
>> 2.28.0
>>



* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-10  6:36                 ` Daniel Vetter
@ 2021-08-18  0:28                   ` John Harrison
  2021-08-18  9:26                     ` Daniel Vetter
  0 siblings, 1 reply; 18+ messages in thread
From: John Harrison @ 2021-08-18  0:28 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On 8/9/2021 23:36, Daniel Vetter wrote:
> On Mon, Aug 09, 2021 at 04:12:52PM -0700, John Harrison wrote:
>> On 8/6/2021 12:46, Daniel Vetter wrote:
>>> Seen this fly by and figured I'd drop a few thoughts in here. At the
>>> likely cost of looking a bit out of whack :-)
>>>
>>> On Fri, Aug 6, 2021 at 8:01 PM John Harrison <john.c.harrison@intel.com> wrote:
>>>> On 8/2/2021 02:40, Tvrtko Ursulin wrote:
>>>>> On 30/07/2021 19:13, John Harrison wrote:
>>>>>> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
>>>>>>> On 30/07/2021 01:13, John Harrison wrote:
>>>>>>>> On 7/28/2021 17:34, Matthew Brost wrote:
>>>>>>>>> If an engine associated with a context does not have a heartbeat,
>>>>>>>>> ban it
>>>>>>>>> immediately. This is needed for GuC submission as a idle pulse
>>>>>>>>> doesn't
>>>>>>>>> kick the context off the hardware where it then can check for a
>>>>>>>>> heartbeat and ban the context.
>>>>>>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not
>>>>>>> preempt a running normal priority context?
>>>>>>>
>>>>>>> Why does it matter then whether or not heartbeats are enabled - when
>>>>>>> heartbeat just ends up sending the same engine pulse (eventually,
>>>>>>> with raising priority)?
>>>>>> The point is that the pulse is pointless. See the rest of my comments
>>>>>> below, specifically "the context will get resubmitted to the hardware
>>>>>> after the pulse completes". To re-iterate...
>>>>>>
>>>>>> Yes, it preempts the context. Yes, it does so whether heartbeats are
>>>>>> enabled or not. But so what? Who cares? You have preempted a context.
>>>>>> It is no longer running on the hardware. BUT IT IS STILL A VALID
>>>>>> CONTEXT.
>>>>> It is valid yes, and it even may be the current ABI so another
>>>>> question is whether it is okay to change that.
>>>>>
>>>>>> The backend scheduler will just resubmit it to the hardware as soon
>>>>>> as the pulse completes. The only reason this works at all is because
>>>>>> of the horrid hack in the execlist scheduler's back end
>>>>>> implementation (in __execlists_schedule_in):
>>>>>>            if (unlikely(intel_context_is_closed(ce) &&
>>>>>>                         !intel_engine_has_heartbeat(engine)))
>>>>>>                    intel_context_set_banned(ce);
>>>>> Right, is the above code then needed with this patch - when ban is
>>>>> immediately applied on the higher level?
>>>>>
>>>>>> The actual back end scheduler is saying "Is this a zombie context? Is
>>>>>> the heartbeat disabled? Then ban it". No other scheduler backend is
>>>>>> going to have knowledge of zombie context status or of the heartbeat
>>>>>> status. Nor are they going to call back into the higher levels of the
>>>>>> i915 driver to trigger a ban operation. Certainly a hardware
>>>>>> implemented scheduler is not going to be looking at private i915
>>>>>> driver information to decide whether to submit a context or whether
>>>>>> to tell the OS to kill it off instead.
>>>>>>
>>>>>> For persistence to work with a hardware scheduler (or a non-Intel
>>>>>> specific scheduler such as the DRM one), the handling of zombie
>>>>>> contexts, banning, etc. *must* be done entirely in the front end. It
>>>>>> cannot rely on any backend hacks. That means you can't rely on any
>>>>>> fancy behaviour of pulses.
>>>>>>
>>>>>> If you want to ban a context then you must explicitly ban that
>>>>>> context. If you want to ban it at some later point then you need to
>>>>>> track it at the top level as a zombie and then explicitly ban that
>>>>>> zombie at whatever later point.
>>>>> I am still trying to understand it all. If I go by the commit message:
>>>>>
>>>>> """
>>>>> This is needed for GuC submission as a idle pulse doesn't
>>>>> kick the context off the hardware where it then can check for a
>>>>> heartbeat and ban the context.
>>>>> """
>>>>>
>>>>> That did not explain things for me. Sentence does not appear to make
>>>>> sense. Now, it seems "kick off the hardware" is meant as revoke and
>>>>> not just preempt. Which is fine, perhaps just needs to be written more
>>>>> explicitly. But the part of checking for heartbeat after idle pulse
>>>>> does not compute for me. It is the heartbeat which emits idle pulses,
>>>>> not idle pulse emitting heartbeats.
>>>> I am in agreement that the commit message is confusing and does not
>>>> explain either the problem or the solution.
>>>>
>>>>
>>>>> But anyway, I can buy the handling at the front end story completely.
>>>>> It makes sense. We just need to agree that a) it is okay to change the
>>>>> ABI and b) remove the backend check from execlists if it is not needed
>>>>> any longer.
>>>>>
>>>>> And if ABI change is okay then commit message needs to talk about it
>>>>> loudly and clearly.
>>>> I don't think we have a choice. The current ABI is not and cannot ever
>>>> be compatible with any scheduler external to i915. It cannot be
>>>> implemented with a hardware scheduler such as the GuC and it cannot be
>>>> implemented with an external software scheduler such as the DRM one.
>>> So generally on linux we implement helper libraries, which means
>>> massive flexibility everywhere.
>>>
>>> https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html
>>>
>>> So it shouldn't be an insurmountable problem to make this happen even
>>> with drm/scheduler, we can patch it up.
>>>
>>> Whether that's justified is another question.
>> Helper libraries won't work with a hardware scheduler.
> Hm I guess I misunderstood then what exactly the hold-up is. This entire
> discussion feels at least a bit like "heartbeat is unchangeable and guc
> must fit", which is pretty much the midlayer mistake. We need to figure
> out an implementation of the uapi's goals that works with GuC,
> instead of assuming that the current heartbeat is the only possible way to
> achieve that.
>
> Or I'm just very confused about what the problem is.

What I mean is that you can't add helper callback hook things into a 
hardware scheduler. It's hardware. It does what it does. Sure, the GuC 
is firmware but it is very limited in what it can do. It certainly can't 
peek into internal KMD state such as the heartbeat. Nor can it call back 
to i915 to execute code every time it wants to make a scheduling 
decision. That would be defeating the whole point of it being a CPU 
offload accelerator thing.

Also, what I'm arguing is that the heartbeat should not be involved in 
the management of persistent contexts in the first place. It is 
overcomplicated, unnecessary, and not in the slightest intuitive to an end user.

>
>>>> My view is that any implementation involving knowledge of the heartbeat
>>>> is fundamentally broken.
>>>>
>>>> According to Daniel Vetter, the DRM ABI on this subject is that an
>>>> actively executing context should persist until the DRM file handle is
>>>> closed. That seems like a much more plausible and simple ABI than one
>>> DRM ABI is maybe a bit an overkill statement. It's more "what other
>>> drivers do", but it's generally a good idea to not ignore that :-)
>>>
>>>> that says 'if the heartbeat is running then a context will persist
>>>> forever, if the heartbeat is not running then it will be killed
>>>> immediately, if the heart was running but then stops running then the
>>>> context will be killed on the next context switch, ...'. And if I
>>>> understand it correctly, the current ABI allows a badly written user app
>>>> to cause a denial of service by leaving contexts permanently running an
>>>> infinite loop on the hardware even after the app has been killed! How
>>>> can that ever be considered a good idea?
>>> We're not going to support changing all these settings at runtime.
>>> There's just not point in trying to make that work race-free, it
>>> either adds complexity to the code for no reason, or it adds overhead
>>> to the code for no reason.
>>>
>>> Yes I know existing customers and all that, but
>>> - they can change this stuff, and when they change it while anyting is
>>> in-flight they get to keep the pieces. These options taint the kernel
>>> for a reason (and if they don't, that should be fixed)
>>> - quite a few around heartbeat and compute support as we've merged a
>>> while ago hang by design when trying to smash them into drm rules.
>>> We're not going to fix that, and we should not use any existing such
>>> assumptions as justification for code changes.
>>>
>>> Wrt infinitely running: Right now nothing is allowed to run forever,
>>> because hangcheck will step in and kill that job. Once we add compute
>>> mode ctx flag we'll require killing on process exit to stop escape.
>> If the infinite loop is pre-emptible then the heartbeat won't kill it off.
>> It will just run forever. Okay, it won't be a huge denial of service because
>> other work can pre-empt and run. However, you are down one timeslice
>> execution slot at that priority level. You have also permanently lost
>> whatever memory is allocated and in use by that workload.
> Ok I think I'm definitely lost.
>
> Right now, in upstream, you can't run forever without regularly calling
> execbuf to stuff new work in. So it will die out, it won't be persistent
> for very long.
It is possible to write an infinite loop batch buffer that is 
pre-emptible. Once you set that running, no amount of heartbeats will 
kill it off. The heartbeat will happily pre-empt it and tell you that 
the system as a whole is still running just fine. And then the scheduler 
will set the infinite loop task running again because it still has more 
'work' to do.
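That scenario can be modelled in a few lines (purely illustrative, not driver
code; all names are made up): each heartbeat tick preempts the context, sees
the engine respond, and the scheduler resubmits the still-unfinished loop, so
the context survives any number of heartbeats.

```c
#include <stdbool.h>

/* Toy model of a pre-emptible infinite-loop context vs. the heartbeat. */
struct loop_ctx {
	bool running;
	bool finished;	/* an infinite loop never sets this */
	bool killed;
};

/* One heartbeat tick: preempt, check the engine responded, resubmit. */
static void heartbeat_tick(struct loop_ctx *ce)
{
	ce->running = false;		/* the pulse pre-empts the context */
	bool engine_alive = true;	/* pre-emption worked => engine is fine */

	if (!engine_alive)
		ce->killed = true;	/* only a hung engine gets killed */
	else if (!ce->finished)
		ce->running = true;	/* scheduler resubmits the loop */
}
```

However many ticks run, `killed` stays false and `running` stays true: the
heartbeat reports a healthy system while the loop occupies a timeslice slot
and its memory forever.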


>
>>>> Therefore, the context close implementation should be to add an active
>>>> context to a zombie list. If a context is in zombie state and its last
>>>> request completes then the context can be immediately killed at that
>>>> point. Otherwise, on DRM handle close, we go through the zombie list and
>>>> immediately kill all contexts.
>>>>
>>>> Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or
>>>> pulses. Also no opportunity for rogue (or just badly written) user
>>>> processes to leave zombie contexts running on the hardware forever and
>>>> causing a denial of service attack. If the host process is killed, all
>>>> of its GPU processes are also killed irrespective of what dodgy context
>>>> flags they may or may not have set.
>>> Uh, the intel_context state machine is already a bit too complex, and
>>> the implementation lacks a bunch of barriers at least from the cursor
>>> look I've given it thus far.
>>>
>>> So if we really need to make that more complex with more states then I
>>> think someone needs to come up with an actual clean design, with
>>> proper state transitions and all the barriers (or really, a design
>>> which doesn't need barriers). This is going to be work.
>>> -Daniel
>> Personally, I would rather just drop the whole persistence/zombie idea
>> completely. If you close your context then you should expect that context to
>> be destroyed and any outstanding workloads killed off. If you wanted the
>> results then you should have waited for them.
>>
>> If we do have to support some level of persistence then it doesn't seem like
>> tracking closed contexts should be especially complex. Not sure why it would
>> need special barriers either.
> Frankly I think I'm lost, and I think the confusion (for me at least)
> starts with what the current uapi is.
>
> Can someone please document that, with kerneldoc in the uapi header
> ideally? Once we have that defined I think we can have an actual
> discussion about what exactly this should look like with GuC (and also
> eventually with drm/scheduler), and which parts of the uapi are just
> artifacts of the current implementation, and which parts actually matter.
>
> Otherwise I think we're just spinning wheels a bit much here.
> -Daniel
See the other branch of this thread - feel free to write it yourself or 
elect someone who actually knows the history/reasons behind this to 
write it up. All I can do is reverse engineer the code and document what 
it currently does and what is required to pass the IGT test.

If you want documentation about what the interface *should* be then I 
can offer two options:

1. No persistence at all.
If you close a context (whether explicitly through a close context call 
or implicitly through closing the DRM file handle, being killed, etc.) 
then that context is destroyed immediately. All outstanding work is 
discarded.

2. Persistence until DRM handle closure.
You can close a context and have it keep running previously submitted 
work. However, as soon as the DRM file handle is closed (either 
explicitly or by being killed, etc.) then all contexts are immediately 
destroyed and all outstanding work is discarded.

Simple. Concise. Sensible. No long discussions about what the heartbeat 
enable state was when the context was closed versus what that state is 
at some future point. No platform specific caveats or interactions. And 
no opportunity to cause denial of service attacks either deliberately or 
accidentally (and no opportunity for hideously complex KMD 
implementations to introduce potential DOS bugs either).
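Option 2 needs very little mechanism. A sketch of the zombie-list idea from
earlier in the thread (invented names, not actual i915 structures): a context
closed while still active is parked on a per-file list, and closing the file
destroys everything on it.

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical model of "persistence until DRM handle closure". */
struct zombie_ctx {
	bool active;		/* still has requests in flight */
	bool destroyed;
	struct zombie_ctx *next;
};

struct file_model {
	struct zombie_ctx *zombies;
};

static void model_context_close(struct file_model *f, struct zombie_ctx *ce)
{
	if (!ce->active) {
		ce->destroyed = true;	/* idle: destroy immediately */
		return;
	}
	ce->next = f->zombies;		/* active: defer until file close */
	f->zombies = ce;
}

static void model_file_close(struct file_model *f)
{
	for (struct zombie_ctx *ce = f->zombies; ce; ce = ce->next)
		ce->destroyed = true;	/* kill zombies, discard their work */
	f->zombies = NULL;
}
```

Note there is no heartbeat state, no backend hook, and no platform-specific
path anywhere in the model; that is the whole point of the proposal.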

John.


>
>> John.
>>
>>>> John.
>>>>
>>>>
>>>>> Or perhaps there is no ABI change? I am not really clear how does
>>>>> setting banned status propagate to the GuC backend. I mean at which
>>>>> point does i915 ends up passing that info to the firmware?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>>>> It's worse than this. If the engine in question is an individual
>>>>>>>> physical engine then sending a pulse (with sufficiently high
>>>>>>>> priority) will pre-empt the engine and kick the context off.
>>>>>>>> However, the GuC
>>>>>>> Why it is different for physical vs virtual, aren't both just
>>>>>>> schedulable contexts with different engine masks for what GuC is
>>>>>>> concerned? Oh, is it a matter of needing to send pulses to all
>>>>>>> engines which comprise a virtual one?
>>>>>> It isn't different. It is totally broken for both. It is potentially
>>>>>> more broken for virtual engines because of the question of which
>>>>>> engine to pulse. But as stated above, the pulse is pointless anyway
>>>>>> so the which engine question doesn't even matter.
>>>>>>
>>>>>> John.
>>>>>>
>>>>>>
>>>>>>>> scheduler does not have hacks in it to check the state of the
>>>>>>>> heartbeat or whether a context is actually a zombie or not. Thus,
>>>>>>>> the context will get resubmitted to the hardware after the pulse
>>>>>>>> completes and effectively nothing will have happened.
>>>>>>>>
>>>>>>>> I would assume that the DRM scheduler which we are meant to be
>>>>>>>> switching to for execlist as well as GuC submission is also
>>>>>>>> unlikely to have hacks for zombie contexts and tests for whether
>>>>>>>> the i915 specific heartbeat has been disabled since the context
>>>>>>>> became a zombie. So when that switch happens, this test will also
>>>>>>>> fail in execlist mode as well as GuC mode.
>>>>>>>>
>>>>>>>> The choices I see here are to simply remove persistence completely
>>>>>>>> (it is a basically a bug that became UAPI because it wasn't caught
>>>>>>>> soon enough!) or to implement it in a way that does not require
>>>>>>>> hacks in the back end scheduler. Apparently, the DRM scheduler is
>>>>>>>> expected to allow zombie contexts to persist until the DRM file
>>>>>>>> handle is closed. So presumably we will have to go with option two.
>>>>>>>>
>>>>>>>> That means flagging a context as being a zombie when it is closed
>>>>>>>> but still active. The driver would then add it to a zombie list
>>>>>>>> owned by the DRM client object. When that client object is closed,
>>>>>>>> i915 would go through the list and genuinely kill all the contexts.
>>>>>>>> No back end scheduler hacks required and no intimate knowledge of
>>>>>>>> the i915 heartbeat mechanism required either.
>>>>>>>>
>>>>>>>> John.
>>>>>>>>
>>>>>>>>
>>>>>>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc
>>>>>>>>> as we
>>>>>>>>> now need to call this function on execlists virtual engines too.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>> ---
>>>>>>>>>     drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
>>>>>>>>>     drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
>>>>>>>>>     drivers/gpu/drm/i915/gt/intel_engine.h        | 21
>>>>>>>>> ++-----------------
>>>>>>>>>     .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
>>>>>>>>>     .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
>>>>>>>>>     .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
>>>>>>>>>     6 files changed, 26 insertions(+), 24 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>>>>>>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct
>>>>>>>>> i915_gem_engines *engines, bool ban)
>>>>>>>>>          */
>>>>>>>>>         for_each_gem_engine(ce, engines, it) {
>>>>>>>>>             struct intel_engine_cs *engine;
>>>>>>>>> +        bool local_ban = ban ||
>>>>>>>>> !intel_engine_has_heartbeat(ce->engine);
>>>>>>> In any case (pending me understanding what's really going on there),
>>>>>>> why would this check not be in kill_context with currently does this:
>>>>>>>
>>>>>>>       bool ban = (!i915_gem_context_is_persistent(ctx) ||
>>>>>>>               !ctx->i915->params.enable_hangcheck);
>>>>>>> ...
>>>>>>>           kill_engines(pos, ban);
>>>>>>>
>>>>>>> So whether to ban decision would be consolidated to one place.
>>>>>>>
>>>>>>> In fact, decision on whether to allow persistent is tied to
>>>>>>> enable_hangcheck, which also drives hearbeat emission. So perhaps
>>>>>>> one part of the correct fix is to extend the above (kill_context)
>>>>>>> ban criteria to include hearbeat values anyway. Otherwise isn't it a
>>>>>>> simple miss that this check fails to account to hearbeat disablement
>>>>>>> via sysfs?
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Tvrtko
>>>>>>>
>>>>>>>>> -        if (ban && intel_context_ban(ce, NULL))
>>>>>>>>> +        if (local_ban && intel_context_ban(ce, NULL))
>>>>>>>>>                 continue;
>>>>>>>>>             /*
>>>>>>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct
>>>>>>>>> i915_gem_engines *engines, bool ban)
>>>>>>>>>             engine = active_engine(ce);
>>>>>>>>>             /* First attempt to gracefully cancel the context */
>>>>>>>>> -        if (engine && !__cancel_engine(engine) && ban)
>>>>>>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
>>>>>>>>>                 /*
>>>>>>>>>                  * If we are unable to send a preemptive pulse to bump
>>>>>>>>>                  * the context from the GPU, we have to resort to a
>>>>>>>>> full
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>>>> index e54351a170e2..65f2eb2a78e4 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
>>>>>>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
>>>>>>>>>         void (*reset)(struct intel_context *ce);
>>>>>>>>>         void (*destroy)(struct kref *kref);
>>>>>>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
>>>>>>>>> +
>>>>>>>>>         /* virtual engine/context interface */
>>>>>>>>>         struct intel_context *(*create_virtual)(struct
>>>>>>>>> intel_engine_cs **engine,
>>>>>>>>>                             unsigned int count);
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>>>> index c2a5640ae055..1b11a808acc4 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
>>>>>>>>> @@ -283,28 +283,11 @@ struct intel_context *
>>>>>>>>>     intel_engine_create_virtual(struct intel_engine_cs **siblings,
>>>>>>>>>                     unsigned int count);
>>>>>>>>> -static inline bool
>>>>>>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs
>>>>>>>>> *engine)
>>>>>>>>> -{
>>>>>>>>> -    /*
>>>>>>>>> -     * For non-GuC submission we expect the back-end to look at the
>>>>>>>>> -     * heartbeat status of the actual physical engine that the work
>>>>>>>>> -     * has been (or is being) scheduled on, so we should only reach
>>>>>>>>> -     * here with GuC submission enabled.
>>>>>>>>> -     */
>>>>>>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
>>>>>>>>> -
>>>>>>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
>>>>>>>>> -}
>>>>>>>>> -
>>>>>>>>>     static inline bool
>>>>>>>>>     intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
>>>>>>>>>     {
>>>>>>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
>>>>>>>>> -        return false;
>>>>>>>>> -
>>>>>>>>> -    if (intel_engine_is_virtual(engine))
>>>>>>>>> -        return intel_virtual_engine_has_heartbeat(engine);
>>>>>>>>> +    if (engine->cops->has_heartbeat)
>>>>>>>>> +        return engine->cops->has_heartbeat(engine);
>>>>>>>>>         else
>>>>>>>>>             return READ_ONCE(engine->props.heartbeat_interval_ms);
>>>>>>>>>     }
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>>>> index de5f9c86b9a4..18005b5546b6 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>>>>>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs
>>>>>>>>> *engine, unsigned int sibling)
>>>>>>>>>         return ve->siblings[sibling];
>>>>>>>>>     }
>>>>>>>>> +static bool virtual_engine_has_heartbeat(const struct
>>>>>>>>> intel_engine_cs *ve)
>>>>>>>>> +{
>>>>>>>>> +    struct intel_engine_cs *engine;
>>>>>>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
>>>>>>>>> +
>>>>>>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
>>>>>>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
>>>>>>>>> +            return true;
>>>>>>>>> +
>>>>>>>>> +    return false;
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>>     static const struct intel_context_ops virtual_context_ops = {
>>>>>>>>>         .flags = COPS_HAS_INFLIGHT,
>>>>>>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops
>>>>>>>>> virtual_context_ops = {
>>>>>>>>>         .enter = virtual_context_enter,
>>>>>>>>>         .exit = virtual_context_exit,
>>>>>>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
>>>>>>>>> +
>>>>>>>>>         .destroy = virtual_context_destroy,
>>>>>>>>>         .get_sibling = virtual_get_sibling,
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct
>>>>>>>>> intel_context *ce)
>>>>>>>>>         return lrc_alloc(ce, engine);
>>>>>>>>>     }
>>>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
>>>>>>>>> intel_engine_cs *ve);
>>>>>>>>> +
>>>>>>>>>     static const struct intel_context_ops virtual_guc_context_ops = {
>>>>>>>>>         .alloc = guc_virtual_context_alloc,
>>>>>>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops
>>>>>>>>> virtual_guc_context_ops = {
>>>>>>>>>         .enter = guc_virtual_context_enter,
>>>>>>>>>         .exit = guc_virtual_context_exit,
>>>>>>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
>>>>>>>>> +
>>>>>>>>>         .sched_disable = guc_context_sched_disable,
>>>>>>>>>         .destroy = guc_context_destroy,
>>>>>>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs
>>>>>>>>> **siblings, unsigned int count)
>>>>>>>>>         return ERR_PTR(err);
>>>>>>>>>     }
>>>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
>>>>>>>>> intel_engine_cs *ve)
>>>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
>>>>>>>>> intel_engine_cs *ve)
>>>>>>>>>     {
>>>>>>>>>         struct intel_engine_cs *engine;
>>>>>>>>>         intel_engine_mask_t tmp, mask = ve->mask;
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
>>>>>>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct
>>>>>>>>> intel_engine_cs *engine,
>>>>>>>>>                         struct i915_request *hung_rq,
>>>>>>>>>                         struct drm_printer *m);
>>>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
>>>>>>>>> intel_engine_cs *ve);
>>>>>>>>> -
>>>>>>>>>     int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
>>>>>>>>>                        atomic_t *wait_var,
>>>>>>>>>                        bool interruptible,



* Re: [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context
  2021-08-18  0:28                   ` John Harrison
@ 2021-08-18  9:26                     ` Daniel Vetter
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel Vetter @ 2021-08-18  9:26 UTC (permalink / raw)
  To: John Harrison; +Cc: Tvrtko Ursulin, Matthew Brost, intel-gfx, dri-devel

On Wed, Aug 18, 2021 at 2:28 AM John Harrison <john.c.harrison@intel.com> wrote:
> On 8/9/2021 23:36, Daniel Vetter wrote:
> > On Mon, Aug 09, 2021 at 04:12:52PM -0700, John Harrison wrote:
> >> On 8/6/2021 12:46, Daniel Vetter wrote:
> >>> Seen this fly by and figured I'd drop a few thoughts in here. At the
> >>> likely cost of looking a bit out of whack :-)
> >>>
> >>> On Fri, Aug 6, 2021 at 8:01 PM John Harrison <john.c.harrison@intel.com> wrote:
> >>>> On 8/2/2021 02:40, Tvrtko Ursulin wrote:
> >>>>> On 30/07/2021 19:13, John Harrison wrote:
> >>>>>> On 7/30/2021 02:49, Tvrtko Ursulin wrote:
> >>>>>>> On 30/07/2021 01:13, John Harrison wrote:
> >>>>>>>> On 7/28/2021 17:34, Matthew Brost wrote:
> >>>>>>>>> If an engine associated with a context does not have a heartbeat,
> >>>>>>>>> ban it
> >>>>>>>>> immediately. This is needed for GuC submission as an idle pulse
> >>>>>>>>> doesn't
> >>>>>>>>> kick the context off the hardware where it then can check for a
> >>>>>>>>> heartbeat and ban the context.
> >>>>>>> Pulse, that is a request with I915_PRIORITY_BARRIER, does not
> >>>>>>> preempt a running normal priority context?
> >>>>>>>
> >>>>>>> Why does it matter then whether or not heartbeats are enabled - when
> >>>>>>> heartbeat just ends up sending the same engine pulse (eventually,
> >>>>>>> with raising priority)?
> >>>>>> The point is that the pulse is pointless. See the rest of my comments
> >>>>>> below, specifically "the context will get resubmitted to the hardware
> >>>>>> after the pulse completes". To re-iterate...
> >>>>>>
> >>>>>> Yes, it preempts the context. Yes, it does so whether heartbeats are
> >>>>>> enabled or not. But so what? Who cares? You have preempted a context.
> >>>>>> It is no longer running on the hardware. BUT IT IS STILL A VALID
> >>>>>> CONTEXT.
> >>>>> It is valid, yes, and it may even be the current ABI, so another
> >>>>> question is whether it is okay to change that.
> >>>>>
> >>>>>> The backend scheduler will just resubmit it to the hardware as soon
> >>>>>> as the pulse completes. The only reason this works at all is because
> >>>>>> of the horrid hack in the execlist scheduler's back end
> >>>>>> implementation (in __execlists_schedule_in):
> >>>>>>            if (unlikely(intel_context_is_closed(ce) &&
> >>>>>>                         !intel_engine_has_heartbeat(engine)))
> >>>>>>                    intel_context_set_banned(ce);
> >>>>> Right, is the above code then needed with this patch - when ban is
> >>>>> immediately applied on the higher level?
> >>>>>
> >>>>>> The actual back end scheduler is saying "Is this a zombie context? Is
> >>>>>> the heartbeat disabled? Then ban it". No other scheduler backend is
> >>>>>> going to have knowledge of zombie context status or of the heartbeat
> >>>>>> status. Nor are they going to call back into the higher levels of the
> >>>>>> i915 driver to trigger a ban operation. Certainly a hardware
> >>>>>> implemented scheduler is not going to be looking at private i915
> >>>>>> driver information to decide whether to submit a context or whether
> >>>>>> to tell the OS to kill it off instead.
> >>>>>>
> >>>>>> For persistence to work with a hardware scheduler (or a non-Intel
> >>>>>> specific scheduler such as the DRM one), the handling of zombie
> >>>>>> contexts, banning, etc. *must* be done entirely in the front end. It
> >>>>>> cannot rely on any backend hacks. That means you can't rely on any
> >>>>>> fancy behaviour of pulses.
> >>>>>>
> >>>>>> If you want to ban a context then you must explicitly ban that
> >>>>>> context. If you want to ban it at some later point then you need to
> >>>>>> track it at the top level as a zombie and then explicitly ban that
> >>>>>> zombie at whatever later point.
> >>>>> I am still trying to understand it all. If I go by the commit message:
> >>>>>
> >>>>> """
> >>>>> This is needed for GuC submission as an idle pulse doesn't
> >>>>> kick the context off the hardware where it then can check for a
> >>>>> heartbeat and ban the context.
> >>>>> """
> >>>>>
> >>>>> That did not explain things for me. The sentence does not appear to
> >>>>> make sense. Now, it seems "kick off the hardware" is meant as revoke
> >>>>> and not just preempt. Which is fine, it perhaps just needs to be
> >>>>> written more explicitly. But the part about checking for a heartbeat
> >>>>> after an idle pulse does not compute for me. It is the heartbeat which
> >>>>> emits idle pulses, not the idle pulse which emits heartbeats.
> >>>> I am in agreement that the commit message is confusing and does not
> >>>> explain either the problem or the solution.
> >>>>
> >>>>
> >>>>> But anyway, I can buy the handling at the front end story completely.
> >>>>> It makes sense. We just need to agree that a) it is okay to change the
> >>>>> ABI and b) remove the backend check from execlists if it is not needed
> >>>>> any longer.
> >>>>>
> >>>>> And if ABI change is okay then commit message needs to talk about it
> >>>>> loudly and clearly.
> >>>> I don't think we have a choice. The current ABI is not and cannot ever
> >>>> be compatible with any scheduler external to i915. It cannot be
> >>>> implemented with a hardware scheduler such as the GuC and it cannot be
> >>>> implemented with an external software scheduler such as the DRM one.
> >>> So generally on linux we implement helper libraries, which means
> >>> massive flexibility everywhere.
> >>>
> >>> https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html
> >>>
> >>> So it shouldn't be an insurmountable problem to make this happen even
> >>> with drm/scheduler, we can patch it up.
> >>>
> >>> Whether that's justified is another question.
> >> Helper libraries won't work with a hardware scheduler.
> > Hm I guess I misunderstood then what exactly the hold-up is. This entire
> > discussion feels at least a bit like "heartbeat is unchangeable and guc
> > must fit", which is pretty much the midlayer mistake. We need to figure
> > out an implementation of the uapi's goals that works with GuC,
> > instead of assuming that the current heartbeat is the only possible
> > way to achieve them.
> >
> > Or I'm just very confused about what the problem is.
>
> What I mean is that you can't add helper callback hook things into a
> hardware scheduler. It's hardware. It does what it does. Sure, the GuC
> is firmware but it is very limited in what it can do. It certainly can't
> peek into internal KMD state such as the heartbeat. Nor can it call back
> to i915 to execute code every time it wants to make a scheduling
> decision. That would be defeating the whole point of it being a CPU
> offload accelerator thing.
>
> Also, what I'm arguing is that the heartbeat should not be involved in
> the management of persistent contexts in the first place. It is way over
> complicated, unnecessary and not intuitive to an end user in the slightest.

Yeah so heartbeat was also the attempt to support long-running compute
jobs without changing the uapi. That part is reverted, and now it's
essentially just a tool to make sure the gpu keeps preempting when we
expect it to.

Which also I guess should be GuC's job now, so why do we need
heartbeat even still with the guc backend? This is the part where I
meant we're looking at this way too strictly, you most definitely
_can_ change anything in the i915 kmd and igt test suite that doesn't
fit. We're maybe saying the same thing really, dunno.

Orthogonal issue, the current code trying to support changing
heartbeat status while the driver is running is also bonkers, we don't
support that. That should simplify at least the decision making a lot,
because we can safely assume that a persistent or non-persistent
context was only created when we thought it was ok to do so.

> >>>> My view is that any implementation involving knowledge of the heartbeat
> >>>> is fundamentally broken.
> >>>>
> >>>> According to Daniel Vetter, the DRM ABI on this subject is that an
> >>>> actively executing context should persist until the DRM file handle is
> >>>> closed. That seems like a much more plausible and simple ABI than one
> >>> DRM ABI is maybe a bit an overkill statement. It's more "what other
> >>> drivers do", but it's generally a good idea to not ignore that :-)
> >>>
> >>>> that says 'if the heartbeat is running then a context will persist
> >>>> forever, if the heartbeat is not running then it will be killed
> >>>> immediately, if the heart was running but then stops running then the
> >>>> context will be killed on the next context switch, ...'. And if I
> >>>> understand it correctly, the current ABI allows a badly written user app
> >>>> to cause a denial of service by leaving contexts permanently running an
> >>>> infinite loop on the hardware even after the app has been killed! How
> >>>> can that ever be considered a good idea?
> >>> We're not going to support changing all these settings at runtime.
> >>> There's just not point in trying to make that work race-free, it
> >>> either adds complexity to the code for no reason, or it adds overhead
> >>> to the code for no reason.
> >>>
> >>> Yes I know existing customers and all that, but
> >>> - they can change this stuff, and when they change it while anything is
> >>> in-flight they get to keep the pieces. These options taint the kernel
> >>> for a reason (and if they don't, that should be fixed)
> >>> - quite a few of the heartbeat and compute changes we merged a
> >>> while ago hang by design when trying to smash them into drm rules.
> >>> We're not going to fix that, and we should not use any existing such
> >>> assumptions as justification for code changes.
> >>>
> >>> Wrt infinitely running: Right now nothing is allowed to run forever,
> >>> because hangcheck will step in and kill that job. Once we add compute
> >>> mode ctx flag we'll require killing on process exit to stop escape.
> >> If the infinite loop is pre-emptible then the heartbeat won't kill it off.
> >> It will just run forever. Okay, it won't be a huge denial of service because
> >> other work can pre-empt and run. However, you are down one timeslice
> >> execution slot at that priority level. You have also permanently lost
> >> whatever memory is allocated and in use by that workload.
> > Ok I think I'm definitely lost.
> >
> > Right now, in upstream, you can't run forever without regularly calling
> > execbuf to stuff new work in. So it will die out, it won't be persistent
> > for very long.
> It is possible to write an infinite loop batch buffer that is
> pre-emptible. Once you set that running, no amount of heartbeats will
> kill it off. The heartbeat will happily pre-empt it and tell you that
> the system as a whole is still running just fine. And then the scheduler
> will set the infinite loop task running again because it still has more
> 'work' to do.

There is a hangcheck timeout which kills you after 20s (which is
probably about 15s too long, but that's another bikeshed). This is
part of the contract that we can't remove, but we did (I think it's
still not yet in DII, not sure about status) and took quite long to
restore that.

So no, your scenario doesn't happen.

> >>>> Therefore, the context close implementation should be to add an active
> >>>> context to a zombie list. If a context is in zombie state and its last
> >>>> request completes then the context can be immediately killed at that
> >>>> point. Otherwise, on DRM handle close, we go through the zombie list and
> >>>> immediately kill all contexts.
> >>>>
> >>>> Simple, clean, no back-end scheduler hacks, no reliance on heartbeats or
> >>>> pulses. Also no opportunity for rogue (or just badly written) user
> >>>> processes to leave zombie contexts running on the hardware forever and
> >>>> causing a denial of service attack. If the host process is killed, all
> >>>> of its GPU processes are also killed irrespective of what dodgy context
> >>>> flags they may or may not have set.
> >>> Uh, the intel_context state machine is already a bit too complex, and
> >>> the implementation lacks a bunch of barriers at least from the cursory
> >>> look I've given it thus far.
> >>>
> >>> So if we really need to make that more complex with more states then I
> >>> think someone needs to come up with an actual clean design, with
> >>> proper state transitions and all the barriers (or really, a design
> >>> which doesn't need barriers). This is going to be work.
> >>> -Daniel
> >> Personally, I would rather just drop the whole persistence/zombie idea
> >> completely. If you close your context then you should expect that context to
> >> be destroyed and any outstanding workloads killed off. If you wanted the
> >> results then you should have waited for them.
> >>
> >> If we do have to support some level of persistence then it doesn't seem like
> >> tracking closed contexts should be especially complex. Not sure why it would
> >> need special barriers either.
> > Frankly I think I'm lost, and I think the confusion (for me at least)
> > starts with what the current uapi is.
> >
> > Can someone please document that, with kerneldoc in the uapi header
> > ideally? Once we have that defined I think we can have an actual
> > discussion about what exactly this should look like with GuC (and also
> > eventually with drm/scheduler), and which parts of the uapi are just
> > artifacts of the current implementation, and which parts actually matter.
> >
> > Otherwise I think we're just spinning wheels a bit much here.
> > -Daniel
> See other branch of this thread - feel free to write it yourself or
> elect someone who actually knows the history/reasons behind this to
> write it up. All I can do is reverse engineer the code and document what
> it currently does and what is required to pass the IGT test.
>
> If you want documentation about what the interface *should* be then I
> can offer two options:
>
> 1. No persistence at all.
> If you close a context (whether explicitly through a close context call
> or implicitly through closing the DRM file handle, being killed, etc.)
> then that context is destroyed immediately. All outstanding work is
> discarded.
>
> 2. Persistence until DRM handle closure.
> You can close a context and have it keep running previously submitted
> work. However, as soon as the DRM file handle is closed (either
> explicitly or by being killed, etc.) then all contexts are immediately
> destroyed and all outstanding work is discarded.

This one is pretty close to what I think drm/sched does too. We might
need a slight change so that contexts for which userspace explicitly
asked for non-persistence are killed immediately in all cases.
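
[Expressed as code, the difference between John's two options — plus
the drm/sched-like tweak for explicitly non-persistent contexts — is
just where the kill decision is taken. A hypothetical sketch, with
invented names, not actual i915 or drm/sched code:]

```c
/*
 * Hedged sketch of the two proposed persistence policies. The enum and
 * function names are made up for illustration only.
 */
#include <assert.h>
#include <stdbool.h>

enum persistence_policy {
	KILL_ON_CONTEXT_CLOSE, /* option 1: no persistence at all */
	KILL_ON_HANDLE_CLOSE,  /* option 2: persist until the DRM fd closes */
};

/* Should outstanding work be discarded when the context is closed? */
static bool kill_at_context_close(enum persistence_policy policy,
				  bool userspace_opted_out)
{
	/*
	 * A context explicitly marked non-persistent dies immediately
	 * under either policy (the drm/sched-like behaviour).
	 */
	if (userspace_opted_out)
		return true;

	return policy == KILL_ON_CONTEXT_CLOSE;
}

/* On DRM file-handle close everything dies regardless of policy. */
static bool kill_at_handle_close(void)
{
	return true;
}
```

Note that there is no heartbeat state, no back-end involvement and no
platform-specific caveat anywhere in the decision.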

> Simple. Concise. Sensible. No long discussions about what the heartbeat
> enable state was when the context was closed versus what that state is
> at some future point. No platform specific caveats or interactions. And
> no opportunity to cause denial of service attacks either deliberately or
> accidentally (and no opportunity for hideously complex KMD
> implementations to introduce potential DOS bugs either).

That's another thing: That implementation just needs to be simplified.
It supports a lot of things that make little to no sense, and
especially if something is in the way we should just remove it.
-Daniel

>
> John.
>
>
> >
> >> John.
> >>
> >>>> John.
> >>>>
> >>>>
> >>>>> Or perhaps there is no ABI change? I am not really clear how does
> >>>>> setting banned status propagate to the GuC backend. I mean at which
> >>>>> point does i915 ends up passing that info to the firmware?
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>> Tvrtko
> >>>>>
> >>>>>>>> It's worse than this. If the engine in question is an individual
> >>>>>>>> physical engine then sending a pulse (with sufficiently high
> >>>>>>>> priority) will pre-empt the engine and kick the context off.
> >>>>>>>> However, the GuC
> >>>>>>> Why it is different for physical vs virtual, aren't both just
> >>>>>>> schedulable contexts with different engine masks for what GuC is
> >>>>>>> concerned? Oh, is it a matter of needing to send pulses to all
> >>>>>>> engines which comprise a virtual one?
> >>>>>> It isn't different. It is totally broken for both. It is potentially
> >>>>>> more broken for virtual engines because of the question of which
> >>>>>> engine to pulse. But as stated above, the pulse is pointless anyway
> >>>>>> so the which engine question doesn't even matter.
> >>>>>>
> >>>>>> John.
> >>>>>>
> >>>>>>
> >>>>>>>> scheduler does not have hacks in it to check the state of the
> >>>>>>>> heartbeat or whether a context is actually a zombie or not. Thus,
> >>>>>>>> the context will get resubmitted to the hardware after the pulse
> >>>>>>>> completes and effectively nothing will have happened.
> >>>>>>>>
> >>>>>>>> I would assume that the DRM scheduler which we are meant to be
> >>>>>>>> switching to for execlist as well as GuC submission is also
> >>>>>>>> unlikely to have hacks for zombie contexts and tests for whether
> >>>>>>>> the i915 specific heartbeat has been disabled since the context
> >>>>>>>> became a zombie. So when that switch happens, this test will also
> >>>>>>>> fail in execlist mode as well as GuC mode.
> >>>>>>>>
> >>>>>>>> The choices I see here are to simply remove persistence completely
> >>>>>>>> (it is a basically a bug that became UAPI because it wasn't caught
> >>>>>>>> soon enough!) or to implement it in a way that does not require
> >>>>>>>> hacks in the back end scheduler. Apparently, the DRM scheduler is
> >>>>>>>> expected to allow zombie contexts to persist until the DRM file
> >>>>>>>> handle is closed. So presumably we will have to go with option two.
> >>>>>>>>
> >>>>>>>> That means flagging a context as being a zombie when it is closed
> >>>>>>>> but still active. The driver would then add it to a zombie list
> >>>>>>>> owned by the DRM client object. When that client object is closed,
> >>>>>>>> i915 would go through the list and genuinely kill all the contexts.
> >>>>>>>> No back end scheduler hacks required and no intimate knowledge of
> >>>>>>>> the i915 heartbeat mechanism required either.
> >>>>>>>>
> >>>>>>>> John.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> This patch also updates intel_engine_has_heartbeat to be a vfunc
> >>>>>>>>> as we
> >>>>>>>>> now need to call this function on execlists virtual engines too.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >>>>>>>>> ---
> >>>>>>>>>     drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
> >>>>>>>>>     drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
> >>>>>>>>>     drivers/gpu/drm/i915/gt/intel_engine.h        | 21
> >>>>>>>>> ++-----------------
> >>>>>>>>>     .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
> >>>>>>>>>     .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
> >>>>>>>>>     .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
> >>>>>>>>>     6 files changed, 26 insertions(+), 24 deletions(-)
> >>>>>>>>>
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>>>>>> b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>>>>>> index 9c3672bac0e2..b8e01c5ba9e5 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>>>>>> @@ -1090,8 +1090,9 @@ static void kill_engines(struct
> >>>>>>>>> i915_gem_engines *engines, bool ban)
> >>>>>>>>>          */
> >>>>>>>>>         for_each_gem_engine(ce, engines, it) {
> >>>>>>>>>             struct intel_engine_cs *engine;
> >>>>>>>>> +        bool local_ban = ban ||
> >>>>>>>>> !intel_engine_has_heartbeat(ce->engine);
> >>>>>>> In any case (pending me understanding what's really going on there),
> >>>>>>> why would this check not be in kill_context with currently does this:
> >>>>>>>
> >>>>>>>       bool ban = (!i915_gem_context_is_persistent(ctx) ||
> >>>>>>>               !ctx->i915->params.enable_hangcheck);
> >>>>>>> ...
> >>>>>>>           kill_engines(pos, ban);
> >>>>>>>
> >>>>>>> So whether to ban decision would be consolidated to one place.
> >>>>>>>
> >>>>>>> In fact, decision on whether to allow persistent is tied to
> >>>>>>> enable_hangcheck, which also drives heartbeat emission. So perhaps
> >>>>>>> one part of the correct fix is to extend the above (kill_context)
> >>>>>>> ban criteria to include heartbeat values anyway. Otherwise isn't it a
> >>>>>>> simple miss that this check fails to account for heartbeat disablement
> >>>>>>> via sysfs?
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>> Tvrtko
> >>>>>>>
> >>>>>>>>> -        if (ban && intel_context_ban(ce, NULL))
> >>>>>>>>> +        if (local_ban && intel_context_ban(ce, NULL))
> >>>>>>>>>                 continue;
> >>>>>>>>>             /*
> >>>>>>>>> @@ -1104,7 +1105,7 @@ static void kill_engines(struct
> >>>>>>>>> i915_gem_engines *engines, bool ban)
> >>>>>>>>>             engine = active_engine(ce);
> >>>>>>>>>             /* First attempt to gracefully cancel the context */
> >>>>>>>>> -        if (engine && !__cancel_engine(engine) && ban)
> >>>>>>>>> +        if (engine && !__cancel_engine(engine) && local_ban)
> >>>>>>>>>                 /*
> >>>>>>>>>                  * If we are unable to send a preemptive pulse to bump
> >>>>>>>>>                  * the context from the GPU, we have to resort to a
> >>>>>>>>> full
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>>>>>> index e54351a170e2..65f2eb2a78e4 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> >>>>>>>>> @@ -55,6 +55,8 @@ struct intel_context_ops {
> >>>>>>>>>         void (*reset)(struct intel_context *ce);
> >>>>>>>>>         void (*destroy)(struct kref *kref);
> >>>>>>>>> +    bool (*has_heartbeat)(const struct intel_engine_cs *engine);
> >>>>>>>>> +
> >>>>>>>>>         /* virtual engine/context interface */
> >>>>>>>>>         struct intel_context *(*create_virtual)(struct
> >>>>>>>>> intel_engine_cs **engine,
> >>>>>>>>>                             unsigned int count);
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>>>>>> index c2a5640ae055..1b11a808acc4 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> >>>>>>>>> @@ -283,28 +283,11 @@ struct intel_context *
> >>>>>>>>>     intel_engine_create_virtual(struct intel_engine_cs **siblings,
> >>>>>>>>>                     unsigned int count);
> >>>>>>>>> -static inline bool
> >>>>>>>>> -intel_virtual_engine_has_heartbeat(const struct intel_engine_cs
> >>>>>>>>> *engine)
> >>>>>>>>> -{
> >>>>>>>>> -    /*
> >>>>>>>>> -     * For non-GuC submission we expect the back-end to look at the
> >>>>>>>>> -     * heartbeat status of the actual physical engine that the work
> >>>>>>>>> -     * has been (or is being) scheduled on, so we should only reach
> >>>>>>>>> -     * here with GuC submission enabled.
> >>>>>>>>> -     */
> >>>>>>>>> -    GEM_BUG_ON(!intel_engine_uses_guc(engine));
> >>>>>>>>> -
> >>>>>>>>> -    return intel_guc_virtual_engine_has_heartbeat(engine);
> >>>>>>>>> -}
> >>>>>>>>> -
> >>>>>>>>>     static inline bool
> >>>>>>>>>     intel_engine_has_heartbeat(const struct intel_engine_cs *engine)
> >>>>>>>>>     {
> >>>>>>>>> -    if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL))
> >>>>>>>>> -        return false;
> >>>>>>>>> -
> >>>>>>>>> -    if (intel_engine_is_virtual(engine))
> >>>>>>>>> -        return intel_virtual_engine_has_heartbeat(engine);
> >>>>>>>>> +    if (engine->cops->has_heartbeat)
> >>>>>>>>> +        return engine->cops->has_heartbeat(engine);
> >>>>>>>>>         else
> >>>>>>>>>             return READ_ONCE(engine->props.heartbeat_interval_ms);
> >>>>>>>>>     }
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>>>>>> b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>>>>>> index de5f9c86b9a4..18005b5546b6 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> >>>>>>>>> @@ -3619,6 +3619,18 @@ virtual_get_sibling(struct intel_engine_cs
> >>>>>>>>> *engine, unsigned int sibling)
> >>>>>>>>>         return ve->siblings[sibling];
> >>>>>>>>>     }
> >>>>>>>>> +static bool virtual_engine_has_heartbeat(const struct
> >>>>>>>>> intel_engine_cs *ve)
> >>>>>>>>> +{
> >>>>>>>>> +    struct intel_engine_cs *engine;
> >>>>>>>>> +    intel_engine_mask_t tmp, mask = ve->mask;
> >>>>>>>>> +
> >>>>>>>>> +    for_each_engine_masked(engine, ve->gt, mask, tmp)
> >>>>>>>>> +        if (READ_ONCE(engine->props.heartbeat_interval_ms))
> >>>>>>>>> +            return true;
> >>>>>>>>> +
> >>>>>>>>> +    return false;
> >>>>>>>>> +}
> >>>>>>>>> +
> >>>>>>>>>     static const struct intel_context_ops virtual_context_ops = {
> >>>>>>>>>         .flags = COPS_HAS_INFLIGHT,
> >>>>>>>>> @@ -3634,6 +3646,8 @@ static const struct intel_context_ops
> >>>>>>>>> virtual_context_ops = {
> >>>>>>>>>         .enter = virtual_context_enter,
> >>>>>>>>>         .exit = virtual_context_exit,
> >>>>>>>>> +    .has_heartbeat = virtual_engine_has_heartbeat,
> >>>>>>>>> +
> >>>>>>>>>         .destroy = virtual_context_destroy,
> >>>>>>>>>         .get_sibling = virtual_get_sibling,
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>>>>>> index 89ff0e4b4bc7..ae70bff3605f 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> >>>>>>>>> @@ -2168,6 +2168,8 @@ static int guc_virtual_context_alloc(struct
> >>>>>>>>> intel_context *ce)
> >>>>>>>>>         return lrc_alloc(ce, engine);
> >>>>>>>>>     }
> >>>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
> >>>>>>>>> intel_engine_cs *ve);
> >>>>>>>>> +
> >>>>>>>>>     static const struct intel_context_ops virtual_guc_context_ops = {
> >>>>>>>>>         .alloc = guc_virtual_context_alloc,
> >>>>>>>>> @@ -2183,6 +2185,8 @@ static const struct intel_context_ops
> >>>>>>>>> virtual_guc_context_ops = {
> >>>>>>>>>         .enter = guc_virtual_context_enter,
> >>>>>>>>>         .exit = guc_virtual_context_exit,
> >>>>>>>>> +    .has_heartbeat = guc_virtual_engine_has_heartbeat,
> >>>>>>>>> +
> >>>>>>>>>         .sched_disable = guc_context_sched_disable,
> >>>>>>>>>         .destroy = guc_context_destroy,
> >>>>>>>>> @@ -3029,7 +3033,7 @@ guc_create_virtual(struct intel_engine_cs
> >>>>>>>>> **siblings, unsigned int count)
> >>>>>>>>>         return ERR_PTR(err);
> >>>>>>>>>     }
> >>>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
> >>>>>>>>> intel_engine_cs *ve)
> >>>>>>>>> +static bool guc_virtual_engine_has_heartbeat(const struct
> >>>>>>>>> intel_engine_cs *ve)
> >>>>>>>>>     {
> >>>>>>>>>         struct intel_engine_cs *engine;
> >>>>>>>>>         intel_engine_mask_t tmp, mask = ve->mask;
> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>>>>>> index c7ef44fa0c36..c2afc3b88fd8 100644
> >>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
> >>>>>>>>> @@ -29,8 +29,6 @@ void intel_guc_dump_active_requests(struct
> >>>>>>>>> intel_engine_cs *engine,
> >>>>>>>>>                         struct i915_request *hung_rq,
> >>>>>>>>>                         struct drm_printer *m);
> >>>>>>>>> -bool intel_guc_virtual_engine_has_heartbeat(const struct
> >>>>>>>>> intel_engine_cs *ve);
> >>>>>>>>> -
> >>>>>>>>>     int intel_guc_wait_for_pending_msg(struct intel_guc *guc,
> >>>>>>>>>                        atomic_t *wait_var,
> >>>>>>>>>                        bool interruptible,
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission
  2021-08-18  0:08   ` John Harrison
@ 2021-08-18  9:49     ` Daniel Vetter
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel Vetter @ 2021-08-18  9:49 UTC (permalink / raw)
  To: John Harrison; +Cc: Daniel Vetter, Matthew Brost, intel-gfx, dri-devel

On Tue, Aug 17, 2021 at 05:08:02PM -0700, John Harrison wrote:
> On 8/9/2021 23:38, Daniel Vetter wrote:
> > On Wed, Jul 28, 2021 at 05:33:59PM -0700, Matthew Brost wrote:
> > > Should fix below failures with GuC submission for the following tests:
> > > gem_exec_balancer --r noheartbeat
> > > gem_ctx_persistence --r heartbeat-close
> > > 
> > > Not going to fix:
> > > gem_ctx_persistence --r heartbeat-many
> > > gem_ctx_persistence --r heartbeat-stop
> > After looking at that big thread and being very confused: Are we fixing an
> > actual use-case here, or is this another case of blindly following igt
> > tests just because they exist?
> My understanding is that this is established behaviour and therefore must be
> maintained because the UAPI (whether documented or not) is inviolate.
> Therefore IGTs have been written to validate this past behaviour and now we
> must conform to the IGTs in order to keep the existing behaviour unchanged.

No, we do not need to blindly conform to igts. We've found enough examples
in the past few months where the igt tests were just testing stuff
because it's possible, not because any UMD actually needs the behaviour.

And drm subsystem rules are very clear that low-level tests do _not_
qualify as userspace, so if they're wrong we just have to fix them.

> Whether anybody actually makes use of this behaviour or not is another
> matter entirely. I am certainly not aware of any vital use case. Others
> might have more recollection. I do know that we tell the UMD teams to
> explicitly disable persistence on every context they create.

Does that include mesa?

> > I'm leaning towards that we should stall on this, and first document what
> > exactly is the actual intention behind all this, and then fix up the tests
> I'm not sure there ever was an 'intention'. The rumour I heard way back when
> was that persistence was a bug on earlier platforms (or possibly we didn't
> have hardware support for doing engine resets?). But once the bug was
> realised (or the hardware support was added), it was too late to change the
> default behaviour because existing kernel behaviour must never change on
> pain of painful things. Thus the persistence flag was added so that people
> could opt out of the broken, leaky behaviour and have their contexts clean
> up properly.
> 
> Feel free to document what you believe should be the behaviour from a
> software architect point of view. Any documentation I produce is basically
> going to be created by reverse engineering the existing code. That is the
> only 'spec' that I am aware of and as I keep saying, I personally think it
> is a totally broken concept that should just be removed.

There is most likely no spec except "what does current userspace actually
expect". Yes this sucks. Also if you expect me to do this, I'm backlogged
by a few months on random studies here, and largely this boils down to
checking all the umds and what they actually need.

Important: What igt does doesn't matter if there's not a corresponding
real world umd use-case.

> > to match (if needed). And only then fix up GuC to match whatever we
> > actually want to do.
> I also still maintain there is no 'fix up the GuC'. This is not behaviour we
> should be adding to a hardware scheduler. It is behaviour that should be
> implemented at the front end not the back end. If we absolutely need to do
> this then we need to do it solely at the context management level not at the
> back end submission level. And the solution should work by default on any
> submission back end.

With "Fix up GuC" I don't necessarily mean the GuC fw, but our entire
backend. We can very much fix that to handle almost anything reasonable.

Also we don't actually need the same solution on all backends, because the
uapi can have slight differences across platforms. That's why changing the
defaults is so hard once they're set in stone.
-Daniel

> 
> John.
> 
> 
> > -Daniel
> > 
> > > As the above tests change the heartbeat value to 0 (off) after the
> > > context is closed and we have no way to detect that with GuC submission
> > > unless we keep a list of closed but running contexts which seems like
> > > overkill for a non-real world use case. We likely should just skip these
> > > tests with GuC submission.
> > > 
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > 
> > > Matthew Brost (1):
> > >    drm/i915: Check if engine has heartbeat when closing a context
> > > 
> > >   drivers/gpu/drm/i915/gem/i915_gem_context.c   |  5 +++--
> > >   drivers/gpu/drm/i915/gt/intel_context_types.h |  2 ++
> > >   drivers/gpu/drm/i915/gt/intel_engine.h        | 21 ++-----------------
> > >   .../drm/i915/gt/intel_execlists_submission.c  | 14 +++++++++++++
> > >   .../gpu/drm/i915/gt/uc/intel_guc_submission.c |  6 +++++-
> > >   .../gpu/drm/i915/gt/uc/intel_guc_submission.h |  2 --
> > >   6 files changed, 26 insertions(+), 24 deletions(-)
> > > 
> > > -- 
> > > 2.28.0
> > > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch



Thread overview: 18+ messages
-- links below jump to the message on this page --
2021-07-29  0:33 [Intel-gfx] [PATCH 0/1] Fix gem_ctx_persistence failures with GuC submission Matthew Brost
2021-07-29  0:34 ` [Intel-gfx] [PATCH 1/1] drm/i915: Check if engine has heartbeat when closing a context Matthew Brost
2021-07-30  0:13   ` John Harrison
2021-07-30  9:49     ` Tvrtko Ursulin
2021-07-30 18:13       ` John Harrison
2021-08-02  9:40         ` Tvrtko Ursulin
2021-08-06 18:00           ` John Harrison
2021-08-06 19:46             ` Daniel Vetter
2021-08-09 23:12               ` John Harrison
2021-08-10  6:36                 ` Daniel Vetter
2021-08-18  0:28                   ` John Harrison
2021-08-18  9:26                     ` Daniel Vetter
2021-07-30 18:13       ` Matthew Brost
2021-07-29  2:08 ` [Intel-gfx] ✓ Fi.CI.BAT: success for Fix gem_ctx_persistence failures with GuC submission Patchwork
2021-07-29  7:30 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-08-10  6:38 ` [Intel-gfx] [PATCH 0/1] " Daniel Vetter
2021-08-18  0:08   ` John Harrison
2021-08-18  9:49     ` Daniel Vetter
