* [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup @ 2021-08-27 13:30 ` Tvrtko Ursulin 0 siblings, 0 replies; 24+ messages in thread From: Tvrtko Ursulin @ 2021-08-27 13:30 UTC (permalink / raw) To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) when rendering is done on Intel dgfx and scanout/composition on Intel igfx. Before this patch the driver was not quite ready for that setup, mainly because it was able to emit a semaphore wait between the two GPUs, which results in deadlocks because semaphore target location in HWSP is neither shared between the two, nor mapped in both GGTT spaces. To fix it the patch adds an additional check to a couple of relevant code paths in order to prevent using semaphores for inter-engine synchronisation between different driver instances. Patch also moves singly used i915_gem_object_last_write_engine to be private in its only calling unit (debugfs), while modifying it to only show activity belonging to the respective driver instance. What remains in this problem space is the question of the GEM busy ioctl. We have a somewhat ambigous comment there saying only status of native fences will be reported, which could be interpreted as either i915, or native to the drm fd. For now I have decided to leave that as is, meaning any i915 instance activity continues to be reported. 
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------------- drivers/gpu/drm/i915/i915_debugfs.c | 23 +++++++++++++++++++++- drivers/gpu/drm/i915/i915_request.c | 7 ++++++- drivers/gpu/drm/i915/i915_request.h | 1 + 4 files changed, 29 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 48112b9d76df..3043fcbd31bd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) i915_gem_object_unpin_pages(obj); } -static inline struct intel_engine_cs * -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) -{ - struct intel_engine_cs *engine = NULL; - struct dma_fence *fence; - - rcu_read_lock(); - fence = dma_resv_get_excl_unlocked(obj->base.resv); - rcu_read_unlock(); - - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) - engine = to_request(fence)->engine; - dma_fence_put(fence); - - return engine; -} - void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, unsigned int cache_level); void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 04351a851586..2f49ff0e8c21 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -135,6 +135,27 @@ static const char *stringify_vma_type(const struct i915_vma *vma) return "ppgtt"; } +static struct intel_engine_cs * +last_write_engine(struct drm_i915_private *i915, + struct drm_i915_gem_object *obj) +{ + struct intel_engine_cs *engine = NULL; + struct dma_fence *fence; + + rcu_read_lock(); + fence = dma_resv_get_excl_unlocked(obj->base.resv); + rcu_read_unlock(); + + if (fence && + !dma_fence_is_signaled(fence) && + dma_fence_is_i915(fence) && + 
to_request(fence)->i915 == i915) + engine = to_request(fence)->engine; + dma_fence_put(fence); + + return engine; +} + void i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) { @@ -230,7 +251,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) if (i915_gem_object_is_framebuffer(obj)) seq_printf(m, " (fb)"); - engine = i915_gem_object_last_write_engine(obj); + engine = last_write_engine(dev_priv, obj); if (engine) seq_printf(m, " (%s)", engine->name); } diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index ce446716d092..d2dec669d262 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -900,6 +900,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) * hold the intel_context reference. In execlist mode the request always * eventually points to a physical engine so this isn't an issue. */ + rq->i915 = tl->gt->i915; rq->context = intel_context_get(ce); rq->engine = ce->engine; rq->ring = ce->ring; @@ -1160,6 +1161,9 @@ emit_semaphore_wait(struct i915_request *to, const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; struct i915_sw_fence *wait = &to->submit; + if (to->i915 != from->i915) + goto await_fence; + if (!intel_context_use_semaphores(to->context)) goto await_fence; @@ -1263,7 +1267,8 @@ __i915_request_await_execution(struct i915_request *to, * immediate execution, and so we must wait until it reaches the * active slot. 
*/ - if (intel_engine_has_semaphores(to->engine) && + if (to->i915 == from->i915 && + intel_engine_has_semaphores(to->engine) && !i915_request_has_initial_breadcrumb(to)) { err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); if (err < 0) diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index 1bc1349ba3c2..61a2ad6f1f1c 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -163,6 +163,7 @@ enum { */ struct i915_request { struct dma_fence fence; + struct drm_i915_private *i915; spinlock_t lock; /** -- 2.30.2 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin (?) @ 2021-08-27 13:50 ` Patchwork -1 siblings, 0 replies; 24+ messages in thread From: Patchwork @ 2021-08-27 13:50 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: intel-gfx == Series Details == Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup URL : https://patchwork.freedesktop.org/series/94105/ State : warning == Summary == $ dim checkpatch origin/drm-tip 2758fcf46edb drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup -:25: WARNING:TYPO_SPELLING: 'ambigous' may be misspelled - perhaps 'ambiguous'? #25: We have a somewhat ambigous comment there saying only status of native ^^^^^^^^ total: 0 errors, 1 warnings, 0 checks, 90 lines checked ^ permalink raw reply [flat|nested] 24+ messages in thread
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin (?) (?) @ 2021-08-27 14:21 ` Patchwork -1 siblings, 0 replies; 24+ messages in thread From: Patchwork @ 2021-08-27 14:21 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: intel-gfx [-- Attachment #1: Type: text/plain, Size: 3254 bytes --] == Series Details == Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup URL : https://patchwork.freedesktop.org/series/94105/ State : success == Summary == CI Bug Log - changes from CI_DRM_10530 -> Patchwork_20909 ==================================================== Summary ------- **SUCCESS** No regressions found. External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/index.html Known issues ------------ Here are the changes found in Patchwork_20909 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@amdgpu/amd_basic@cs-gfx: - fi-kbl-soraka: NOTRUN -> [SKIP][1] ([fdo#109271]) +4 similar issues [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/fi-kbl-soraka/igt@amdgpu/amd_basic@cs-gfx.html * igt@core_hotunplug@unbind-rebind: - fi-rkl-guc: [PASS][2] -> [DMESG-WARN][3] ([i915#3925]) [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html * igt@kms_force_connector_basic@force-connector-state: - fi-rkl-11600: [PASS][4] -> [FAIL][5] ([i915#3983]) [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/fi-rkl-11600/igt@kms_force_connector_basic@force-connector-state.html [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/fi-rkl-11600/igt@kms_force_connector_basic@force-connector-state.html * igt@runner@aborted: - fi-rkl-guc: NOTRUN -> [FAIL][6] ([i915#1602]) [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/fi-rkl-guc/igt@runner@aborted.html #### Possible fixes #### * igt@kms_chamelium@hdmi-hpd-fast: - fi-icl-u2: [DMESG-WARN][7] ([i915#2203] / [i915#2868]) -> [PASS][8] [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271 [i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602 [i915#2203]: https://gitlab.freedesktop.org/drm/intel/issues/2203 [i915#2868]: https://gitlab.freedesktop.org/drm/intel/issues/2868 [i915#3925]: https://gitlab.freedesktop.org/drm/intel/issues/3925 [i915#3983]: https://gitlab.freedesktop.org/drm/intel/issues/3983 Participating hosts (38 -> 33) ------------------------------ Missing (5): fi-ilk-m540 bat-adls-5 fi-bsw-cyan bat-jsl-1 fi-bdw-samus Build changes ------------- * Linux: CI_DRM_10530 -> Patchwork_20909 CI-20190529: 20190529 CI_DRM_10530: 63bca765c920120bd9746d9093190d82c4ace341 @ git://anongit.freedesktop.org/gfx-ci/linux IGT_6187: 1afd52c1471dafdf521eae431f3e228826de6de2 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git Patchwork_20909: 2758fcf46edb870b8a654d0a5de589b846b11861 @ git://anongit.freedesktop.org/gfx-ci/linux == Linux commits == 2758fcf46edb drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/index.html [-- Attachment #2: Type: text/html, Size: 3962 bytes --] ^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin @ 2021-08-27 14:39 ` Tvrtko Ursulin -1 siblings, 0 replies; 24+ messages in thread From: Tvrtko Ursulin @ 2021-08-27 14:39 UTC (permalink / raw) To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) when rendering is done on Intel dgfx and scanout/composition on Intel igfx. Before this patch the driver was not quite ready for that setup, mainly because it was able to emit a semaphore wait between the two GPUs, which results in deadlocks because semaphore target location in HWSP is neither shared between the two, nor mapped in both GGTT spaces. To fix it the patch adds an additional check to a couple of relevant code paths in order to prevent using semaphores for inter-engine synchronisation between different driver instances. Patch also moves singly used i915_gem_object_last_write_engine to be private in its only calling unit (debugfs), while modifying it to only show activity belonging to the respective driver instance. What remains in this problem space is the question of the GEM busy ioctl. We have a somewhat ambigous comment there saying only status of native fences will be reported, which could be interpreted as either i915, or native to the drm fd. For now I have decided to leave that as is, meaning any i915 instance activity continues to be reported. v2: * Avoid adding rq->i915. 
(Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- drivers/gpu/drm/i915/i915_request.c | 12 ++++++- 3 files changed, 47 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 48112b9d76df..3043fcbd31bd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) i915_gem_object_unpin_pages(obj); } -static inline struct intel_engine_cs * -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) -{ - struct intel_engine_cs *engine = NULL; - struct dma_fence *fence; - - rcu_read_lock(); - fence = dma_resv_get_excl_unlocked(obj->base.resv); - rcu_read_unlock(); - - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) - engine = to_request(fence)->engine; - dma_fence_put(fence); - - return engine; -} - void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, unsigned int cache_level); void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 04351a851586..55fd6191eb32 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) return "ppgtt"; } +static char * +last_write_engine(struct drm_i915_private *i915, + struct drm_i915_gem_object *obj) +{ + struct intel_engine_cs *engine; + struct dma_fence *fence; + char *res = NULL; + + rcu_read_lock(); + fence = dma_resv_get_excl_unlocked(obj->base.resv); + rcu_read_unlock(); + + if (!fence || dma_fence_is_signaled(fence)) + goto out; + + if (!dma_fence_is_i915(fence)) { + res = "<external-fence>"; + goto out; + } 
+ + engine = to_request(fence)->engine; + if (engine->gt->i915 != i915) { + res = "<external-i915>"; + goto out; + } + + res = engine->name; + +out: + dma_fence_put(fence); + return res; +} + void i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) { struct drm_i915_private *dev_priv = to_i915(obj->base.dev); - struct intel_engine_cs *engine; struct i915_vma *vma; int pin_count = 0; + char *engine; seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", &obj->base, @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) if (i915_gem_object_is_framebuffer(obj)) seq_printf(m, " (fb)"); - engine = i915_gem_object_last_write_engine(obj); + engine = last_write_engine(dev_priv, obj); if (engine) - seq_printf(m, " (%s)", engine->name); + seq_printf(m, " (%s)", engine); } static int i915_gem_object_info(struct seq_file *m, void *data) diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index ce446716d092..64adf619fe82 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, return 0; } +static bool +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) +{ + return to->engine->gt == from->engine->gt; +} + static int emit_semaphore_wait(struct i915_request *to, struct i915_request *from, @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; struct i915_sw_fence *wait = &to->submit; + if (!can_use_semaphore_wait(to, from)) + goto await_fence; + if (!intel_context_use_semaphores(to->context)) goto await_fence; @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, * immediate execution, and so we must wait until it reaches the * active slot. 
*/ - if (intel_engine_has_semaphores(to->engine) && + if (can_use_semaphore_wait(to, from) && + intel_engine_has_semaphores(to->engine) && !i915_request_has_initial_breadcrumb(to)) { err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); if (err < 0) -- 2.30.2 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-27 14:39 ` [Intel-gfx] " Tvrtko Ursulin (?) @ 2021-08-27 14:44 ` Tvrtko Ursulin 2021-08-30 8:26 ` Daniel Vetter -1 siblings, 1 reply; 24+ messages in thread From: Tvrtko Ursulin @ 2021-08-27 14:44 UTC (permalink / raw) To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin On 27/08/2021 15:39, Tvrtko Ursulin wrote: > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > when rendering is done on Intel dgfx and scanout/composition on Intel > igfx. > > Before this patch the driver was not quite ready for that setup, mainly > because it was able to emit a semaphore wait between the two GPUs, which > results in deadlocks because semaphore target location in HWSP is neither > shared between the two, nor mapped in both GGTT spaces. > > To fix it the patch adds an additional check to a couple of relevant code > paths in order to prevent using semaphores for inter-engine > synchronisation between different driver instances. > > Patch also moves singly used i915_gem_object_last_write_engine to be > private in its only calling unit (debugfs), while modifying it to only > show activity belonging to the respective driver instance. > > What remains in this problem space is the question of the GEM busy ioctl. > We have a somewhat ambigous comment there saying only status of native > fences will be reported, which could be interpreted as either i915, or > native to the drm fd. For now I have decided to leave that as is, meaning > any i915 instance activity continues to be reported. > > v2: > * Avoid adding rq->i915. 
(Chris) > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > --- > drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- > drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- > drivers/gpu/drm/i915/i915_request.c | 12 ++++++- > 3 files changed, 47 insertions(+), 21 deletions(-) > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h > index 48112b9d76df..3043fcbd31bd 100644 > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h > @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) > i915_gem_object_unpin_pages(obj); > } > > -static inline struct intel_engine_cs * > -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) > -{ > - struct intel_engine_cs *engine = NULL; > - struct dma_fence *fence; > - > - rcu_read_lock(); > - fence = dma_resv_get_excl_unlocked(obj->base.resv); > - rcu_read_unlock(); > - > - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) > - engine = to_request(fence)->engine; > - dma_fence_put(fence); > - > - return engine; > -} > - > void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, > unsigned int cache_level); > void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > index 04351a851586..55fd6191eb32 100644 > --- a/drivers/gpu/drm/i915/i915_debugfs.c > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) > return "ppgtt"; > } > > +static char * > +last_write_engine(struct drm_i915_private *i915, > + struct drm_i915_gem_object *obj) > +{ > + struct intel_engine_cs *engine; > + struct dma_fence *fence; > + char *res = NULL; > + > + rcu_read_lock(); > + fence = dma_resv_get_excl_unlocked(obj->base.resv); > + rcu_read_unlock(); > + > + if (!fence || 
dma_fence_is_signaled(fence)) > + goto out; > + > + if (!dma_fence_is_i915(fence)) { > + res = "<external-fence>"; > + goto out; > + } > + > + engine = to_request(fence)->engine; > + if (engine->gt->i915 != i915) { > + res = "<external-i915>"; > + goto out; > + } > + > + res = engine->name; > + > +out: > + dma_fence_put(fence); > + return res; > +} > + > void > i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > { > struct drm_i915_private *dev_priv = to_i915(obj->base.dev); > - struct intel_engine_cs *engine; > struct i915_vma *vma; > int pin_count = 0; > + char *engine; > > seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", > &obj->base, > @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > if (i915_gem_object_is_framebuffer(obj)) > seq_printf(m, " (fb)"); > > - engine = i915_gem_object_last_write_engine(obj); > + engine = last_write_engine(dev_priv, obj); > if (engine) > - seq_printf(m, " (%s)", engine->name); > + seq_printf(m, " (%s)", engine); Or I zap this from the code altogether. Not sure it is very useful since the only caller is i915_gem_framebuffer debugfs file and how much it can care about maybe hitting the timing window when exclusive fence will contain something. 
Regards, Tvrtko > } > > static int i915_gem_object_info(struct seq_file *m, void *data) > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > index ce446716d092..64adf619fe82 100644 > --- a/drivers/gpu/drm/i915/i915_request.c > +++ b/drivers/gpu/drm/i915/i915_request.c > @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, > return 0; > } > > +static bool > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > +{ > + return to->engine->gt == from->engine->gt; > +} > + > static int > emit_semaphore_wait(struct i915_request *to, > struct i915_request *from, > @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > struct i915_sw_fence *wait = &to->submit; > > + if (!can_use_semaphore_wait(to, from)) > + goto await_fence; > + > if (!intel_context_use_semaphores(to->context)) > goto await_fence; > > @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, > * immediate execution, and so we must wait until it reaches the > * active slot. > */ > - if (intel_engine_has_semaphores(to->engine) && > + if (can_use_semaphore_wait(to, from) && > + intel_engine_has_semaphores(to->engine) && > !i915_request_has_initial_breadcrumb(to)) { > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > if (err < 0) > ^ permalink raw reply [flat|nested] 24+ messages in thread
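To make the failure mode concrete: the fix boils down to the can_use_semaphore_wait() predicate in the diff above. Below is a minimal user-space C model of that check; the struct layouts are invented stand-ins (only the fields the comparison touches), and only the pointer comparison itself mirrors the patch:

```c
#include <stdbool.h>

/* Stand-ins for the kernel types; only the fields the check uses. */
struct intel_gt { int id; };                      /* one per GPU/tile */
struct intel_engine_cs { struct intel_gt *gt; };
struct i915_request { struct intel_engine_cs *engine; };

/*
 * Mirrors can_use_semaphore_wait() from the patch: a semaphore wait is
 * only safe when both requests execute on the same GT, because the
 * semaphore target lives in that GT's HWSP and is neither shared with
 * nor mapped into the other device's GGTT.
 */
static bool can_use_semaphore_wait(const struct i915_request *to,
                                   const struct i915_request *from)
{
        return to->engine->gt == from->engine->gt;
}

/* Example hybrid topology: igfx and dgfx, one engine each. */
static struct intel_gt igfx_gt = { 0 }, dgfx_gt = { 1 };
static struct intel_engine_cs igfx_rcs = { &igfx_gt };
static struct intel_engine_cs dgfx_rcs = { &dgfx_gt };
static struct i915_request scanout = { &igfx_rcs };   /* composition on igfx */
static struct i915_request render  = { &dgfx_rcs };   /* rendering on dgfx */
static struct i915_request render2 = { &dgfx_rcs };
```

Cross-device pairs fail the check and fall back to the software fence wait (`goto await_fence`), which is exactly what avoids the deadlock described in the commit message.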
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-27 14:44 ` Tvrtko Ursulin @ 2021-08-30 8:26 ` Daniel Vetter 2021-08-31 9:15 ` Tvrtko Ursulin 0 siblings, 1 reply; 24+ messages in thread From: Daniel Vetter @ 2021-08-30 8:26 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: Intel-gfx, dri-devel, Tvrtko Ursulin On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: > > On 27/08/2021 15:39, Tvrtko Ursulin wrote: > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > > when rendering is done on Intel dgfx and scanout/composition on Intel > > igfx. > > > > Before this patch the driver was not quite ready for that setup, mainly > > because it was able to emit a semaphore wait between the two GPUs, which > > results in deadlocks because semaphore target location in HWSP is neither > > shared between the two, nor mapped in both GGTT spaces. > > > > To fix it the patch adds an additional check to a couple of relevant code > > paths in order to prevent using semaphores for inter-engine > > synchronisation between different driver instances. > > > > Patch also moves singly used i915_gem_object_last_write_engine to be > > private in its only calling unit (debugfs), while modifying it to only > > show activity belonging to the respective driver instance. > > > > What remains in this problem space is the question of the GEM busy ioctl. > > We have a somewhat ambigous comment there saying only status of native > > fences will be reported, which could be interpreted as either i915, or > > native to the drm fd. For now I have decided to leave that as is, meaning > > any i915 instance activity continues to be reported. > > > > v2: > > * Avoid adding rq->i915. (Chris) > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Can't we just delete semaphore code and done? 
- GuC won't have it - media team benchmarked on top of softpin media driver, found no difference - pre-gen8 semaphore code was also silently ditched and no one cared Plus removing semaphore code would greatly simplify conversion to drm/sched. > > --- > > drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- > > drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- > > drivers/gpu/drm/i915/i915_request.c | 12 ++++++- > > 3 files changed, 47 insertions(+), 21 deletions(-) > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > index 48112b9d76df..3043fcbd31bd 100644 > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) > > i915_gem_object_unpin_pages(obj); > > } > > -static inline struct intel_engine_cs * > > -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) > > -{ > > - struct intel_engine_cs *engine = NULL; > > - struct dma_fence *fence; > > - > > - rcu_read_lock(); > > - fence = dma_resv_get_excl_unlocked(obj->base.resv); > > - rcu_read_unlock(); > > - > > - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) > > - engine = to_request(fence)->engine; > > - dma_fence_put(fence); > > - > > - return engine; > > -} > > - > > void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, > > unsigned int cache_level); > > void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > > index 04351a851586..55fd6191eb32 100644 > > --- a/drivers/gpu/drm/i915/i915_debugfs.c > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > > @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) > > return "ppgtt"; > > } > > +static char * > > +last_write_engine(struct drm_i915_private *i915, > > + 
struct drm_i915_gem_object *obj) > > +{ > > + struct intel_engine_cs *engine; > > + struct dma_fence *fence; > > + char *res = NULL; > > + > > + rcu_read_lock(); > > + fence = dma_resv_get_excl_unlocked(obj->base.resv); > > + rcu_read_unlock(); > > + > > + if (!fence || dma_fence_is_signaled(fence)) > > + goto out; > > + > > + if (!dma_fence_is_i915(fence)) { > > + res = "<external-fence>"; > > + goto out; > > + } > > + > > + engine = to_request(fence)->engine; > > + if (engine->gt->i915 != i915) { > > + res = "<external-i915>"; > > + goto out; > > + } > > + > > + res = engine->name; > > + > > +out: > > + dma_fence_put(fence); > > + return res; > > +} > > + > > void > > i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > { > > struct drm_i915_private *dev_priv = to_i915(obj->base.dev); > > - struct intel_engine_cs *engine; > > struct i915_vma *vma; > > int pin_count = 0; > > + char *engine; > > seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", > > &obj->base, > > @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > if (i915_gem_object_is_framebuffer(obj)) > > seq_printf(m, " (fb)"); > > - engine = i915_gem_object_last_write_engine(obj); > > + engine = last_write_engine(dev_priv, obj); > > if (engine) > > - seq_printf(m, " (%s)", engine->name); > > + seq_printf(m, " (%s)", engine); > > Or I zap this from the code altogether. Not sure it is very useful since the > only caller is i915_gem_framebuffer debugfs file and how much it can care > about maybe hitting the timing window when exclusive fence will contain > something. Ideally we'd just look at the fence timeline name. But i915 has this very convoluted typesafe-by-rcu reuse which means we actually can't do that, and our fence timeline name is very useless. Would be good to fix that, Matt Auld has started an attempt but didn't get very far. 
-Daniel > > Regards, > > Tvrtko > > > } > > static int i915_gem_object_info(struct seq_file *m, void *data) > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > > index ce446716d092..64adf619fe82 100644 > > --- a/drivers/gpu/drm/i915/i915_request.c > > +++ b/drivers/gpu/drm/i915/i915_request.c > > @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, > > return 0; > > } > > +static bool > > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > > +{ > > + return to->engine->gt == from->engine->gt; > > +} > > + > > static int > > emit_semaphore_wait(struct i915_request *to, > > struct i915_request *from, > > @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, > > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > > struct i915_sw_fence *wait = &to->submit; > > + if (!can_use_semaphore_wait(to, from)) > > + goto await_fence; > > + > > if (!intel_context_use_semaphores(to->context)) > > goto await_fence; > > @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, > > * immediate execution, and so we must wait until it reaches the > > * active slot. > > */ > > - if (intel_engine_has_semaphores(to->engine) && > > + if (can_use_semaphore_wait(to, from) && > > + intel_engine_has_semaphores(to->engine) && > > !i915_request_has_initial_breadcrumb(to)) { > > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > > if (err < 0) > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 24+ messages in thread
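Daniel's "just look at the fence timeline name" idea for the debugfs output maps onto the dma_fence_ops->get_timeline_name() hook, which avoids upcasting to i915_request entirely. A minimal sketch of that approach (the ops structure here is a cut-down stand-in for the real one, and the example names are invented):

```c
#include <stddef.h>
#include <string.h>

/* Cut-down model of struct dma_fence_ops; the real one has more hooks. */
struct dma_fence;
struct dma_fence_ops {
        const char *(*get_driver_name)(struct dma_fence *fence);
        const char *(*get_timeline_name)(struct dma_fence *fence);
};
struct dma_fence { const struct dma_fence_ops *ops; };

/*
 * What debugfs could print without upcasting: the timeline name works
 * for any driver's fence, native or foreign, as long as the name is
 * meaningful (which, per the discussion, i915's currently is not).
 */
static const char *describe_writer(struct dma_fence *fence)
{
        return fence ? fence->ops->get_timeline_name(fence) : "<idle>";
}

/* Invented example fence whose timeline is named after an engine. */
static const char *ex_driver(struct dma_fence *f)   { (void)f; return "i915"; }
static const char *ex_timeline(struct dma_fence *f) { (void)f; return "rcs0"; }
static const struct dma_fence_ops example_ops = { ex_driver, ex_timeline };
static struct dma_fence example_fence = { &example_ops };
```

The catch Daniel notes is that i915's typesafe-by-RCU request reuse makes dereferencing the name racy, so this only becomes viable once the timeline-name lifetime is fixed.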
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-30 8:26 ` Daniel Vetter @ 2021-08-31 9:15 ` Tvrtko Ursulin 2021-08-31 12:43 ` Daniel Vetter 0 siblings, 1 reply; 24+ messages in thread From: Tvrtko Ursulin @ 2021-08-31 9:15 UTC (permalink / raw) To: Daniel Vetter; +Cc: Intel-gfx, dri-devel, Tvrtko Ursulin On 30/08/2021 09:26, Daniel Vetter wrote: > On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: >> >> On 27/08/2021 15:39, Tvrtko Ursulin wrote: >>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>> >>> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >>> when rendering is done on Intel dgfx and scanout/composition on Intel >>> igfx. >>> >>> Before this patch the driver was not quite ready for that setup, mainly >>> because it was able to emit a semaphore wait between the two GPUs, which >>> results in deadlocks because semaphore target location in HWSP is neither >>> shared between the two, nor mapped in both GGTT spaces. >>> >>> To fix it the patch adds an additional check to a couple of relevant code >>> paths in order to prevent using semaphores for inter-engine >>> synchronisation between different driver instances. >>> >>> Patch also moves singly used i915_gem_object_last_write_engine to be >>> private in its only calling unit (debugfs), while modifying it to only >>> show activity belonging to the respective driver instance. >>> >>> What remains in this problem space is the question of the GEM busy ioctl. >>> We have a somewhat ambigous comment there saying only status of native >>> fences will be reported, which could be interpreted as either i915, or >>> native to the drm fd. For now I have decided to leave that as is, meaning >>> any i915 instance activity continues to be reported. >>> >>> v2: >>> * Avoid adding rq->i915. (Chris) >>> >>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > Can't we just delete semaphore code and done? 
> - GuC won't have it
> - media team benchmarked on top of softpin media driver, found no
> difference

You have S-curve data for saturated workloads, or something else? How thorough, and which media team, I guess.

From memory it was a nice win for some benchmarks (the non-saturated ones), but as I have told you previously, we haven't been putting numbers in commit messages since it wasn't allowed. I may be able to dig out some more details if I went trawling through GEM channel IRC logs, although probably not the actual numbers since those were usually on pastebin. Or you go and talk with Chris since he probably remembers more details. Or you just decide you don't care and remove it. I wouldn't do that without putting the complete story in writing, but it's your call after all.

Anyway, without the debugfs churn it is more or less a two-line patch to fix the igfx + dgfx hybrid setup. So this could go in while we mull it over. I'd just refine it to use a GGTT check instead of a GT check. And unless DG1 ends up being GuC only.

> - pre-gen8 semaphore code was also silently ditched and no one cared
>
> Plus removing semaphore code would greatly simplify conversion to
> drm/sched.
> >>> --- >>> drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- >>> drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- >>> drivers/gpu/drm/i915/i915_request.c | 12 ++++++- >>> 3 files changed, 47 insertions(+), 21 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>> index 48112b9d76df..3043fcbd31bd 100644 >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>> @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) >>> i915_gem_object_unpin_pages(obj); >>> } >>> -static inline struct intel_engine_cs * >>> -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) >>> -{ >>> - struct intel_engine_cs *engine = NULL; >>> - struct dma_fence *fence; >>> - >>> - rcu_read_lock(); >>> - fence = dma_resv_get_excl_unlocked(obj->base.resv); >>> - rcu_read_unlock(); >>> - >>> - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) >>> - engine = to_request(fence)->engine; >>> - dma_fence_put(fence); >>> - >>> - return engine; >>> -} >>> - >>> void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, >>> unsigned int cache_level); >>> void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); >>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c >>> index 04351a851586..55fd6191eb32 100644 >>> --- a/drivers/gpu/drm/i915/i915_debugfs.c >>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c >>> @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) >>> return "ppgtt"; >>> } >>> +static char * >>> +last_write_engine(struct drm_i915_private *i915, >>> + struct drm_i915_gem_object *obj) >>> +{ >>> + struct intel_engine_cs *engine; >>> + struct dma_fence *fence; >>> + char *res = NULL; >>> + >>> + rcu_read_lock(); >>> + fence = dma_resv_get_excl_unlocked(obj->base.resv); >>> + rcu_read_unlock(); 
>>> + >>> + if (!fence || dma_fence_is_signaled(fence)) >>> + goto out; >>> + >>> + if (!dma_fence_is_i915(fence)) { >>> + res = "<external-fence>"; >>> + goto out; >>> + } >>> + >>> + engine = to_request(fence)->engine; >>> + if (engine->gt->i915 != i915) { >>> + res = "<external-i915>"; >>> + goto out; >>> + } >>> + >>> + res = engine->name; >>> + >>> +out: >>> + dma_fence_put(fence); >>> + return res; >>> +} >>> + >>> void >>> i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>> { >>> struct drm_i915_private *dev_priv = to_i915(obj->base.dev); >>> - struct intel_engine_cs *engine; >>> struct i915_vma *vma; >>> int pin_count = 0; >>> + char *engine; >>> seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", >>> &obj->base, >>> @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>> if (i915_gem_object_is_framebuffer(obj)) >>> seq_printf(m, " (fb)"); >>> - engine = i915_gem_object_last_write_engine(obj); >>> + engine = last_write_engine(dev_priv, obj); >>> if (engine) >>> - seq_printf(m, " (%s)", engine->name); >>> + seq_printf(m, " (%s)", engine); >> >> Or I zap this from the code altogether. Not sure it is very useful since the >> only caller is i915_gem_framebuffer debugfs file and how much it can care >> about maybe hitting the timing window when exclusive fence will contain >> something. > > Ideally we'd just look at the fence timeline name. But i915 has this very > convoluted typesafe-by-rcu reuse which means we actually can't do that, > and our fence timeline name is very useless. Why do we even care to output any of this here? I'd just remove it since it is a very transient state with an extremely short window of opportunity to make it show anything. Which I think makes it pretty useless in debugfs. Regards, Tvrtko > > Would be good to fix that, Matt Auld has started an attempt but didn't get > very far. 
> -Daniel > >> >> Regards, >> >> Tvrtko >> >>> } >>> static int i915_gem_object_info(struct seq_file *m, void *data) >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c >>> index ce446716d092..64adf619fe82 100644 >>> --- a/drivers/gpu/drm/i915/i915_request.c >>> +++ b/drivers/gpu/drm/i915/i915_request.c >>> @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, >>> return 0; >>> } >>> +static bool >>> +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) >>> +{ >>> + return to->engine->gt == from->engine->gt; >>> +} >>> + >>> static int >>> emit_semaphore_wait(struct i915_request *to, >>> struct i915_request *from, >>> @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, >>> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >>> struct i915_sw_fence *wait = &to->submit; >>> + if (!can_use_semaphore_wait(to, from)) >>> + goto await_fence; >>> + >>> if (!intel_context_use_semaphores(to->context)) >>> goto await_fence; >>> @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, >>> * immediate execution, and so we must wait until it reaches the >>> * active slot. >>> */ >>> - if (intel_engine_has_semaphores(to->engine) && >>> + if (can_use_semaphore_wait(to, from) && >>> + intel_engine_has_semaphores(to->engine) && >>> !i915_request_has_initial_breadcrumb(to)) { >>> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >>> if (err < 0) >>> > ^ permalink raw reply [flat|nested] 24+ messages in thread
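Tvrtko's suggested refinement above, comparing GGTT spaces rather than GTs, can be sketched the same way as the original check. The topology below (two tiles sharing one GGTT, next to a discrete card with its own) is purely hypothetical, chosen only to show where the two predicates would disagree:

```c
#include <stdbool.h>

/* Invented stand-ins; only the pointer chain engine->gt->ggtt matters. */
struct i915_ggtt { int id; };
struct intel_gt { struct i915_ggtt *ggtt; };
struct intel_engine_cs { struct intel_gt *gt; };
struct i915_request { struct intel_engine_cs *engine; };

/*
 * Hypothetical refinement: what makes a semaphore wait workable is
 * whether the target HWSP location is reachable, i.e. whether both
 * engines resolve it through the same GGTT -- not whether they happen
 * to sit on the same GT.
 */
static bool can_use_semaphore_wait(const struct i915_request *to,
                                   const struct i915_request *from)
{
        return to->engine->gt->ggtt == from->engine->gt->ggtt;
}

/* Invented topology: two tiles sharing a GGTT, plus a discrete GPU. */
static struct i915_ggtt shared_ggtt = { 0 }, dgfx_ggtt = { 1 };
static struct intel_gt tile0 = { &shared_ggtt }, tile1 = { &shared_ggtt };
static struct intel_gt dgfx_gt = { &dgfx_ggtt };
static struct intel_engine_cs e_tile0 = { &tile0 }, e_tile1 = { &tile1 };
static struct intel_engine_cs e_dgfx = { &dgfx_gt };
static struct i915_request rq_tile0 = { &e_tile0 }, rq_tile1 = { &e_tile1 };
static struct i915_request rq_dgfx = { &e_dgfx };
```

Under this (assumed) topology a plain GT comparison would forbid the tile0/tile1 pair even though the semaphore target is reachable from both, while the GGTT comparison still rejects the cross-device pair.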
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-31 9:15 ` Tvrtko Ursulin @ 2021-08-31 12:43 ` Daniel Vetter 2021-08-31 13:18 ` Tvrtko Ursulin 0 siblings, 1 reply; 24+ messages in thread From: Daniel Vetter @ 2021-08-31 12:43 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: Daniel Vetter, Intel-gfx, dri-devel, Tvrtko Ursulin On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: > > On 30/08/2021 09:26, Daniel Vetter wrote: > > On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: > > > > > > On 27/08/2021 15:39, Tvrtko Ursulin wrote: > > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > > > > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > > > > when rendering is done on Intel dgfx and scanout/composition on Intel > > > > igfx. > > > > > > > > Before this patch the driver was not quite ready for that setup, mainly > > > > because it was able to emit a semaphore wait between the two GPUs, which > > > > results in deadlocks because semaphore target location in HWSP is neither > > > > shared between the two, nor mapped in both GGTT spaces. > > > > > > > > To fix it the patch adds an additional check to a couple of relevant code > > > > paths in order to prevent using semaphores for inter-engine > > > > synchronisation between different driver instances. > > > > > > > > Patch also moves singly used i915_gem_object_last_write_engine to be > > > > private in its only calling unit (debugfs), while modifying it to only > > > > show activity belonging to the respective driver instance. > > > > > > > > What remains in this problem space is the question of the GEM busy ioctl. > > > > We have a somewhat ambigous comment there saying only status of native > > > > fences will be reported, which could be interpreted as either i915, or > > > > native to the drm fd. For now I have decided to leave that as is, meaning > > > > any i915 instance activity continues to be reported. 
> > > > > > > > v2: > > > > * Avoid adding rq->i915. (Chris) > > > > > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > Can't we just delete semaphore code and done? > > - GuC won't have it > > - media team benchmarked on top of softpin media driver, found no > > difference > > You have S-curve for saturated workloads or something else? How thorough and > which media team I guess. > > From memory it was a nice win for some benchmarks (non-saturated ones), but > as I have told you previously, we haven't been putting numbers in commit > messages since it wasn't allowed. I may be able to dig out some more details > if I went trawling through GEM channel IRC logs, although probably not the > actual numbers since those were usually on pastebin. Or you go an talk with > Chris since he probably remembers more details. Or you just decide you don't > care and remove it. I wouldn't do that without putting the complete story in > writing, but it's your call after all. Media has also changed, they're not using relocations anymore. Unless there's solid data performance tuning of any kind that gets in the way simply needs to be removed. Yes this is radical, but the codebase is in a state to require this. So either way we'd need to rebenchmark this if it's really required. Also if we really need this code still someone needs to fix the design, the current code is making layering violations an art form. > Anyway, without the debugfs churn it is more or less two line patch to fix > igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd > just refine it to use a GGTT check instead of GT. And unless DG1 ends up > being GuC only. The minimal robust fix here is imo to stop us from upcasting dma_fence to i915_request if it's not for our device. Not sprinkle code here into the semaphore code. We shouldn't even get this far with foreign fences. 
-Daniel > > > - pre-gen8 semaphore code was also silently ditched and no one cared > > > > Plus removing semaphore code would greatly simplify conversion to > > drm/sched. > > > > > > --- > > > > drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- > > > > drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- > > > > drivers/gpu/drm/i915/i915_request.c | 12 ++++++- > > > > 3 files changed, 47 insertions(+), 21 deletions(-) > > > > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > index 48112b9d76df..3043fcbd31bd 100644 > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) > > > > i915_gem_object_unpin_pages(obj); > > > > } > > > > -static inline struct intel_engine_cs * > > > > -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) > > > > -{ > > > > - struct intel_engine_cs *engine = NULL; > > > > - struct dma_fence *fence; > > > > - > > > > - rcu_read_lock(); > > > > - fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > - rcu_read_unlock(); > > > > - > > > > - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) > > > > - engine = to_request(fence)->engine; > > > > - dma_fence_put(fence); > > > > - > > > > - return engine; > > > > -} > > > > - > > > > void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, > > > > unsigned int cache_level); > > > > void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); > > > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > > > > index 04351a851586..55fd6191eb32 100644 > > > > --- a/drivers/gpu/drm/i915/i915_debugfs.c > > > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > > > > @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) > > > > return "ppgtt"; 
> > > > } > > > > +static char * > > > > +last_write_engine(struct drm_i915_private *i915, > > > > + struct drm_i915_gem_object *obj) > > > > +{ > > > > + struct intel_engine_cs *engine; > > > > + struct dma_fence *fence; > > > > + char *res = NULL; > > > > + > > > > + rcu_read_lock(); > > > > + fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > + rcu_read_unlock(); > > > > + > > > > + if (!fence || dma_fence_is_signaled(fence)) > > > > + goto out; > > > > + > > > > + if (!dma_fence_is_i915(fence)) { > > > > + res = "<external-fence>"; > > > > + goto out; > > > > + } > > > > + > > > > + engine = to_request(fence)->engine; > > > > + if (engine->gt->i915 != i915) { > > > > + res = "<external-i915>"; > > > > + goto out; > > > > + } > > > > + > > > > + res = engine->name; > > > > + > > > > +out: > > > > + dma_fence_put(fence); > > > > + return res; > > > > +} > > > > + > > > > void > > > > i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > { > > > > struct drm_i915_private *dev_priv = to_i915(obj->base.dev); > > > > - struct intel_engine_cs *engine; > > > > struct i915_vma *vma; > > > > int pin_count = 0; > > > > + char *engine; > > > > seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", > > > > &obj->base, > > > > @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > if (i915_gem_object_is_framebuffer(obj)) > > > > seq_printf(m, " (fb)"); > > > > - engine = i915_gem_object_last_write_engine(obj); > > > > + engine = last_write_engine(dev_priv, obj); > > > > if (engine) > > > > - seq_printf(m, " (%s)", engine->name); > > > > + seq_printf(m, " (%s)", engine); > > > > > > Or I zap this from the code altogether. Not sure it is very useful since the > > > only caller is i915_gem_framebuffer debugfs file and how much it can care > > > about maybe hitting the timing window when exclusive fence will contain > > > something. 
> > > > Ideally we'd just look at the fence timeline name. But i915 has this very > > convoluted typesafe-by-rcu reuse which means we actually can't do that, > > and our fence timeline name is very useless. > > Why do we even care to output any of this here? I'd just remove it since it > is a very transient state with an extremely short window of opportunity to > make it show anything. Which I think makes it pretty useless in debugfs. > > Regards, > > Tvrtko > > > > > Would be good to fix that, Matt Auld has started an attempt but didn't get > > very far. > > -Daniel > > > > > > > > Regards, > > > > > > Tvrtko > > > > > > > } > > > > static int i915_gem_object_info(struct seq_file *m, void *data) > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > > > > index ce446716d092..64adf619fe82 100644 > > > > --- a/drivers/gpu/drm/i915/i915_request.c > > > > +++ b/drivers/gpu/drm/i915/i915_request.c > > > > @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, > > > > return 0; > > > > } > > > > +static bool > > > > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > > > > +{ > > > > + return to->engine->gt == from->engine->gt; > > > > +} > > > > + > > > > static int > > > > emit_semaphore_wait(struct i915_request *to, > > > > struct i915_request *from, > > > > @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, > > > > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > > > > struct i915_sw_fence *wait = &to->submit; > > > > + if (!can_use_semaphore_wait(to, from)) > > > > + goto await_fence; > > > > + > > > > if (!intel_context_use_semaphores(to->context)) > > > > goto await_fence; > > > > @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, > > > > * immediate execution, and so we must wait until it reaches the > > > > * active slot. 
> > > > */ > > > > - if (intel_engine_has_semaphores(to->engine) && > > > > + if (can_use_semaphore_wait(to, from) && > > > > + intel_engine_has_semaphores(to->engine) && > > > > !i915_request_has_initial_breadcrumb(to)) { > > > > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > > > > if (err < 0) > > > > > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 24+ messages in thread
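Daniel's alternative, refusing the upcast itself so foreign fences never reach the semaphore code, would look roughly like the toy model below. The helper name to_native_request() and the device field are invented for illustration; only the two-step check (is it our fence type at all, then is it our driver instance) reflects the suggestion:

```c
#include <stddef.h>

/* Toy model of the "don't upcast foreign fences" guard; all names are
 * stand-ins, not the real i915 API. */
struct dma_fence_ops { const char *driver; };
struct dma_fence { const struct dma_fence_ops *ops; };

static const struct dma_fence_ops i915_fence_ops  = { "i915" };
static const struct dma_fence_ops other_fence_ops = { "amdgpu" };

struct drm_i915_private { int instance; };
struct i915_request {
        struct dma_fence fence;          /* embedded, as in the driver */
        struct drm_i915_private *i915;   /* owning device (hypothetical) */
};

/*
 * Upcast only when the fence is ours AND belongs to this device
 * instance; otherwise return NULL so the caller treats it as an opaque
 * dma_fence and simply waits on it, never reaching semaphore emission.
 */
static struct i915_request *
to_native_request(struct drm_i915_private *i915, struct dma_fence *f)
{
        struct i915_request *rq;

        if (f->ops != &i915_fence_ops)   /* a foreign driver's fence */
                return NULL;
        rq = (struct i915_request *)((char *)f -
                                     offsetof(struct i915_request, fence));
        if (rq->i915 != i915)            /* another i915 instance */
                return NULL;
        return rq;
}

/* Two i915 instances plus a non-i915 fence. */
static struct drm_i915_private igfx = { 0 }, dgfx = { 1 };
static struct i915_request igfx_rq = { { &i915_fence_ops }, &igfx };
static struct i915_request dgfx_rq = { { &i915_fence_ops }, &dgfx };
static struct dma_fence foreign = { &other_fence_ops };
```

As Tvrtko points out in the next reply, a per-device check like this is still too coarse for multi-tile parts, which is what motivates keeping the reachability test (GT or GGTT) in the request code instead.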
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-31 12:43 ` Daniel Vetter @ 2021-08-31 13:18 ` Tvrtko Ursulin 2021-09-02 14:33 ` Daniel Vetter 0 siblings, 1 reply; 24+ messages in thread From: Tvrtko Ursulin @ 2021-08-31 13:18 UTC (permalink / raw) To: Daniel Vetter; +Cc: Intel-gfx, dri-devel, Tvrtko Ursulin On 31/08/2021 13:43, Daniel Vetter wrote: > On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: >> >> On 30/08/2021 09:26, Daniel Vetter wrote: >>> On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: >>>> >>>> On 27/08/2021 15:39, Tvrtko Ursulin wrote: >>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>>>> >>>>> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >>>>> when rendering is done on Intel dgfx and scanout/composition on Intel >>>>> igfx. >>>>> >>>>> Before this patch the driver was not quite ready for that setup, mainly >>>>> because it was able to emit a semaphore wait between the two GPUs, which >>>>> results in deadlocks because semaphore target location in HWSP is neither >>>>> shared between the two, nor mapped in both GGTT spaces. >>>>> >>>>> To fix it the patch adds an additional check to a couple of relevant code >>>>> paths in order to prevent using semaphores for inter-engine >>>>> synchronisation between different driver instances. >>>>> >>>>> Patch also moves singly used i915_gem_object_last_write_engine to be >>>>> private in its only calling unit (debugfs), while modifying it to only >>>>> show activity belonging to the respective driver instance. >>>>> >>>>> What remains in this problem space is the question of the GEM busy ioctl. >>>>> We have a somewhat ambigous comment there saying only status of native >>>>> fences will be reported, which could be interpreted as either i915, or >>>>> native to the drm fd. For now I have decided to leave that as is, meaning >>>>> any i915 instance activity continues to be reported. 
>>>>> >>>>> v2: >>>>> * Avoid adding rq->i915. (Chris) >>>>> >>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>> >>> Can't we just delete semaphore code and done? >>> - GuC won't have it >>> - media team benchmarked on top of softpin media driver, found no >>> difference >> >> You have S-curve for saturated workloads or something else? How thorough and >> which media team I guess. >> >> From memory it was a nice win for some benchmarks (non-saturated ones), but >> as I have told you previously, we haven't been putting numbers in commit >> messages since it wasn't allowed. I may be able to dig out some more details >> if I went trawling through GEM channel IRC logs, although probably not the >> actual numbers since those were usually on pastebin. Or you go an talk with >> Chris since he probably remembers more details. Or you just decide you don't >> care and remove it. I wouldn't do that without putting the complete story in >> writing, but it's your call after all. > > Media has also changed, they're not using relocations anymore. Meaning you think it changes the benchmarking story? When coupled with removal of GPU relocations then possibly yes. > Unless there's solid data performance tuning of any kind that gets in the > way simply needs to be removed. Yes this is radical, but the codebase is > in a state to require this. > > So either way we'd need to rebenchmark this if it's really required. Also Therefore can you share what benchmarks have been done or is it secret? As said, I think the non-saturated case was the more interesting one here. > if we really need this code still someone needs to fix the design, the > current code is making layering violations an art form. > >> Anyway, without the debugfs churn it is more or less two line patch to fix >> igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd >> just refine it to use a GGTT check instead of GT. And unless DG1 ends up >> being GuC only. 
> > The minimal robust fix here is imo to stop us from upcasting dma_fence to > i915_request if it's not for our device. Not sprinkle code here into the > semaphore code. We shouldn't even get this far with foreign fences. Device check does not work for multi-tile. It was one of my earlier attempts before I realized the problem. You'll see v3 which I think handles all the cases. You also forgot to comment on the question lower in the email. I'll just send a patch which removes that anyway so you can comment there. Regards, Tvrtko > -Daniel > >> >>> - pre-gen8 semaphore code was also silently ditched and no one cared >>> >>> Plus removing semaphore code would greatly simplify conversion to >>> drm/sched. >>> >>>>> --- >>>>> drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- >>>>> drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- >>>>> drivers/gpu/drm/i915/i915_request.c | 12 ++++++- >>>>> 3 files changed, 47 insertions(+), 21 deletions(-) >>>>> >>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>> index 48112b9d76df..3043fcbd31bd 100644 >>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>> @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) >>>>> i915_gem_object_unpin_pages(obj); >>>>> } >>>>> -static inline struct intel_engine_cs * >>>>> -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) >>>>> -{ >>>>> - struct intel_engine_cs *engine = NULL; >>>>> - struct dma_fence *fence; >>>>> - >>>>> - rcu_read_lock(); >>>>> - fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>> - rcu_read_unlock(); >>>>> - >>>>> - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) >>>>> - engine = to_request(fence)->engine; >>>>> - dma_fence_put(fence); >>>>> - >>>>> - return engine; >>>>> -} >>>>> - >>>>> void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, 
>>>>> unsigned int cache_level); >>>>> void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); >>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c >>>>> index 04351a851586..55fd6191eb32 100644 >>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c >>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c >>>>> @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) >>>>> return "ppgtt"; >>>>> } >>>>> +static char * >>>>> +last_write_engine(struct drm_i915_private *i915, >>>>> + struct drm_i915_gem_object *obj) >>>>> +{ >>>>> + struct intel_engine_cs *engine; >>>>> + struct dma_fence *fence; >>>>> + char *res = NULL; >>>>> + >>>>> + rcu_read_lock(); >>>>> + fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>> + rcu_read_unlock(); >>>>> + >>>>> + if (!fence || dma_fence_is_signaled(fence)) >>>>> + goto out; >>>>> + >>>>> + if (!dma_fence_is_i915(fence)) { >>>>> + res = "<external-fence>"; >>>>> + goto out; >>>>> + } >>>>> + >>>>> + engine = to_request(fence)->engine; >>>>> + if (engine->gt->i915 != i915) { >>>>> + res = "<external-i915>"; >>>>> + goto out; >>>>> + } >>>>> + >>>>> + res = engine->name; >>>>> + >>>>> +out: >>>>> + dma_fence_put(fence); >>>>> + return res; >>>>> +} >>>>> + >>>>> void >>>>> i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>> { >>>>> struct drm_i915_private *dev_priv = to_i915(obj->base.dev); >>>>> - struct intel_engine_cs *engine; >>>>> struct i915_vma *vma; >>>>> int pin_count = 0; >>>>> + char *engine; >>>>> seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", >>>>> &obj->base, >>>>> @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>> if (i915_gem_object_is_framebuffer(obj)) >>>>> seq_printf(m, " (fb)"); >>>>> - engine = i915_gem_object_last_write_engine(obj); >>>>> + engine = last_write_engine(dev_priv, obj); >>>>> if (engine) >>>>> - seq_printf(m, " (%s)", engine->name); 
>>>>> + seq_printf(m, " (%s)", engine); >>>> >>>> Or I zap this from the code altogether. Not sure it is very useful since the >>>> only caller is i915_gem_framebuffer debugfs file and how much it can care >>>> about maybe hitting the timing window when exclusive fence will contain >>>> something. >>> >>> Ideally we'd just look at the fence timeline name. But i915 has this very >>> convoluted typesafe-by-rcu reuse which means we actually can't do that, >>> and our fence timeline name is very useless. >> >> Why do we even care to output any of this here? I'd just remove it since it >> is a very transient state with an extremely short window of opportunity to >> make it show anything. Which I think makes it pretty useless in debugfs. >> >> Regards, >> >> Tvrtko >> >>> >>> Would be good to fix that, Matt Auld has started an attempt but didn't get >>> very far. >>> -Daniel >>> >>>> >>>> Regards, >>>> >>>> Tvrtko >>>> >>>>> } >>>>> static int i915_gem_object_info(struct seq_file *m, void *data) >>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c >>>>> index ce446716d092..64adf619fe82 100644 >>>>> --- a/drivers/gpu/drm/i915/i915_request.c >>>>> +++ b/drivers/gpu/drm/i915/i915_request.c >>>>> @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, >>>>> return 0; >>>>> } >>>>> +static bool >>>>> +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) >>>>> +{ >>>>> + return to->engine->gt == from->engine->gt; >>>>> +} >>>>> + >>>>> static int >>>>> emit_semaphore_wait(struct i915_request *to, >>>>> struct i915_request *from, >>>>> @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, >>>>> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >>>>> struct i915_sw_fence *wait = &to->submit; >>>>> + if (!can_use_semaphore_wait(to, from)) >>>>> + goto await_fence; >>>>> + >>>>> if (!intel_context_use_semaphores(to->context)) >>>>> goto await_fence; >>>>> @@ -1263,7 +1272,8 @@ 
__i915_request_await_execution(struct i915_request *to, >>>>> * immediate execution, and so we must wait until it reaches the >>>>> * active slot. >>>>> */ >>>>> - if (intel_engine_has_semaphores(to->engine) && >>>>> + if (can_use_semaphore_wait(to, from) && >>>>> + intel_engine_has_semaphores(to->engine) && >>>>> !i915_request_has_initial_breadcrumb(to)) { >>>>> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >>>>> if (err < 0) >>>>> >>> > ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-08-31 13:18 ` Tvrtko Ursulin @ 2021-09-02 14:33 ` Daniel Vetter 2021-09-02 15:01 ` Tvrtko Ursulin 0 siblings, 1 reply; 24+ messages in thread From: Daniel Vetter @ 2021-09-02 14:33 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: Daniel Vetter, Intel-gfx, dri-devel, Tvrtko Ursulin On Tue, Aug 31, 2021 at 02:18:15PM +0100, Tvrtko Ursulin wrote: > > On 31/08/2021 13:43, Daniel Vetter wrote: > > On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: > > > > > > On 30/08/2021 09:26, Daniel Vetter wrote: > > > > On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: > > > > > > > > > > On 27/08/2021 15:39, Tvrtko Ursulin wrote: > > > > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > > > > > > > > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > > > > > > when rendering is done on Intel dgfx and scanout/composition on Intel > > > > > > igfx. > > > > > > > > > > > > Before this patch the driver was not quite ready for that setup, mainly > > > > > > because it was able to emit a semaphore wait between the two GPUs, which > > > > > > results in deadlocks because semaphore target location in HWSP is neither > > > > > > shared between the two, nor mapped in both GGTT spaces. > > > > > > > > > > > > To fix it the patch adds an additional check to a couple of relevant code > > > > > > paths in order to prevent using semaphores for inter-engine > > > > > > synchronisation between different driver instances. > > > > > > > > > > > > Patch also moves singly used i915_gem_object_last_write_engine to be > > > > > > private in its only calling unit (debugfs), while modifying it to only > > > > > > show activity belonging to the respective driver instance. > > > > > > > > > > > > What remains in this problem space is the question of the GEM busy ioctl. 
> > > > > > We have a somewhat ambigous comment there saying only status of native > > > > > > fences will be reported, which could be interpreted as either i915, or > > > > > > native to the drm fd. For now I have decided to leave that as is, meaning > > > > > > any i915 instance activity continues to be reported. > > > > > > > > > > > > v2: > > > > > > * Avoid adding rq->i915. (Chris) > > > > > > > > > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > > > > > Can't we just delete semaphore code and done? > > > > - GuC won't have it > > > > - media team benchmarked on top of softpin media driver, found no > > > > difference > > > > > > You have S-curve for saturated workloads or something else? How thorough and > > > which media team I guess. > > > > > > From memory it was a nice win for some benchmarks (non-saturated ones), but > > > as I have told you previously, we haven't been putting numbers in commit > > > messages since it wasn't allowed. I may be able to dig out some more details > > > if I went trawling through GEM channel IRC logs, although probably not the > > > actual numbers since those were usually on pastebin. Or you go an talk with > > > Chris since he probably remembers more details. Or you just decide you don't > > > care and remove it. I wouldn't do that without putting the complete story in > > > writing, but it's your call after all. > > > > Media has also changed, they're not using relocations anymore. > > Meaning you think it changes the benchmarking story? When coupled with > removal of GPU relocations then possibly yes. > > > Unless there's solid data performance tuning of any kind that gets in the > > way simply needs to be removed. Yes this is radical, but the codebase is > > in a state to require this. > > > > So either way we'd need to rebenchmark this if it's really required. Also > > Therefore can you share what benchmarks have been done or is it secret? 
As > said, I think the non-saturated case was the more interesting one here. > > > if we really need this code still someone needs to fix the design, the > > current code is making layering violations an art form. > > > > > Anyway, without the debugfs churn it is more or less two line patch to fix > > > igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd > > > just refine it to use a GGTT check instead of GT. And unless DG1 ends up > > > being GuC only. > > > > The minimal robust fix here is imo to stop us from upcasting dma_fence to > > i915_request if it's not for our device. Not sprinkle code here into the > > semaphore code. We shouldn't even get this far with foreign fences. > > Device check does not work for multi-tile. It was one of my earlier attempts > before I realized the problem. You'll see v3 which I think handles all the > cases. There are no hw semaphores on multi-tile. But there _is_ a lot more going on than just hw semaphores that spawn driver instances. Like priority boosting, clock boosting, and all kinds of other things. I really don't think it's very robust if we play whack-a-mole here with things leaking. -Daniel > You also forgot to comment on the question lower in the email. I'll just > send a patch which removes that anyway so you can comment there. > > Regards, > > Tvrtko > > > -Daniel > > > > > > > > > - pre-gen8 semaphore code was also silently ditched and no one cared > > > > Plus removing semaphore code would greatly simplify conversion to > > > > drm/sched. 
> > > > > > > > > > --- > > > > > > drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- > > > > > > drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- > > > > > > drivers/gpu/drm/i915/i915_request.c | 12 ++++++- > > > > > > 3 files changed, 47 insertions(+), 21 deletions(-) > > > > > > > > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > index 48112b9d76df..3043fcbd31bd 100644 > > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) > > > > > > i915_gem_object_unpin_pages(obj); > > > > > > } > > > > > > -static inline struct intel_engine_cs * > > > > > > -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) > > > > > > -{ > > > > > > - struct intel_engine_cs *engine = NULL; > > > > > > - struct dma_fence *fence; > > > > > > - > > > > > > - rcu_read_lock(); > > > > > > - fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > > > - rcu_read_unlock(); > > > > > > - > > > > > > - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) > > > > > > - engine = to_request(fence)->engine; > > > > > > - dma_fence_put(fence); > > > > > > - > > > > > > - return engine; > > > > > > -} > > > > > > - > > > > > > void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, > > > > > > unsigned int cache_level); > > > > > > void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); > > > > > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > index 04351a851586..55fd6191eb32 100644 > > > > > > --- a/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) > > > > > > return "ppgtt"; > > > > > > 
} > > > > > > +static char * > > > > > > +last_write_engine(struct drm_i915_private *i915, > > > > > > + struct drm_i915_gem_object *obj) > > > > > > +{ > > > > > > + struct intel_engine_cs *engine; > > > > > > + struct dma_fence *fence; > > > > > > + char *res = NULL; > > > > > > + > > > > > > + rcu_read_lock(); > > > > > > + fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > > > + rcu_read_unlock(); > > > > > > + > > > > > > + if (!fence || dma_fence_is_signaled(fence)) > > > > > > + goto out; > > > > > > + > > > > > > + if (!dma_fence_is_i915(fence)) { > > > > > > + res = "<external-fence>"; > > > > > > + goto out; > > > > > > + } > > > > > > + > > > > > > + engine = to_request(fence)->engine; > > > > > > + if (engine->gt->i915 != i915) { > > > > > > + res = "<external-i915>"; > > > > > > + goto out; > > > > > > + } > > > > > > + > > > > > > + res = engine->name; > > > > > > + > > > > > > +out: > > > > > > + dma_fence_put(fence); > > > > > > + return res; > > > > > > +} > > > > > > + > > > > > > void > > > > > > i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > > > { > > > > > > struct drm_i915_private *dev_priv = to_i915(obj->base.dev); > > > > > > - struct intel_engine_cs *engine; > > > > > > struct i915_vma *vma; > > > > > > int pin_count = 0; > > > > > > + char *engine; > > > > > > seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", > > > > > > &obj->base, > > > > > > @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > > > if (i915_gem_object_is_framebuffer(obj)) > > > > > > seq_printf(m, " (fb)"); > > > > > > - engine = i915_gem_object_last_write_engine(obj); > > > > > > + engine = last_write_engine(dev_priv, obj); > > > > > > if (engine) > > > > > > - seq_printf(m, " (%s)", engine->name); > > > > > > + seq_printf(m, " (%s)", engine); > > > > > > > > > > Or I zap this from the code altogether. 
Not sure it is very useful since the > > > > > only caller is i915_gem_framebuffer debugfs file and how much it can care > > > > > about maybe hitting the timing window when exclusive fence will contain > > > > > something. > > > > > > > > Ideally we'd just look at the fence timeline name. But i915 has this very > > > > convoluted typesafe-by-rcu reuse which means we actually can't do that, > > > > and our fence timeline name is very useless. > > > > > > Why do we even care to output any of this here? I'd just remove it since it > > > is a very transient state with an extremely short window of opportunity to > > > make it show anything. Which I think makes it pretty useless in debugfs. > > > > > > Regards, > > > > > > Tvrtko > > > > > > > > > > > Would be good to fix that, Matt Auld has started an attempt but didn't get > > > > very far. > > > > -Daniel > > > > > > > > > > > > > > Regards, > > > > > > > > > > Tvrtko > > > > > > > > > > > } > > > > > > static int i915_gem_object_info(struct seq_file *m, void *data) > > > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > > > > > > index ce446716d092..64adf619fe82 100644 > > > > > > --- a/drivers/gpu/drm/i915/i915_request.c > > > > > > +++ b/drivers/gpu/drm/i915/i915_request.c > > > > > > @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, > > > > > > return 0; > > > > > > } > > > > > > +static bool > > > > > > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > > > > > > +{ > > > > > > + return to->engine->gt == from->engine->gt; > > > > > > +} > > > > > > + > > > > > > static int > > > > > > emit_semaphore_wait(struct i915_request *to, > > > > > > struct i915_request *from, > > > > > > @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, > > > > > > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > > > > > > struct i915_sw_fence *wait = &to->submit; > > > > > > + if (!can_use_semaphore_wait(to, 
from)) > > > > > > + goto await_fence; > > > > > > + > > > > > > if (!intel_context_use_semaphores(to->context)) > > > > > > goto await_fence; > > > > > > @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, > > > > > > * immediate execution, and so we must wait until it reaches the > > > > > > * active slot. > > > > > > */ > > > > > > - if (intel_engine_has_semaphores(to->engine) && > > > > > > + if (can_use_semaphore_wait(to, from) && > > > > > > + intel_engine_has_semaphores(to->engine) && > > > > > > !i915_request_has_initial_breadcrumb(to)) { > > > > > > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > > > > > > if (err < 0) > > > > > > > > > > > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-09-02 14:33 ` Daniel Vetter @ 2021-09-02 15:01 ` Tvrtko Ursulin 2021-09-08 17:06 ` Daniel Vetter 0 siblings, 1 reply; 24+ messages in thread From: Tvrtko Ursulin @ 2021-09-02 15:01 UTC (permalink / raw) To: Daniel Vetter; +Cc: Intel-gfx, dri-devel, Tvrtko Ursulin On 02/09/2021 15:33, Daniel Vetter wrote: > On Tue, Aug 31, 2021 at 02:18:15PM +0100, Tvrtko Ursulin wrote: >> >> On 31/08/2021 13:43, Daniel Vetter wrote: >>> On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: >>>> >>>> On 30/08/2021 09:26, Daniel Vetter wrote: >>>>> On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: >>>>>> >>>>>> On 27/08/2021 15:39, Tvrtko Ursulin wrote: >>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>>>>>> >>>>>>> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >>>>>>> when rendering is done on Intel dgfx and scanout/composition on Intel >>>>>>> igfx. >>>>>>> >>>>>>> Before this patch the driver was not quite ready for that setup, mainly >>>>>>> because it was able to emit a semaphore wait between the two GPUs, which >>>>>>> results in deadlocks because semaphore target location in HWSP is neither >>>>>>> shared between the two, nor mapped in both GGTT spaces. >>>>>>> >>>>>>> To fix it the patch adds an additional check to a couple of relevant code >>>>>>> paths in order to prevent using semaphores for inter-engine >>>>>>> synchronisation between different driver instances. >>>>>>> >>>>>>> Patch also moves singly used i915_gem_object_last_write_engine to be >>>>>>> private in its only calling unit (debugfs), while modifying it to only >>>>>>> show activity belonging to the respective driver instance. >>>>>>> >>>>>>> What remains in this problem space is the question of the GEM busy ioctl. 
>>>>>>> We have a somewhat ambigous comment there saying only status of native >>>>>>> fences will be reported, which could be interpreted as either i915, or >>>>>>> native to the drm fd. For now I have decided to leave that as is, meaning >>>>>>> any i915 instance activity continues to be reported. >>>>>>> >>>>>>> v2: >>>>>>> * Avoid adding rq->i915. (Chris) >>>>>>> >>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>>>> >>>>> Can't we just delete semaphore code and done? >>>>> - GuC won't have it >>>>> - media team benchmarked on top of softpin media driver, found no >>>>> difference >>>> >>>> You have S-curve for saturated workloads or something else? How thorough and >>>> which media team I guess. >>>> >>>> From memory it was a nice win for some benchmarks (non-saturated ones), but >>>> as I have told you previously, we haven't been putting numbers in commit >>>> messages since it wasn't allowed. I may be able to dig out some more details >>>> if I went trawling through GEM channel IRC logs, although probably not the >>>> actual numbers since those were usually on pastebin. Or you go an talk with >>>> Chris since he probably remembers more details. Or you just decide you don't >>>> care and remove it. I wouldn't do that without putting the complete story in >>>> writing, but it's your call after all. >>> >>> Media has also changed, they're not using relocations anymore. >> >> Meaning you think it changes the benchmarking story? When coupled with >> removal of GPU relocations then possibly yes. >> >>> Unless there's solid data performance tuning of any kind that gets in the >>> way simply needs to be removed. Yes this is radical, but the codebase is >>> in a state to require this. >>> >>> So either way we'd need to rebenchmark this if it's really required. Also >> >> Therefore can you share what benchmarks have been done or is it secret? As >> said, I think the non-saturated case was the more interesting one here. 
>> >>> if we really need this code still someone needs to fix the design, the >>> current code is making layering violations an art form. >>> >>>> Anyway, without the debugfs churn it is more or less two line patch to fix >>>> igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd >>>> just refine it to use a GGTT check instead of GT. And unless DG1 ends up >>>> being GuC only. >>> >>> The minimal robust fix here is imo to stop us from upcasting dma_fence to >>> i915_request if it's not for our device. Not sprinkle code here into the >>> semaphore code. We shouldn't even get this far with foreign fences. >> >> Device check does not work for multi-tile. It was one of my earlier attempts >> before I realized the problem. You'll see v3 which I think handles all the >> cases. > > There are no hw semaphores on multi-tile. You mean because of GuC? Okay, there may not be after bringup has been done. In which case an assert is needed somewhere just in case, if you are adamant not to accept this fix. It may indeed not matter hugely outside of the current transition period since I spotted patches to enable GuC on DG1. But then again it is trivial and fixes current pains for more than just me. > But there _is_ a lot more going on than just hw semaphores that spawn > driver instances. Like priority boosting, clock boosting, and all kinds of > other things. I really don't think it's very robust if we play > whack-a-mole here with things leaking. You mean span not spawn? I audited those and they look good to me. AFAIR scheduling was in fact designed with a global lock just so that works. Plus the cases you mention end up not holding pointers to "foreign" instances anyway, they just do priority inheritance. Which is probably nice not to lose if not unavoidable. >> You also forgot to comment on the question lower in the email. I'll just >> send a patch which removes that anyway so you can comment there. 
:( Regards, Tvrtko > >> Regards, >> >> Tvrtko >> >>> -Daniel >>> >>>> >>>>> - pre-gen8 semaphore code was also silently ditched and no one cared >>>>> >>>>> Plus removing semaphore code would greatly simplify conversion to >>>>> drm/sched. >>>>> >>>>>>> --- >>>>>>> drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- >>>>>>> drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- >>>>>>> drivers/gpu/drm/i915/i915_request.c | 12 ++++++- >>>>>>> 3 files changed, 47 insertions(+), 21 deletions(-) >>>>>>> >>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>> index 48112b9d76df..3043fcbd31bd 100644 >>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>> @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) >>>>>>> i915_gem_object_unpin_pages(obj); >>>>>>> } >>>>>>> -static inline struct intel_engine_cs * >>>>>>> -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) >>>>>>> -{ >>>>>>> - struct intel_engine_cs *engine = NULL; >>>>>>> - struct dma_fence *fence; >>>>>>> - >>>>>>> - rcu_read_lock(); >>>>>>> - fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>>>> - rcu_read_unlock(); >>>>>>> - >>>>>>> - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) >>>>>>> - engine = to_request(fence)->engine; >>>>>>> - dma_fence_put(fence); >>>>>>> - >>>>>>> - return engine; >>>>>>> -} >>>>>>> - >>>>>>> void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, >>>>>>> unsigned int cache_level); >>>>>>> void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); >>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>> index 04351a851586..55fd6191eb32 100644 >>>>>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>> @@ -135,13 +135,46 @@ static const char 
*stringify_vma_type(const struct i915_vma *vma) >>>>>>> return "ppgtt"; >>>>>>> } >>>>>>> +static char * >>>>>>> +last_write_engine(struct drm_i915_private *i915, >>>>>>> + struct drm_i915_gem_object *obj) >>>>>>> +{ >>>>>>> + struct intel_engine_cs *engine; >>>>>>> + struct dma_fence *fence; >>>>>>> + char *res = NULL; >>>>>>> + >>>>>>> + rcu_read_lock(); >>>>>>> + fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>>>> + rcu_read_unlock(); >>>>>>> + >>>>>>> + if (!fence || dma_fence_is_signaled(fence)) >>>>>>> + goto out; >>>>>>> + >>>>>>> + if (!dma_fence_is_i915(fence)) { >>>>>>> + res = "<external-fence>"; >>>>>>> + goto out; >>>>>>> + } >>>>>>> + >>>>>>> + engine = to_request(fence)->engine; >>>>>>> + if (engine->gt->i915 != i915) { >>>>>>> + res = "<external-i915>"; >>>>>>> + goto out; >>>>>>> + } >>>>>>> + >>>>>>> + res = engine->name; >>>>>>> + >>>>>>> +out: >>>>>>> + dma_fence_put(fence); >>>>>>> + return res; >>>>>>> +} >>>>>>> + >>>>>>> void >>>>>>> i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>>>> { >>>>>>> struct drm_i915_private *dev_priv = to_i915(obj->base.dev); >>>>>>> - struct intel_engine_cs *engine; >>>>>>> struct i915_vma *vma; >>>>>>> int pin_count = 0; >>>>>>> + char *engine; >>>>>>> seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", >>>>>>> &obj->base, >>>>>>> @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>>>> if (i915_gem_object_is_framebuffer(obj)) >>>>>>> seq_printf(m, " (fb)"); >>>>>>> - engine = i915_gem_object_last_write_engine(obj); >>>>>>> + engine = last_write_engine(dev_priv, obj); >>>>>>> if (engine) >>>>>>> - seq_printf(m, " (%s)", engine->name); >>>>>>> + seq_printf(m, " (%s)", engine); >>>>>> >>>>>> Or I zap this from the code altogether. 
Not sure it is very useful since the >>>>>> only caller is i915_gem_framebuffer debugfs file and how much it can care >>>>>> about maybe hitting the timing window when exclusive fence will contain >>>>>> something. >>>>> >>>>> Ideally we'd just look at the fence timeline name. But i915 has this very >>>>> convoluted typesafe-by-rcu reuse which means we actually can't do that, >>>>> and our fence timeline name is very useless. >>>> >>>> Why do we even care to output any of this here? I'd just remove it since it >>>> is a very transient state with an extremely short window of opportunity to >>>> make it show anything. Which I think makes it pretty useless in debugfs. >>>> >>>> Regards, >>>> >>>> Tvrtko >>>> >>>>> >>>>> Would be good to fix that, Matt Auld has started an attempt but didn't get >>>>> very far. >>>>> -Daniel >>>>> >>>>>> >>>>>> Regards, >>>>>> >>>>>> Tvrtko >>>>>> >>>>>>> } >>>>>>> static int i915_gem_object_info(struct seq_file *m, void *data) >>>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c >>>>>>> index ce446716d092..64adf619fe82 100644 >>>>>>> --- a/drivers/gpu/drm/i915/i915_request.c >>>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c >>>>>>> @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, >>>>>>> return 0; >>>>>>> } >>>>>>> +static bool >>>>>>> +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) >>>>>>> +{ >>>>>>> + return to->engine->gt == from->engine->gt; >>>>>>> +} >>>>>>> + >>>>>>> static int >>>>>>> emit_semaphore_wait(struct i915_request *to, >>>>>>> struct i915_request *from, >>>>>>> @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, >>>>>>> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >>>>>>> struct i915_sw_fence *wait = &to->submit; >>>>>>> + if (!can_use_semaphore_wait(to, from)) >>>>>>> + goto await_fence; >>>>>>> + >>>>>>> if (!intel_context_use_semaphores(to->context)) >>>>>>> goto await_fence; >>>>>>> @@ 
-1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, >>>>>>> * immediate execution, and so we must wait until it reaches the >>>>>>> * active slot. >>>>>>> */ >>>>>>> - if (intel_engine_has_semaphores(to->engine) && >>>>>>> + if (can_use_semaphore_wait(to, from) && >>>>>>> + intel_engine_has_semaphores(to->engine) && >>>>>>> !i915_request_has_initial_breadcrumb(to)) { >>>>>>> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >>>>>>> if (err < 0) >>>>>>> >>>>> >>> >
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-09-02 15:01 ` Tvrtko Ursulin @ 2021-09-08 17:06 ` Daniel Vetter 2021-09-09 8:26 ` Tvrtko Ursulin 0 siblings, 1 reply; 24+ messages in thread From: Daniel Vetter @ 2021-09-08 17:06 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: Daniel Vetter, Intel-gfx, dri-devel, Tvrtko Ursulin On Thu, Sep 02, 2021 at 04:01:40PM +0100, Tvrtko Ursulin wrote: > > On 02/09/2021 15:33, Daniel Vetter wrote: > > On Tue, Aug 31, 2021 at 02:18:15PM +0100, Tvrtko Ursulin wrote: > > > > > > On 31/08/2021 13:43, Daniel Vetter wrote: > > > > On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: > > > > > > > > > > On 30/08/2021 09:26, Daniel Vetter wrote: > > > > > > On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: > > > > > > > > > > > > > > On 27/08/2021 15:39, Tvrtko Ursulin wrote: > > > > > > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > > > > > > > > > > > > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > > > > > > > > when rendering is done on Intel dgfx and scanout/composition on Intel > > > > > > > > igfx. > > > > > > > > > > > > > > > > Before this patch the driver was not quite ready for that setup, mainly > > > > > > > > because it was able to emit a semaphore wait between the two GPUs, which > > > > > > > > results in deadlocks because semaphore target location in HWSP is neither > > > > > > > > shared between the two, nor mapped in both GGTT spaces. > > > > > > > > > > > > > > > > To fix it the patch adds an additional check to a couple of relevant code > > > > > > > > paths in order to prevent using semaphores for inter-engine > > > > > > > > synchronisation between different driver instances. 
> > > > > > > > > > > > > > > > Patch also moves singly used i915_gem_object_last_write_engine to be > > > > > > > > private in its only calling unit (debugfs), while modifying it to only > > > > > > > > show activity belonging to the respective driver instance. > > > > > > > > > > > > > > > > What remains in this problem space is the question of the GEM busy ioctl. > > > > > > > > We have a somewhat ambigous comment there saying only status of native > > > > > > > > fences will be reported, which could be interpreted as either i915, or > > > > > > > > native to the drm fd. For now I have decided to leave that as is, meaning > > > > > > > > any i915 instance activity continues to be reported. > > > > > > > > > > > > > > > > v2: > > > > > > > > * Avoid adding rq->i915. (Chris) > > > > > > > > > > > > > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > > > > > > > > > Can't we just delete semaphore code and done? > > > > > > - GuC won't have it > > > > > > - media team benchmarked on top of softpin media driver, found no > > > > > > difference > > > > > > > > > > You have S-curve for saturated workloads or something else? How thorough and > > > > > which media team I guess. > > > > > > > > > > From memory it was a nice win for some benchmarks (non-saturated ones), but > > > > > as I have told you previously, we haven't been putting numbers in commit > > > > > messages since it wasn't allowed. I may be able to dig out some more details > > > > > if I went trawling through GEM channel IRC logs, although probably not the > > > > > actual numbers since those were usually on pastebin. Or you go an talk with > > > > > Chris since he probably remembers more details. Or you just decide you don't > > > > > care and remove it. I wouldn't do that without putting the complete story in > > > > > writing, but it's your call after all. > > > > > > > > Media has also changed, they're not using relocations anymore. 
> > > > > > Meaning you think it changes the benchmarking story? When coupled with > > > removal of GPU relocations then possibly yes. > > > > > > > Unless there's solid data performance tuning of any kind that gets in the > > > > way simply needs to be removed. Yes this is radical, but the codebase is > > > > in a state to require this. > > > > > > > > So either way we'd need to rebenchmark this if it's really required. Also > > > > > > Therefore can you share what benchmarks have been done or is it secret? As > > > said, I think the non-saturated case was the more interesting one here. > > > > > > > if we really need this code still someone needs to fix the design, the > > > > current code is making layering violations an art form. > > > > > > > > > Anyway, without the debugfs churn it is more or less two line patch to fix > > > > > igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd > > > > > just refine it to use a GGTT check instead of GT. And unless DG1 ends up > > > > > being GuC only. > > > > > > > > The minimal robust fix here is imo to stop us from upcasting dma_fence to > > > > i915_request if it's not for our device. Not sprinkle code here into the > > > > semaphore code. We shouldn't even get this far with foreign fences. > > > > > > Device check does not work for multi-tile. It was one of my earlier attempts > > > before I realized the problem. You'll see v3 which I think handles all the > > > cases. > > > > There is no hw semaphores on multi-tile. > > You mean because of GuC? Okay, there may not be after bringup has been done. > In which case an assert is needed somewhere just in case, if you are adamant > not to accept this fix. It may indeed not matter hugely outside of the > current transition period since I spotted patches to enable GuC on DG1. But > then again it is trivial and fixes current pains for more than just me. > > > But there _is_ a lot more going on than just hw semaphores that spawn > > driver instances. 
Like priority boosting, clock boosting, and all kinds of > > other things. I really dont' think it's very robust if we play > > whack-a-mole here with things leaking. > > You mean span not spawn? I audited those and they looks good to me. AFAIR > scheduling was in fact designed with a global lock just so that works. Plus > the cases you mention end up not holding pointers to "foreign" instances > anyway, they just do priority inheritance. Which is probably nice not to > lose if not unavoidable. Yup span. I just think the defensive approach is better, especially since we're planning to rework the scheduler area massively anyway. I'm also worried about what happens when people combine a random igfx driver from upstream with some dgpu backport, the combinatorial explosion is nasty. Hence stopping any possible issues in dma_fence_is_i915 sounds a lot safer. E.g. just a quick grep says that the engine mask the busy ioctl returns is nonsense on shared buffers with multiple i915 instances present. Probably doesn't matter, but who knows. That was just the first one. So the bullet proof way here I think is: - change dma_fence_is_i915 to limit to our device - use to_request in hw semaphore code too If we later on have a need for sharing information across drivers through dma_fence, we can then properly engineer an interface. And likely in dma-fence.h, not somewhere in i915 code. We already have a ton of i915-isms in that area, baking in a lot more with potential uapi impact does not sound like a good plan. -Daniel > > > You also forgot to comment on the question lower in the email. I'll just > > > send a patch which removes that anyway so you can comment there. > > :( > > Regards, > > Tvrtko > > > > > > Regards, > > > > > > Tvrtko > > > > > > > -Daniel > > > > > > > > > > > > > > > - pre-gen8 semaphore code was also silently ditched and no one cared > > > > > > > > > > > > Plus removing semaphore code would greatly simplify conversion to > > > > > > drm/sched. 
> > > > > > > > > > > > > > --- > > > > > > > > drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- > > > > > > > > drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- > > > > > > > > drivers/gpu/drm/i915/i915_request.c | 12 ++++++- > > > > > > > > 3 files changed, 47 insertions(+), 21 deletions(-) > > > > > > > > > > > > > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > > > index 48112b9d76df..3043fcbd31bd 100644 > > > > > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h > > > > > > > > @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) > > > > > > > > i915_gem_object_unpin_pages(obj); > > > > > > > > } > > > > > > > > -static inline struct intel_engine_cs * > > > > > > > > -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) > > > > > > > > -{ > > > > > > > > - struct intel_engine_cs *engine = NULL; > > > > > > > > - struct dma_fence *fence; > > > > > > > > - > > > > > > > > - rcu_read_lock(); > > > > > > > > - fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > > > > > - rcu_read_unlock(); > > > > > > > > - > > > > > > > > - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) > > > > > > > > - engine = to_request(fence)->engine; > > > > > > > > - dma_fence_put(fence); > > > > > > > > - > > > > > > > > - return engine; > > > > > > > > -} > > > > > > > > - > > > > > > > > void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, > > > > > > > > unsigned int cache_level); > > > > > > > > void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); > > > > > > > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > > > index 04351a851586..55fd6191eb32 100644 > > > > > > > > --- a/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > > > +++ 
b/drivers/gpu/drm/i915/i915_debugfs.c > > > > > > > > @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) > > > > > > > > return "ppgtt"; > > > > > > > > } > > > > > > > > +static char * > > > > > > > > +last_write_engine(struct drm_i915_private *i915, > > > > > > > > + struct drm_i915_gem_object *obj) > > > > > > > > +{ > > > > > > > > + struct intel_engine_cs *engine; > > > > > > > > + struct dma_fence *fence; > > > > > > > > + char *res = NULL; > > > > > > > > + > > > > > > > > + rcu_read_lock(); > > > > > > > > + fence = dma_resv_get_excl_unlocked(obj->base.resv); > > > > > > > > + rcu_read_unlock(); > > > > > > > > + > > > > > > > > + if (!fence || dma_fence_is_signaled(fence)) > > > > > > > > + goto out; > > > > > > > > + > > > > > > > > + if (!dma_fence_is_i915(fence)) { > > > > > > > > + res = "<external-fence>"; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + > > > > > > > > + engine = to_request(fence)->engine; > > > > > > > > + if (engine->gt->i915 != i915) { > > > > > > > > + res = "<external-i915>"; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + > > > > > > > > + res = engine->name; > > > > > > > > + > > > > > > > > +out: > > > > > > > > + dma_fence_put(fence); > > > > > > > > + return res; > > > > > > > > +} > > > > > > > > + > > > > > > > > void > > > > > > > > i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > > > > > { > > > > > > > > struct drm_i915_private *dev_priv = to_i915(obj->base.dev); > > > > > > > > - struct intel_engine_cs *engine; > > > > > > > > struct i915_vma *vma; > > > > > > > > int pin_count = 0; > > > > > > > > + char *engine; > > > > > > > > seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", > > > > > > > > &obj->base, > > > > > > > > @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) > > > > > > > > if (i915_gem_object_is_framebuffer(obj)) > > > > > > > > seq_printf(m, " 
(fb)"); > > > > > > > > - engine = i915_gem_object_last_write_engine(obj); > > > > > > > > + engine = last_write_engine(dev_priv, obj); > > > > > > > > if (engine) > > > > > > > > - seq_printf(m, " (%s)", engine->name); > > > > > > > > + seq_printf(m, " (%s)", engine); > > > > > > > > > > > > > > Or I zap this from the code altogether. Not sure it is very useful since the > > > > > > > only caller is i915_gem_framebuffer debugfs file and how much it can care > > > > > > > about maybe hitting the timing window when exclusive fence will contain > > > > > > > something. > > > > > > > > > > > > Ideally we'd just look at the fence timeline name. But i915 has this very > > > > > > convoluted typesafe-by-rcu reuse which means we actually can't do that, > > > > > > and our fence timeline name is very useless. > > > > > > > > > > Why do we even care to output any of this here? I'd just remove it since it > > > > > is a very transient state with an extremely short window of opportunity to > > > > > make it show anything. Which I think makes it pretty useless in debugfs. > > > > > > > > > > Regards, > > > > > > > > > > Tvrtko > > > > > > > > > > > > > > > > > Would be good to fix that, Matt Auld has started an attempt but didn't get > > > > > > very far. 
> > > > > > -Daniel > > > > > > > > > > > > > > > > > > > > Regards, > > > > > > > > > > > > > > Tvrtko > > > > > > > > > > > > > > > } > > > > > > > > static int i915_gem_object_info(struct seq_file *m, void *data) > > > > > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > > > > > > > > index ce446716d092..64adf619fe82 100644 > > > > > > > > --- a/drivers/gpu/drm/i915/i915_request.c > > > > > > > > +++ b/drivers/gpu/drm/i915/i915_request.c > > > > > > > > @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, > > > > > > > > return 0; > > > > > > > > } > > > > > > > > +static bool > > > > > > > > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > > > > > > > > +{ > > > > > > > > + return to->engine->gt == from->engine->gt; > > > > > > > > +} > > > > > > > > + > > > > > > > > static int > > > > > > > > emit_semaphore_wait(struct i915_request *to, > > > > > > > > struct i915_request *from, > > > > > > > > @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, > > > > > > > > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > > > > > > > > struct i915_sw_fence *wait = &to->submit; > > > > > > > > + if (!can_use_semaphore_wait(to, from)) > > > > > > > > + goto await_fence; > > > > > > > > + > > > > > > > > if (!intel_context_use_semaphores(to->context)) > > > > > > > > goto await_fence; > > > > > > > > @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, > > > > > > > > * immediate execution, and so we must wait until it reaches the > > > > > > > > * active slot. 
> > > > > > > > */ > > > > > > > > - if (intel_engine_has_semaphores(to->engine) && > > > > > > > > + if (can_use_semaphore_wait(to, from) && > > > > > > > > + intel_engine_has_semaphores(to->engine) && > > > > > > > > !i915_request_has_initial_breadcrumb(to)) { > > > > > > > > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > > > > > > > > if (err < 0) > > > > > > > > > > > > > > > > > > > > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 24+ messages in thread
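The can_use_semaphore_wait() gate added by the patch above is small enough to model in isolation. The following is a hedged, self-contained sketch — the mock_* types are invented stand-ins for this illustration, not the real intel_gt / intel_engine_cs / i915_request definitions — showing why comparing GT pointers blocks the deadlocking cross-device case while leaving same-GT semaphore waits untouched:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock stand-ins for the driver structures; invented for this sketch. */
struct mock_gt { int id; };                      /* one per GPU (or tile) */
struct mock_engine { struct mock_gt *gt; };
struct mock_request { struct mock_engine *engine; };

/*
 * The gate from the patch: a hardware semaphore wait is only safe when
 * both requests execute on the same GT, because the semaphore target in
 * the HWSP is neither shared between driver instances nor mapped in both
 * GGTT spaces. Cross-GT dependencies must fall back to a fence wait.
 */
static bool can_use_semaphore_wait(const struct mock_request *to,
				   const struct mock_request *from)
{
	return to->engine->gt == from->engine->gt;
}
```

A dependency between two engines of the same GPU still qualifies for the semaphore fast path; only the igfx/dgfx (and, in principle, any tile-crossing) case is routed to the ordinary await_fence path.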
* Re: [Intel-gfx] [PATCH v2] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-09-08 17:06 ` Daniel Vetter @ 2021-09-09 8:26 ` Tvrtko Ursulin 0 siblings, 0 replies; 24+ messages in thread From: Tvrtko Ursulin @ 2021-09-09 8:26 UTC (permalink / raw) To: Daniel Vetter; +Cc: Intel-gfx, dri-devel, Tvrtko Ursulin On 08/09/2021 18:06, Daniel Vetter wrote: > On Thu, Sep 02, 2021 at 04:01:40PM +0100, Tvrtko Ursulin wrote: >> >> On 02/09/2021 15:33, Daniel Vetter wrote: >>> On Tue, Aug 31, 2021 at 02:18:15PM +0100, Tvrtko Ursulin wrote: >>>> >>>> On 31/08/2021 13:43, Daniel Vetter wrote: >>>>> On Tue, Aug 31, 2021 at 10:15:03AM +0100, Tvrtko Ursulin wrote: >>>>>> >>>>>> On 30/08/2021 09:26, Daniel Vetter wrote: >>>>>>> On Fri, Aug 27, 2021 at 03:44:42PM +0100, Tvrtko Ursulin wrote: >>>>>>>> >>>>>>>> On 27/08/2021 15:39, Tvrtko Ursulin wrote: >>>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>>>>>>>> >>>>>>>>> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >>>>>>>>> when rendering is done on Intel dgfx and scanout/composition on Intel >>>>>>>>> igfx. >>>>>>>>> >>>>>>>>> Before this patch the driver was not quite ready for that setup, mainly >>>>>>>>> because it was able to emit a semaphore wait between the two GPUs, which >>>>>>>>> results in deadlocks because semaphore target location in HWSP is neither >>>>>>>>> shared between the two, nor mapped in both GGTT spaces. >>>>>>>>> >>>>>>>>> To fix it the patch adds an additional check to a couple of relevant code >>>>>>>>> paths in order to prevent using semaphores for inter-engine >>>>>>>>> synchronisation between different driver instances. >>>>>>>>> >>>>>>>>> Patch also moves singly used i915_gem_object_last_write_engine to be >>>>>>>>> private in its only calling unit (debugfs), while modifying it to only >>>>>>>>> show activity belonging to the respective driver instance. >>>>>>>>> >>>>>>>>> What remains in this problem space is the question of the GEM busy ioctl. 
>>>>>>>>> We have a somewhat ambigous comment there saying only status of native >>>>>>>>> fences will be reported, which could be interpreted as either i915, or >>>>>>>>> native to the drm fd. For now I have decided to leave that as is, meaning >>>>>>>>> any i915 instance activity continues to be reported. >>>>>>>>> >>>>>>>>> v2: >>>>>>>>> * Avoid adding rq->i915. (Chris) >>>>>>>>> >>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>>>>>> >>>>>>> Can't we just delete semaphore code and done? >>>>>>> - GuC won't have it >>>>>>> - media team benchmarked on top of softpin media driver, found no >>>>>>> difference >>>>>> >>>>>> You have S-curve for saturated workloads or something else? How thorough and >>>>>> which media team I guess. >>>>>> >>>>>> From memory it was a nice win for some benchmarks (non-saturated ones), but >>>>>> as I have told you previously, we haven't been putting numbers in commit >>>>>> messages since it wasn't allowed. I may be able to dig out some more details >>>>>> if I went trawling through GEM channel IRC logs, although probably not the >>>>>> actual numbers since those were usually on pastebin. Or you go an talk with >>>>>> Chris since he probably remembers more details. Or you just decide you don't >>>>>> care and remove it. I wouldn't do that without putting the complete story in >>>>>> writing, but it's your call after all. >>>>> >>>>> Media has also changed, they're not using relocations anymore. >>>> >>>> Meaning you think it changes the benchmarking story? When coupled with >>>> removal of GPU relocations then possibly yes. >>>> >>>>> Unless there's solid data performance tuning of any kind that gets in the >>>>> way simply needs to be removed. Yes this is radical, but the codebase is >>>>> in a state to require this. >>>>> >>>>> So either way we'd need to rebenchmark this if it's really required. Also >>>> >>>> Therefore can you share what benchmarks have been done or is it secret? 
As >>>> said, I think the non-saturated case was the more interesting one here. >>>> >>>>> if we really need this code still someone needs to fix the design, the >>>>> current code is making layering violations an art form. >>>>> >>>>>> Anyway, without the debugfs churn it is more or less two line patch to fix >>>>>> igfx + dgfx hybrid setup. So while mulling it over this could go in. I'd >>>>>> just refine it to use a GGTT check instead of GT. And unless DG1 ends up >>>>>> being GuC only. >>>>> >>>>> The minimal robust fix here is imo to stop us from upcasting dma_fence to >>>>> i915_request if it's not for our device. Not sprinkle code here into the >>>>> semaphore code. We shouldn't even get this far with foreign fences. >>>> >>>> Device check does not work for multi-tile. It was one of my earlier attempts >>>> before I realized the problem. You'll see v3 which I think handles all the >>>> cases. >>> >>> There is no hw semaphores on multi-tile. >> >> You mean because of GuC? Okay, there may not be after bringup has been done. >> In which case an assert is needed somewhere just in case, if you are adamant >> not to accept this fix. It may indeed not matter hugely outside of the >> current transition period since I spotted patches to enable GuC on DG1. But >> then again it is trivial and fixes current pains for more than just me. >> >>> But there _is_ a lot more going on than just hw semaphores that spawn >>> driver instances. Like priority boosting, clock boosting, and all kinds of >>> other things. I really dont' think it's very robust if we play >>> whack-a-mole here with things leaking. >> >> You mean span not spawn? I audited those and they looks good to me. AFAIR >> scheduling was in fact designed with a global lock just so that works. Plus >> the cases you mention end up not holding pointers to "foreign" instances >> anyway, they just do priority inheritance. Which is probably nice not to >> lose if not unavoidable. > > Yup span. 
I just think the defensive approach is better, especially since > we're planning to rework the scheduler area massively anyway. > > I'm also worried about what happens when people combine a random igfx > driver from upstream with some dgpu backport, the combinatorial explosion > is nasty. Hence stopping any possible issues in dma_fence_is_i915 sounds a > lot safer. > > E.g. just a quick grep says that the engine mask the busy ioctl returns is > nonsense on shared buffers with multiple i915 instances present. Probably > doesn't matter, but who knows. That was just the first one. I had a version which deals with the busy ioctl on trybot early on, since I was thinking along similar lines. But then I realised the wording of the comment in there actually leaves space for interpretation. And that actually reporting more, rather than less, activity makes sense. And it's completely safe. Not sure why you declare it "nonsense" since you did not really explain. > So the bullet proof way here I think is: > - change dma_fence_is_i915 to limit to our device > - use to_request in hw semaphore code too > > If we later on have a need for sharing information across drivers through > dma_fence, we can then properly engineer an interface. And likely in > dma-fence.h, not somewhere in i915 code. We already have a ton of > i915-isms in that area, baking in a lot more with potential uapi impact > does not sound like a good plan. I think the patch is a simple and clear fix and it saves people from hitting the same problem. It certainly does not make anything worse, so my 2c is that it should go in while follow-up work is discussed. Regards, Tvrtko > -Daniel > >>>> You also forgot to comment on the question lower in the email. I'll just >>>> send a patch which removes that anyway so you can comment there. 
>> >> :( >> >> Regards, >> >> Tvrtko >> >>> >>>> Regards, >>>> >>>> Tvrtko >>>> >>>>> -Daniel >>>>> >>>>>> >>>>>>> - pre-gen8 semaphore code was also silently ditched and no one cared >>>>>>> >>>>>>> Plus removing semaphore code would greatly simplify conversion to >>>>>>> drm/sched. >>>>>>> >>>>>>>>> --- >>>>>>>>> drivers/gpu/drm/i915/gem/i915_gem_object.h | 17 ---------- >>>>>>>>> drivers/gpu/drm/i915/i915_debugfs.c | 39 ++++++++++++++++++++-- >>>>>>>>> drivers/gpu/drm/i915/i915_request.c | 12 ++++++- >>>>>>>>> 3 files changed, 47 insertions(+), 21 deletions(-) >>>>>>>>> >>>>>>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>>>> index 48112b9d76df..3043fcbd31bd 100644 >>>>>>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h >>>>>>>>> @@ -503,23 +503,6 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj) >>>>>>>>> i915_gem_object_unpin_pages(obj); >>>>>>>>> } >>>>>>>>> -static inline struct intel_engine_cs * >>>>>>>>> -i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) >>>>>>>>> -{ >>>>>>>>> - struct intel_engine_cs *engine = NULL; >>>>>>>>> - struct dma_fence *fence; >>>>>>>>> - >>>>>>>>> - rcu_read_lock(); >>>>>>>>> - fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>>>>>> - rcu_read_unlock(); >>>>>>>>> - >>>>>>>>> - if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) >>>>>>>>> - engine = to_request(fence)->engine; >>>>>>>>> - dma_fence_put(fence); >>>>>>>>> - >>>>>>>>> - return engine; >>>>>>>>> -} >>>>>>>>> - >>>>>>>>> void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, >>>>>>>>> unsigned int cache_level); >>>>>>>>> void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj); >>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>>>> index 04351a851586..55fd6191eb32 100644 >>>>>>>>> --- 
a/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c >>>>>>>>> @@ -135,13 +135,46 @@ static const char *stringify_vma_type(const struct i915_vma *vma) >>>>>>>>> return "ppgtt"; >>>>>>>>> } >>>>>>>>> +static char * >>>>>>>>> +last_write_engine(struct drm_i915_private *i915, >>>>>>>>> + struct drm_i915_gem_object *obj) >>>>>>>>> +{ >>>>>>>>> + struct intel_engine_cs *engine; >>>>>>>>> + struct dma_fence *fence; >>>>>>>>> + char *res = NULL; >>>>>>>>> + >>>>>>>>> + rcu_read_lock(); >>>>>>>>> + fence = dma_resv_get_excl_unlocked(obj->base.resv); >>>>>>>>> + rcu_read_unlock(); >>>>>>>>> + >>>>>>>>> + if (!fence || dma_fence_is_signaled(fence)) >>>>>>>>> + goto out; >>>>>>>>> + >>>>>>>>> + if (!dma_fence_is_i915(fence)) { >>>>>>>>> + res = "<external-fence>"; >>>>>>>>> + goto out; >>>>>>>>> + } >>>>>>>>> + >>>>>>>>> + engine = to_request(fence)->engine; >>>>>>>>> + if (engine->gt->i915 != i915) { >>>>>>>>> + res = "<external-i915>"; >>>>>>>>> + goto out; >>>>>>>>> + } >>>>>>>>> + >>>>>>>>> + res = engine->name; >>>>>>>>> + >>>>>>>>> +out: >>>>>>>>> + dma_fence_put(fence); >>>>>>>>> + return res; >>>>>>>>> +} >>>>>>>>> + >>>>>>>>> void >>>>>>>>> i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>>>>>> { >>>>>>>>> struct drm_i915_private *dev_priv = to_i915(obj->base.dev); >>>>>>>>> - struct intel_engine_cs *engine; >>>>>>>>> struct i915_vma *vma; >>>>>>>>> int pin_count = 0; >>>>>>>>> + char *engine; >>>>>>>>> seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", >>>>>>>>> &obj->base, >>>>>>>>> @@ -230,9 +263,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) >>>>>>>>> if (i915_gem_object_is_framebuffer(obj)) >>>>>>>>> seq_printf(m, " (fb)"); >>>>>>>>> - engine = i915_gem_object_last_write_engine(obj); >>>>>>>>> + engine = last_write_engine(dev_priv, obj); >>>>>>>>> if (engine) >>>>>>>>> - seq_printf(m, " (%s)", engine->name); >>>>>>>>> + seq_printf(m, " (%s)", engine); 
>>>>>>>> >>>>>>>> Or I zap this from the code altogether. Not sure it is very useful since the >>>>>>>> only caller is i915_gem_framebuffer debugfs file and how much it can care >>>>>>>> about maybe hitting the timing window when exclusive fence will contain >>>>>>>> something. >>>>>>> >>>>>>> Ideally we'd just look at the fence timeline name. But i915 has this very >>>>>>> convoluted typesafe-by-rcu reuse which means we actually can't do that, >>>>>>> and our fence timeline name is very useless. >>>>>> >>>>>> Why do we even care to output any of this here? I'd just remove it since it >>>>>> is a very transient state with an extremely short window of opportunity to >>>>>> make it show anything. Which I think makes it pretty useless in debugfs. >>>>>> >>>>>> Regards, >>>>>> >>>>>> Tvrtko >>>>>> >>>>>>> >>>>>>> Would be good to fix that, Matt Auld has started an attempt but didn't get >>>>>>> very far. >>>>>>> -Daniel >>>>>>> >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> Tvrtko >>>>>>>> >>>>>>>>> } >>>>>>>>> static int i915_gem_object_info(struct seq_file *m, void *data) >>>>>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c >>>>>>>>> index ce446716d092..64adf619fe82 100644 >>>>>>>>> --- a/drivers/gpu/drm/i915/i915_request.c >>>>>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c >>>>>>>>> @@ -1152,6 +1152,12 @@ __emit_semaphore_wait(struct i915_request *to, >>>>>>>>> return 0; >>>>>>>>> } >>>>>>>>> +static bool >>>>>>>>> +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) >>>>>>>>> +{ >>>>>>>>> + return to->engine->gt == from->engine->gt; >>>>>>>>> +} >>>>>>>>> + >>>>>>>>> static int >>>>>>>>> emit_semaphore_wait(struct i915_request *to, >>>>>>>>> struct i915_request *from, >>>>>>>>> @@ -1160,6 +1166,9 @@ emit_semaphore_wait(struct i915_request *to, >>>>>>>>> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >>>>>>>>> struct i915_sw_fence *wait = &to->submit; >>>>>>>>> + if 
(!can_use_semaphore_wait(to, from)) >>>>>>>>> + goto await_fence; >>>>>>>>> + >>>>>>>>> if (!intel_context_use_semaphores(to->context)) >>>>>>>>> goto await_fence; >>>>>>>>> @@ -1263,7 +1272,8 @@ __i915_request_await_execution(struct i915_request *to, >>>>>>>>> * immediate execution, and so we must wait until it reaches the >>>>>>>>> * active slot. >>>>>>>>> */ >>>>>>>>> - if (intel_engine_has_semaphores(to->engine) && >>>>>>>>> + if (can_use_semaphore_wait(to, from) && >>>>>>>>> + intel_engine_has_semaphores(to->engine) && >>>>>>>>> !i915_request_has_initial_breadcrumb(to)) { >>>>>>>>> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >>>>>>>>> if (err < 0) >>>>>>>>> >>>>>>> >>>>> >>> > ^ permalink raw reply [flat|nested] 24+ messages in thread
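Daniel's alternative above ("change dma_fence_is_i915 to limit to our device") can likewise be sketched with mock types. Everything below is hypothetical illustration — mock_fence, the ops table and the device field are invented for this sketch, and, as Tvrtko notes in the thread, a plain device comparison does not cover multi-tile parts — but it shows the shape of gating the upcast at the dma_fence_is_i915() boundary rather than inside the semaphore code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented stand-ins; not the real dma_fence / i915 definitions. */
struct mock_device { int id; };

struct mock_fence_ops { const char *driver; };
static const struct mock_fence_ops mock_i915_ops = { .driver = "i915" };
static const struct mock_fence_ops mock_other_ops = { .driver = "other" };

struct mock_fence {
	const struct mock_fence_ops *ops;
	struct mock_device *dev;   /* owning device, assumed reachable here */
};

/* Current shape of the check: "is this any i915 fence?" */
static bool fence_is_i915(const struct mock_fence *f)
{
	return f->ops == &mock_i915_ops;
}

/*
 * Proposed shape: only treat the fence as upcastable to a request when
 * it belongs to *this* driver instance, so foreign-instance fences never
 * reach the semaphore (or any other request-peeking) code.
 */
static bool fence_is_our_i915(const struct mock_fence *f,
			      const struct mock_device *self)
{
	return fence_is_i915(f) && f->dev == self;
}
```

With this shape the semaphore paths need no extra gate of their own: a fence from the other GPU simply stops looking like "ours" one layer earlier.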
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin ` (3 preceding siblings ...) (?) @ 2021-08-27 15:03 ` Patchwork -1 siblings, 0 replies; 24+ messages in thread From: Patchwork @ 2021-08-27 15:03 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: intel-gfx == Series Details == Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) URL : https://patchwork.freedesktop.org/series/94105/ State : warning == Summary == $ dim checkpatch origin/drm-tip a8e23cbe36b7 drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup -:25: WARNING:TYPO_SPELLING: 'ambigous' may be misspelled - perhaps 'ambiguous'? #25: We have a somewhat ambigous comment there saying only status of native ^^^^^^^^ total: 0 errors, 1 warnings, 0 checks, 111 lines checked ^ permalink raw reply [flat|nested] 24+ messages in thread
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin ` (4 preceding siblings ...) (?) @ 2021-08-27 15:34 ` Patchwork -1 siblings, 0 replies; 24+ messages in thread From: Patchwork @ 2021-08-27 15:34 UTC (permalink / raw) To: Tvrtko Ursulin; +Cc: intel-gfx [-- Attachment #1: Type: text/plain, Size: 6103 bytes --] == Series Details == Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) URL : https://patchwork.freedesktop.org/series/94105/ State : success == Summary == CI Bug Log - changes from CI_DRM_10530 -> Patchwork_20910 ==================================================== Summary ------- **SUCCESS** No regressions found. External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/index.html Known issues ------------ Here are the changes found in Patchwork_20910 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@amdgpu/amd_basic@cs-gfx: - fi-kbl-soraka: NOTRUN -> [SKIP][1] ([fdo#109271]) +16 similar issues [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-kbl-soraka/igt@amdgpu/amd_basic@cs-gfx.html * igt@core_hotunplug@unbind-rebind: - fi-rkl-guc: [PASS][2] -> [DMESG-WARN][3] ([i915#3925]) [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html * igt@gem_exec_suspend@basic-s0: - fi-tgl-1115g4: NOTRUN -> [DMESG-FAIL][4] ([i915#1888]) [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s0.html * igt@gem_huc_copy@huc-copy: - fi-tgl-1115g4: NOTRUN -> [SKIP][5] ([i915#2190]) [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@gem_huc_copy@huc-copy.html * igt@i915_pm_backlight@basic-brightness: - fi-tgl-1115g4: NOTRUN -> [SKIP][6] ([i915#1155]) [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@i915_pm_backlight@basic-brightness.html * igt@i915_pm_rpm@module-reload: - fi-tgl-1115g4: NOTRUN -> [INCOMPLETE][7] ([i915#4006]) [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@i915_pm_rpm@module-reload.html * igt@kms_addfb_basic@too-wide: - fi-tgl-1115g4: NOTRUN -> [DMESG-WARN][8] ([i915#4002]) +88 similar issues [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@kms_addfb_basic@too-wide.html * igt@kms_chamelium@common-hpd-after-suspend: - fi-tgl-1115g4: NOTRUN -> [SKIP][9] ([fdo#111827]) +8 similar issues [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@kms_chamelium@common-hpd-after-suspend.html * igt@kms_force_connector_basic@force-load-detect: - fi-tgl-1115g4: NOTRUN -> [SKIP][10] ([fdo#109285]) [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@kms_force_connector_basic@force-load-detect.html * igt@kms_psr@primary_mmap_gtt: - fi-tgl-1115g4: NOTRUN -> [SKIP][11] ([i915#1072]) +2 similar issues [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@kms_psr@primary_mmap_gtt.html * igt@kms_psr@primary_page_flip: - fi-tgl-1115g4: NOTRUN -> [SKIP][12] ([i915#1072] / [i915#1385]) [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@kms_psr@primary_page_flip.html * igt@prime_vgem@basic-userptr: - fi-tgl-1115g4: NOTRUN -> [SKIP][13] ([i915#3301]) [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@prime_vgem@basic-userptr.html * igt@runner@aborted: - fi-rkl-guc: NOTRUN -> [FAIL][14] ([i915#1602]) [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-rkl-guc/igt@runner@aborted.html - fi-tgl-1115g4: NOTRUN -> [FAIL][15] ([i915#2722] / [i915#3834]) [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-tgl-1115g4/igt@runner@aborted.html #### Possible fixes #### * 
igt@kms_chamelium@hdmi-hpd-fast: - fi-icl-u2: [DMESG-WARN][16] ([i915#2203] / [i915#2868]) -> [PASS][17] [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271 [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285 [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827 [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072 [i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155 [i915#1385]: https://gitlab.freedesktop.org/drm/intel/issues/1385 [i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602 [i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888 [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190 [i915#2203]: https://gitlab.freedesktop.org/drm/intel/issues/2203 [i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722 [i915#2868]: https://gitlab.freedesktop.org/drm/intel/issues/2868 [i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301 [i915#3834]: https://gitlab.freedesktop.org/drm/intel/issues/3834 [i915#3925]: https://gitlab.freedesktop.org/drm/intel/issues/3925 [i915#4002]: https://gitlab.freedesktop.org/drm/intel/issues/4002 [i915#4006]: https://gitlab.freedesktop.org/drm/intel/issues/4006 Participating hosts (38 -> 34) ------------------------------ Additional (1): fi-tgl-1115g4 Missing (5): fi-ilk-m540 bat-adls-5 fi-bsw-cyan bat-jsl-1 fi-bdw-samus Build changes ------------- * Linux: CI_DRM_10530 -> Patchwork_20910 CI-20190529: 20190529 CI_DRM_10530: 63bca765c920120bd9746d9093190d82c4ace341 @ git://anongit.freedesktop.org/gfx-ci/linux IGT_6187: 1afd52c1471dafdf521eae431f3e228826de6de2 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git Patchwork_20910: a8e23cbe36b7ed5f33882c4fa67aa42c15a5d968 @ 
git://anongit.freedesktop.org/gfx-ci/linux

== Linux commits ==

a8e23cbe36b7 drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/index.html
* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup
  2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin
                    ` (5 preceding siblings ...)
@ 2021-08-27 17:35 ` Patchwork
  0 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2021-08-27 17:35 UTC
To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup
URL   : https://patchwork.freedesktop.org/series/94105/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10530_full -> Patchwork_20909_full
====================================================

Summary
-------

**SUCCESS**

No regressions found.

Known issues
------------

Here are the changes found in Patchwork_20909_full that come from known issues:

### IGT changes ###

#### Issues hit ####

* igt@gem_create@create-massive:
  - shard-kbl: NOTRUN -> [DMESG-WARN][1] ([i915#3002])
  [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@gem_create@create-massive.html

* igt@gem_ctx_persistence@legacy-engines-mixed-process:
  - shard-snb: NOTRUN -> [SKIP][2] ([fdo#109271] / [i915#1099]) +5 similar issues
  [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb5/igt@gem_ctx_persistence@legacy-engines-mixed-process.html

* igt@gem_eio@unwedge-stress:
  - shard-tglb: [PASS][3] -> [TIMEOUT][4] ([i915#2369] / [i915#3063] / [i915#3648])
  [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb3/igt@gem_eio@unwedge-stress.html
  [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-tglb5/igt@gem_eio@unwedge-stress.html

* igt@gem_exec_fair@basic-deadline:
  - shard-apl: NOTRUN -> [FAIL][5] ([i915#2846])
  [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@gem_exec_fair@basic-deadline.html

* igt@gem_exec_fair@basic-flow@rcs0:
  - shard-tglb: [PASS][6] -> [FAIL][7] ([i915#2842])
  [6]:
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb1/igt@gem_exec_fair@basic-flow@rcs0.html [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-tglb5/igt@gem_exec_fair@basic-flow@rcs0.html * igt@gem_exec_fair@basic-pace-solo@rcs0: - shard-glk: [PASS][8] -> [FAIL][9] ([i915#2842]) +2 similar issues [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk4/igt@gem_exec_fair@basic-pace-solo@rcs0.html [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk8/igt@gem_exec_fair@basic-pace-solo@rcs0.html * igt@gem_exec_fair@basic-pace@rcs0: - shard-iclb: [PASS][10] -> [FAIL][11] ([i915#2842]) [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb3/igt@gem_exec_fair@basic-pace@rcs0.html [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb1/igt@gem_exec_fair@basic-pace@rcs0.html * igt@gem_exec_fair@basic-pace@vcs1: - shard-iclb: NOTRUN -> [FAIL][12] ([i915#2842]) [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb1/igt@gem_exec_fair@basic-pace@vcs1.html - shard-kbl: [PASS][13] -> [SKIP][14] ([fdo#109271]) [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl6/igt@gem_exec_fair@basic-pace@vcs1.html [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@gem_exec_fair@basic-pace@vcs1.html * igt@gem_exec_fair@basic-throttle@rcs0: - shard-iclb: [PASS][15] -> [FAIL][16] ([i915#2849]) [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb8/igt@gem_exec_fair@basic-throttle@rcs0.html [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb6/igt@gem_exec_fair@basic-throttle@rcs0.html * igt@gem_exec_params@no-blt: - shard-iclb: NOTRUN -> [SKIP][17] ([fdo#109283]) [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb8/igt@gem_exec_params@no-blt.html * igt@gem_mmap_gtt@cpuset-big-copy-xy: - shard-glk: [PASS][18] -> [FAIL][19] ([i915#1888] / [i915#307]) [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk6/igt@gem_mmap_gtt@cpuset-big-copy-xy.html [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk5/igt@gem_mmap_gtt@cpuset-big-copy-xy.html * igt@gem_userptr_blits@dmabuf-sync: - shard-apl: NOTRUN -> [SKIP][20] ([fdo#109271] / [i915#3323]) [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@gem_userptr_blits@dmabuf-sync.html * igt@gem_userptr_blits@input-checking: - shard-skl: NOTRUN -> [DMESG-WARN][21] ([i915#3002]) [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl6/igt@gem_userptr_blits@input-checking.html - shard-apl: NOTRUN -> [DMESG-WARN][22] ([i915#3002]) [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl1/igt@gem_userptr_blits@input-checking.html - shard-snb: NOTRUN -> [DMESG-WARN][23] ([i915#3002]) [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb6/igt@gem_userptr_blits@input-checking.html * igt@gen9_exec_parse@allowed-single: - shard-skl: [PASS][24] -> [DMESG-WARN][25] ([i915#1436] / [i915#716]) [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl3/igt@gen9_exec_parse@allowed-single.html [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl9/igt@gen9_exec_parse@allowed-single.html * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip: - shard-apl: NOTRUN -> [SKIP][26] ([fdo#109271] / [i915#3777]) +1 similar issue [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl1/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip: - shard-kbl: NOTRUN -> [SKIP][27] ([fdo#109271] / [i915#3777]) [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip.html * igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs: - shard-glk: NOTRUN -> [SKIP][28] ([fdo#109271] / [i915#3886]) +2 similar 
issues [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_mc_ccs: - shard-kbl: NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3886]) +2 similar issues [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl4/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs: - shard-apl: NOTRUN -> [SKIP][30] ([fdo#109271] / [i915#3886]) +12 similar issues [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs: - shard-snb: NOTRUN -> [SKIP][31] ([fdo#109271]) +327 similar issues [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb5/igt@kms_ccs@pipe-d-bad-pixel-format-y_tiled_ccs.html * igt@kms_ccs@pipe-d-crc-primary-rotation-180-y_tiled_gen12_mc_ccs: - shard-glk: NOTRUN -> [SKIP][32] ([fdo#109271]) +47 similar issues [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_ccs@pipe-d-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html * igt@kms_chamelium@hdmi-edid-change-during-suspend: - shard-apl: NOTRUN -> [SKIP][33] ([fdo#109271] / [fdo#111827]) +19 similar issues [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@kms_chamelium@hdmi-edid-change-during-suspend.html * igt@kms_chamelium@hdmi-mode-timings: - shard-kbl: NOTRUN -> [SKIP][34] ([fdo#109271] / [fdo#111827]) +9 similar issues [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@kms_chamelium@hdmi-mode-timings.html * igt@kms_chamelium@vga-hpd-after-suspend: - shard-glk: NOTRUN -> [SKIP][35] ([fdo#109271] / [fdo#111827]) +2 similar issues [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_chamelium@vga-hpd-after-suspend.html * 
igt@kms_color@pipe-c-ctm-0-75: - shard-skl: [PASS][36] -> [DMESG-WARN][37] ([i915#1982]) [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_color@pipe-c-ctm-0-75.html [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl7/igt@kms_color@pipe-c-ctm-0-75.html * igt@kms_color@pipe-d-ctm-negative: - shard-skl: NOTRUN -> [SKIP][38] ([fdo#109271]) +1 similar issue [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl6/igt@kms_color@pipe-d-ctm-negative.html * igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes: - shard-snb: NOTRUN -> [SKIP][39] ([fdo#109271] / [fdo#111827]) +15 similar issues [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb5/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html * igt@kms_cursor_crc@pipe-c-cursor-512x512-onscreen: - shard-iclb: NOTRUN -> [SKIP][40] ([fdo#109278] / [fdo#109279]) [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb8/igt@kms_cursor_crc@pipe-c-cursor-512x512-onscreen.html * igt@kms_cursor_legacy@flip-vs-cursor-legacy: - shard-skl: [PASS][41] -> [FAIL][42] ([i915#2346]) [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl2/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1: - shard-glk: [PASS][43] -> [FAIL][44] ([i915#2122]) [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk6/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1.html [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk5/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a1.html * igt@kms_flip@flip-vs-expired-vblank@a-edp1: - shard-tglb: [PASS][45] -> [FAIL][46] ([i915#79]) [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb6/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html [46]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-tglb2/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1: - shard-kbl: [PASS][47] -> [DMESG-WARN][48] ([i915#180]) +2 similar issues [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl6/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl1/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html * igt@kms_flip@plain-flip-fb-recreate@c-edp1: - shard-skl: [PASS][49] -> [FAIL][50] ([i915#2122]) +1 similar issue [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl7/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs: - shard-apl: NOTRUN -> [SKIP][51] ([fdo#109271] / [i915#2672]) [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt: - shard-kbl: NOTRUN -> [SKIP][52] ([fdo#109271]) +68 similar issues [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc: - shard-apl: NOTRUN -> [SKIP][53] ([fdo#109271]) +192 similar issues [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite: - shard-tglb: NOTRUN -> [SKIP][54] ([fdo#111825]) [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-tglb5/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite.html * igt@kms_hdr@bpc-switch: - 
shard-skl: [PASS][55] -> [FAIL][56] ([i915#1188]) [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl7/igt@kms_hdr@bpc-switch.html [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl4/igt@kms_hdr@bpc-switch.html * igt@kms_pipe_crc_basic@hang-read-crc-pipe-d: - shard-apl: NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#533]) +1 similar issue [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@kms_pipe_crc_basic@hang-read-crc-pipe-d.html * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence: - shard-glk: NOTRUN -> [SKIP][58] ([fdo#109271] / [i915#533]) [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html * igt@kms_plane_alpha_blend@pipe-a-alpha-basic: - shard-apl: NOTRUN -> [FAIL][59] ([fdo#108145] / [i915#265]) +4 similar issues [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb: - shard-glk: NOTRUN -> [FAIL][60] ([fdo#108145] / [i915#265]) [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb: - shard-apl: NOTRUN -> [FAIL][61] ([i915#265]) [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl1/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max: - shard-kbl: NOTRUN -> [FAIL][62] ([fdo#108145] / [i915#265]) [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl4/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max.html * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping: - shard-apl: NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#2733]) [63]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5: - shard-glk: NOTRUN -> [SKIP][64] ([fdo#109271] / [i915#658]) [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk4/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4: - shard-apl: NOTRUN -> [SKIP][65] ([fdo#109271] / [i915#658]) +4 similar issues [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html * igt@kms_psr2_su@page_flip: - shard-kbl: NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#658]) +1 similar issue [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@kms_psr2_su@page_flip.html - shard-iclb: [PASS][67] -> [SKIP][68] ([fdo#109642] / [fdo#111068] / [i915#658]) [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb2/igt@kms_psr2_su@page_flip.html [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb4/igt@kms_psr2_su@page_flip.html * igt@kms_psr@psr2_sprite_blt: - shard-iclb: [PASS][69] -> [SKIP][70] ([fdo#109441]) +2 similar issues [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb8/igt@kms_psr@psr2_sprite_blt.html * igt@kms_setmode@basic: - shard-snb: NOTRUN -> [FAIL][71] ([i915#31]) [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb5/igt@kms_setmode@basic.html * igt@kms_sysfs_edid_timing: - shard-kbl: NOTRUN -> [FAIL][72] ([IGT#2]) [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@kms_sysfs_edid_timing.html * igt@kms_writeback@writeback-check-output: - shard-kbl: NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#2437]) [73]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl2/igt@kms_writeback@writeback-check-output.html * igt@kms_writeback@writeback-invalid-parameters: - shard-apl: NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#2437]) [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl6/igt@kms_writeback@writeback-invalid-parameters.html * igt@runner@aborted: - shard-snb: NOTRUN -> [FAIL][75] ([i915#3002]) [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-snb6/igt@runner@aborted.html * igt@sysfs_clients@create: - shard-apl: NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#2994]) +1 similar issue [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl2/igt@sysfs_clients@create.html * igt@sysfs_clients@recycle: - shard-kbl: NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#2994]) [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl4/igt@sysfs_clients@recycle.html #### Possible fixes #### * igt@feature_discovery@psr2: - shard-iclb: [SKIP][78] ([i915#658]) -> [PASS][79] [78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@feature_discovery@psr2.html [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb2/igt@feature_discovery@psr2.html * igt@gem_exec_endless@dispatch@vecs0: - {shard-rkl}: [INCOMPLETE][80] ([i915#3778]) -> [PASS][81] [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-1/igt@gem_exec_endless@dispatch@vecs0.html [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-rkl-5/igt@gem_exec_endless@dispatch@vecs0.html * igt@gem_exec_fair@basic-deadline: - {shard-rkl}: [FAIL][82] ([i915#2846]) -> [PASS][83] [82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-2/igt@gem_exec_fair@basic-deadline.html [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-rkl-1/igt@gem_exec_fair@basic-deadline.html * igt@gem_exec_fair@basic-throttle@rcs0: - {shard-rkl}: [FAIL][84] ([i915#2842]) -> [PASS][85] +1 similar 
issue [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-6/igt@gem_exec_fair@basic-throttle@rcs0.html [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-rkl-2/igt@gem_exec_fair@basic-throttle@rcs0.html * igt@gem_mmap_gtt@cpuset-big-copy-xy: - shard-iclb: [FAIL][86] ([i915#307]) -> [PASS][87] [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@gem_mmap_gtt@cpuset-big-copy-xy.html [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb2/igt@gem_mmap_gtt@cpuset-big-copy-xy.html * igt@gem_softpin@noreloc-s3: - shard-skl: [INCOMPLETE][88] ([i915#198]) -> [PASS][89] [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl8/igt@gem_softpin@noreloc-s3.html [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl6/igt@gem_softpin@noreloc-s3.html * igt@gem_workarounds@suspend-resume-context: - shard-apl: [DMESG-WARN][90] ([i915#180]) -> [PASS][91] [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl2/igt@gem_workarounds@suspend-resume-context.html [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl1/igt@gem_workarounds@suspend-resume-context.html * igt@gen9_exec_parse@allowed-all: - shard-glk: [DMESG-WARN][92] ([i915#1436] / [i915#716]) -> [PASS][93] [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk1/igt@gen9_exec_parse@allowed-all.html [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-glk6/igt@gen9_exec_parse@allowed-all.html * igt@kms_cursor_crc@pipe-b-cursor-suspend: - shard-kbl: [DMESG-WARN][94] ([i915#180]) -> [PASS][95] +1 similar issue [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@kms_cursor_crc@pipe-b-cursor-suspend.html [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions: - shard-skl: [FAIL][96] ([i915#2346]) -> [PASS][97] [96]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html * igt@kms_fbcon_fbt@fbc-suspend: - shard-kbl: [INCOMPLETE][98] ([i915#155] / [i915#180] / [i915#636]) -> [PASS][99] [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl7/igt@kms_fbcon_fbt@fbc-suspend.html [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl2/igt@kms_fbcon_fbt@fbc-suspend.html * igt@kms_flip@flip-vs-expired-vblank@a-edp1: - shard-skl: [FAIL][100] ([i915#79]) -> [PASS][101] +1 similar issue [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl4/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl6/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html * igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1: - shard-skl: [FAIL][102] ([i915#2122]) -> [PASS][103] +2 similar issues [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl6/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl3/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min: - shard-skl: [FAIL][104] ([fdo#108145] / [i915#265]) -> [PASS][105] [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html * igt@kms_psr2_su@frontbuffer: - shard-iclb: [SKIP][106] ([fdo#109642] / [fdo#111068] / [i915#658]) -> [PASS][107] [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb1/igt@kms_psr2_su@frontbuffer.html [107]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb2/igt@kms_psr2_su@frontbuffer.html * igt@kms_psr@psr2_no_drrs: - shard-iclb: [SKIP][108] ([fdo#109441]) -> [PASS][109] +2 similar issues [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@kms_psr@psr2_no_drrs.html [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb2/igt@kms_psr@psr2_no_drrs.html * igt@perf@blocking-parameterized: - {shard-rkl}: [FAIL][110] ([i915#3793]) -> [PASS][111] [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-2/igt@perf@blocking-parameterized.html [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-rkl-1/igt@perf@blocking-parameterized.html * igt@perf@polling: - shard-skl: [FAIL][112] ([i915#1542]) -> [PASS][113] +1 similar issue [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl5/igt@perf@polling.html [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl10/igt@perf@polling.html #### Warnings #### * igt@i915_pm_rc6_residency@rc6-fence: - shard-iclb: [WARN][114] ([i915#1804] / [i915#2684]) -> [WARN][115] ([i915#2684]) [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb7/igt@i915_pm_rc6_residency@rc6-fence.html [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb5/igt@i915_pm_rc6_residency@rc6-fence.html * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1: - shard-iclb: [SKIP][116] ([i915#2920]) -> [SKIP][117] ([i915#658]) [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb3/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3: - shard-iclb: [SKIP][118] ([i915#658]) -> [SKIP][119] ([i915#2920]) +1 similar issue [118]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-iclb2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html * igt@runner@aborted: - shard-kbl: ([FAIL][120], [FAIL][121], [FAIL][122], [FAIL][123], [FAIL][124]) ([i915#180] / [i915#1814] / [i915#3002] / [i915#3363] / [i915#602] / [i915#92]) -> ([FAIL][125], [FAIL][126], [FAIL][127], [FAIL][128]) ([i915#180] / [i915#1814] / [i915#3002] / [i915#3363] / [i915#602]) [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl1/igt@runner@aborted.html [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@runner@aborted.html [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@runner@aborted.html [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl6/igt@runner@aborted.html [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl7/igt@runner@aborted.html [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl2/igt@runner@aborted.html [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl1/igt@runner@aborted.html [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl7/igt@runner@aborted.html [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-kbl3/igt@runner@aborted.html - shard-apl: ([FAIL][129], [FAIL][130]) ([fdo#109271] / [i915#180] / [i915#1814] / [i915#3363]) -> [FAIL][131] ([i915#3002] / [i915#3363]) [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl2/igt@runner@aborted.html [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl6/igt@runner@aborted.html [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-apl1/igt@runner@aborted.html - shard-skl: [FAIL][132] ([i915#3002] / [i915#3363]) -> ([FAIL][133], [FAIL][134], [FAIL][135]) ([i915#1436] / [i915#3002] / 
[i915#3363])
  [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl6/igt@runner@aborted.html
  [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl9/igt@runner@aborted.html
  [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl3/igt@runner@aborted.html
  [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/shard-skl6/igt@runner@aborted.html

{name}: This element is suppressed. This means it is ignored when computing the status of the difference (SUCCESS, WARNING, or FAILURE).

[IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2
[fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
[fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
[fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
[fdo#109300]: https://bugs.freedesktop.org/show_bug.cgi?id=109300
[fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
[fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
[fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
[fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
[fdo#111314]: https://bugs.freedesktop.org/show_bug.cgi?id=111314
[fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
[fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
[fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[fdo#112022]: https://bugs.freedesktop.org/show_bug.cgi?id=112022
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
[i915#1149]: https://gitlab.freedesktop.org/drm/intel/issues/1149
[i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
[i915#132]: https://gitlab.freedesktop.org/drm/intel/issues/132
[i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
[i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542
[i915#155]: https://gitlab.freedesktop.org/drm/intel/issues/155
[i915#1722]: https://gitlab.freedesktop.org/drm/intel/issues/1722
[i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
[i915#1804]: https://gitlab.freedesktop.org/drm/intel/issues/1804
[i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
[i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
[i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
[i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
[i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888
[i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
[i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
[i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
[i915#2369]: https://gitlab.freedesktop.org/drm/intel/issues/2369
[i915#2410]: https

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20909/index.html
* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2)
  2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin
                    ` (6 preceding siblings ...)
@ 2021-08-27 18:25 ` Patchwork
  0 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2021-08-27 18:25 UTC
To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2)
URL   : https://patchwork.freedesktop.org/series/94105/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10530_full -> Patchwork_20910_full
====================================================

Summary
-------

**SUCCESS**

No regressions found.

Known issues
------------

Here are the changes found in Patchwork_20910_full that come from known issues:

### IGT changes ###

#### Issues hit ####

* igt@gem_create@create-massive:
  - shard-kbl: NOTRUN -> [DMESG-WARN][1] ([i915#3002])
  [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl2/igt@gem_create@create-massive.html

* igt@gem_ctx_isolation@preservation-s3@vcs0:
  - shard-skl: [PASS][2] -> [INCOMPLETE][3] ([i915#146] / [i915#198])
  [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl2/igt@gem_ctx_isolation@preservation-s3@vcs0.html
  [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl2/igt@gem_ctx_isolation@preservation-s3@vcs0.html

* igt@gem_ctx_persistence@legacy-engines-mixed:
  - shard-snb: NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#1099]) +2 similar issues
  [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-snb7/igt@gem_ctx_persistence@legacy-engines-mixed.html

* igt@gem_eio@unwedge-stress:
  - shard-tglb: [PASS][5] -> [TIMEOUT][6] ([i915#2369] / [i915#3063] / [i915#3648])
  [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb3/igt@gem_eio@unwedge-stress.html
  [6]:
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-tglb6/igt@gem_eio@unwedge-stress.html * igt@gem_exec_fair@basic-deadline: - shard-apl: NOTRUN -> [FAIL][7] ([i915#2846]) [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl6/igt@gem_exec_fair@basic-deadline.html * igt@gem_exec_fair@basic-none-share@rcs0: - shard-tglb: [PASS][8] -> [FAIL][9] ([i915#2842]) [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb7/igt@gem_exec_fair@basic-none-share@rcs0.html [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-tglb8/igt@gem_exec_fair@basic-none-share@rcs0.html - shard-apl: [PASS][10] -> [SKIP][11] ([fdo#109271]) [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl6/igt@gem_exec_fair@basic-none-share@rcs0.html [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl3/igt@gem_exec_fair@basic-none-share@rcs0.html * igt@gem_exec_fair@basic-none-solo@rcs0: - shard-kbl: [PASS][12] -> [FAIL][13] ([i915#2842]) +2 similar issues [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl7/igt@gem_exec_fair@basic-none-solo@rcs0.html [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl7/igt@gem_exec_fair@basic-none-solo@rcs0.html * igt@gem_exec_fair@basic-pace-solo@rcs0: - shard-glk: [PASS][14] -> [FAIL][15] ([i915#2842]) +3 similar issues [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk4/igt@gem_exec_fair@basic-pace-solo@rcs0.html [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@gem_exec_fair@basic-pace-solo@rcs0.html * igt@gem_exec_fair@basic-pace@rcs0: - shard-iclb: [PASS][16] -> [FAIL][17] ([i915#2842]) [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb3/igt@gem_exec_fair@basic-pace@rcs0.html [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb1/igt@gem_exec_fair@basic-pace@rcs0.html * igt@gem_exec_fair@basic-pace@vcs1: - shard-iclb: NOTRUN -> [FAIL][18] 
([i915#2842]) +1 similar issue [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb1/igt@gem_exec_fair@basic-pace@vcs1.html * igt@gem_exec_fair@basic-throttle@rcs0: - shard-iclb: [PASS][19] -> [FAIL][20] ([i915#2849]) [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb8/igt@gem_exec_fair@basic-throttle@rcs0.html [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb3/igt@gem_exec_fair@basic-throttle@rcs0.html * igt@gem_exec_params@no-blt: - shard-iclb: NOTRUN -> [SKIP][21] ([fdo#109283]) [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb5/igt@gem_exec_params@no-blt.html * igt@gem_huc_copy@huc-copy: - shard-apl: NOTRUN -> [SKIP][22] ([fdo#109271] / [i915#2190]) [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@gem_huc_copy@huc-copy.html * igt@gem_userptr_blits@input-checking: - shard-skl: NOTRUN -> [DMESG-WARN][23] ([i915#3002]) [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl6/igt@gem_userptr_blits@input-checking.html - shard-apl: NOTRUN -> [DMESG-WARN][24] ([i915#3002]) +1 similar issue [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@gem_userptr_blits@input-checking.html - shard-snb: NOTRUN -> [DMESG-WARN][25] ([i915#3002]) [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-snb5/igt@gem_userptr_blits@input-checking.html * igt@gem_workarounds@suspend-resume-context: - shard-skl: [PASS][26] -> [INCOMPLETE][27] ([i915#198]) [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl3/igt@gem_workarounds@suspend-resume-context.html [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl6/igt@gem_workarounds@suspend-resume-context.html * igt@gen9_exec_parse@allowed-single: - shard-skl: [PASS][28] -> [DMESG-WARN][29] ([i915#1436] / [i915#716]) [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl3/igt@gen9_exec_parse@allowed-single.html [29]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl7/igt@gen9_exec_parse@allowed-single.html * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip: - shard-apl: NOTRUN -> [SKIP][30] ([fdo#109271] / [i915#3777]) [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip: - shard-kbl: NOTRUN -> [SKIP][31] ([fdo#109271] / [i915#3777]) +1 similar issue [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip.html * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0: - shard-apl: NOTRUN -> [SKIP][32] ([fdo#109271]) +197 similar issues [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html * igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs: - shard-glk: NOTRUN -> [SKIP][33] ([fdo#109271] / [i915#3886]) +2 similar issues [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_mc_ccs: - shard-kbl: NOTRUN -> [SKIP][34] ([fdo#109271] / [i915#3886]) +1 similar issue [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs: - shard-apl: NOTRUN -> [SKIP][35] ([fdo#109271] / [i915#3886]) +10 similar issues [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl1/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs.html * igt@kms_ccs@pipe-d-crc-primary-rotation-180-y_tiled_gen12_mc_ccs: - shard-glk: NOTRUN -> [SKIP][36] ([fdo#109271]) +49 similar issues [36]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_ccs@pipe-d-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html * igt@kms_chamelium@hdmi-crc-multiple: - shard-snb: NOTRUN -> [SKIP][37] ([fdo#109271] / [fdo#111827]) +5 similar issues [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-snb7/igt@kms_chamelium@hdmi-crc-multiple.html * igt@kms_chamelium@hdmi-edid-change-during-suspend: - shard-apl: NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +20 similar issues [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_chamelium@hdmi-edid-change-during-suspend.html * igt@kms_chamelium@hdmi-mode-timings: - shard-kbl: NOTRUN -> [SKIP][39] ([fdo#109271] / [fdo#111827]) +10 similar issues [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl2/igt@kms_chamelium@hdmi-mode-timings.html * igt@kms_chamelium@vga-hpd-after-suspend: - shard-glk: NOTRUN -> [SKIP][40] ([fdo#109271] / [fdo#111827]) +2 similar issues [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_chamelium@vga-hpd-after-suspend.html * igt@kms_color@pipe-a-ctm-blue-to-red: - shard-skl: [PASS][41] -> [DMESG-WARN][42] ([i915#1982]) [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl3/igt@kms_color@pipe-a-ctm-blue-to-red.html [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl7/igt@kms_color@pipe-a-ctm-blue-to-red.html * igt@kms_color@pipe-d-ctm-negative: - shard-skl: NOTRUN -> [SKIP][43] ([fdo#109271]) +1 similar issue [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl6/igt@kms_color@pipe-d-ctm-negative.html * igt@kms_cursor_crc@pipe-c-cursor-512x512-onscreen: - shard-iclb: NOTRUN -> [SKIP][44] ([fdo#109278] / [fdo#109279]) [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb5/igt@kms_cursor_crc@pipe-c-cursor-512x512-onscreen.html * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1: - shard-skl: [PASS][45] -> 
[FAIL][46] ([i915#79]) [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl6/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl8/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html * igt@kms_flip@plain-flip-fb-recreate@b-edp1: - shard-skl: [PASS][47] -> [FAIL][48] ([i915#2122]) +1 similar issue [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_flip@plain-flip-fb-recreate@b-edp1.html [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl3/igt@kms_flip@plain-flip-fb-recreate@b-edp1.html * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs: - shard-apl: NOTRUN -> [SKIP][49] ([fdo#109271] / [i915#2672]) [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt: - shard-kbl: NOTRUN -> [SKIP][50] ([fdo#109271]) +84 similar issues [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html * igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite: - shard-tglb: NOTRUN -> [SKIP][51] ([fdo#111825]) [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-tglb6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-pwrite.html * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence: - shard-glk: NOTRUN -> [SKIP][52] ([fdo#109271] / [i915#533]) [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes: - shard-apl: [PASS][53] -> [DMESG-WARN][54] ([i915#180]) +1 similar issue [53]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl3/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl2/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html * igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb: - shard-glk: NOTRUN -> [FAIL][55] ([fdo#108145] / [i915#265]) [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb: - shard-apl: NOTRUN -> [FAIL][56] ([i915#265]) [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max: - shard-apl: NOTRUN -> [FAIL][57] ([fdo#108145] / [i915#265]) +3 similar issues [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl1/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max.html * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc: - shard-skl: [PASS][58] -> [FAIL][59] ([fdo#108145] / [i915#265]) [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl3/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max: - shard-kbl: NOTRUN -> [FAIL][60] ([fdo#108145] / [i915#265]) [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max.html * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping: - shard-apl: NOTRUN -> [SKIP][61] ([fdo#109271] / [i915#2733]) [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl1/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html * 
igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5: - shard-glk: NOTRUN -> [SKIP][62] ([fdo#109271] / [i915#658]) [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk6/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-5.html * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4: - shard-apl: NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#658]) +5 similar issues [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl1/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html * igt@kms_psr2_su@page_flip: - shard-kbl: NOTRUN -> [SKIP][64] ([fdo#109271] / [i915#658]) +2 similar issues [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl2/igt@kms_psr2_su@page_flip.html * igt@kms_psr@psr2_sprite_blt: - shard-iclb: [PASS][65] -> [SKIP][66] ([fdo#109441]) +1 similar issue [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb2/igt@kms_psr@psr2_sprite_blt.html [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb5/igt@kms_psr@psr2_sprite_blt.html * igt@kms_sysfs_edid_timing: - shard-apl: NOTRUN -> [FAIL][67] ([IGT#2]) [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl8/igt@kms_sysfs_edid_timing.html - shard-kbl: NOTRUN -> [FAIL][68] ([IGT#2]) [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl2/igt@kms_sysfs_edid_timing.html * igt@kms_vblank@pipe-d-wait-idle: - shard-apl: NOTRUN -> [SKIP][69] ([fdo#109271] / [i915#533]) +2 similar issues [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@kms_vblank@pipe-d-wait-idle.html * igt@kms_writeback@writeback-invalid-parameters: - shard-apl: NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#2437]) [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl6/igt@kms_writeback@writeback-invalid-parameters.html * igt@runner@aborted: - shard-snb: NOTRUN -> [FAIL][71] ([i915#3002]) [71]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-snb5/igt@runner@aborted.html * igt@sysfs_clients@recycle: - shard-kbl: NOTRUN -> [SKIP][72] ([fdo#109271] / [i915#2994]) +1 similar issue [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@sysfs_clients@recycle.html * igt@sysfs_clients@recycle-many: - shard-apl: NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#2994]) +1 similar issue [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@sysfs_clients@recycle-many.html * igt@sysfs_heartbeat_interval@precise@rcs0: - shard-snb: NOTRUN -> [SKIP][74] ([fdo#109271]) +104 similar issues [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-snb5/igt@sysfs_heartbeat_interval@precise@rcs0.html #### Possible fixes #### * igt@feature_discovery@psr2: - shard-iclb: [SKIP][75] ([i915#658]) -> [PASS][76] [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@feature_discovery@psr2.html [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb2/igt@feature_discovery@psr2.html * igt@gem_eio@unwedge-stress: - shard-iclb: [TIMEOUT][77] ([i915#2369] / [i915#2481] / [i915#3070]) -> [PASS][78] [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb1/igt@gem_eio@unwedge-stress.html [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb8/igt@gem_eio@unwedge-stress.html * igt@gem_exec_endless@dispatch@vecs0: - {shard-rkl}: [INCOMPLETE][79] ([i915#3778]) -> [PASS][80] [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-1/igt@gem_exec_endless@dispatch@vecs0.html [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-rkl-2/igt@gem_exec_endless@dispatch@vecs0.html * igt@gem_exec_fair@basic-none@vecs0: - shard-kbl: [FAIL][81] ([i915#2842]) -> [PASS][82] [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl7/igt@gem_exec_fair@basic-none@vecs0.html [82]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl7/igt@gem_exec_fair@basic-none@vecs0.html * igt@gem_exec_fair@basic-pace@vecs0: - shard-tglb: [FAIL][83] ([i915#2842]) -> [PASS][84] +1 similar issue [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-tglb8/igt@gem_exec_fair@basic-pace@vecs0.html [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-tglb7/igt@gem_exec_fair@basic-pace@vecs0.html * igt@gem_exec_fair@basic-throttle@rcs0: - {shard-rkl}: [FAIL][85] ([i915#2842]) -> [PASS][86] +1 similar issue [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-6/igt@gem_exec_fair@basic-throttle@rcs0.html [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-rkl-2/igt@gem_exec_fair@basic-throttle@rcs0.html * igt@gem_mmap_gtt@cpuset-big-copy-xy: - shard-iclb: [FAIL][87] ([i915#307]) -> [PASS][88] [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@gem_mmap_gtt@cpuset-big-copy-xy.html [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb2/igt@gem_mmap_gtt@cpuset-big-copy-xy.html * igt@gem_softpin@noreloc-s3: - shard-skl: [INCOMPLETE][89] ([i915#198]) -> [PASS][90] [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl8/igt@gem_softpin@noreloc-s3.html [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl6/igt@gem_softpin@noreloc-s3.html * igt@gen9_exec_parse@allowed-all: - shard-glk: [DMESG-WARN][91] ([i915#1436] / [i915#716]) -> [PASS][92] [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-glk1/igt@gen9_exec_parse@allowed-all.html [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-glk8/igt@gen9_exec_parse@allowed-all.html * igt@kms_cursor_crc@pipe-c-cursor-suspend: - shard-apl: [DMESG-WARN][93] ([i915#180]) -> [PASS][94] [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html [94]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl8/igt@kms_cursor_crc@pipe-c-cursor-suspend.html * igt@kms_flip@flip-vs-expired-vblank@a-edp1: - shard-skl: [FAIL][95] ([i915#79]) -> [PASS][96] +1 similar issue [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl4/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl9/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html * igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1: - shard-skl: [FAIL][97] ([i915#2122]) -> [PASS][98] +2 similar issues [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl6/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl4/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html * igt@kms_hdr@bpc-switch-dpms: - shard-skl: [FAIL][99] ([i915#1188]) -> [PASS][100] [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl9/igt@kms_hdr@bpc-switch-dpms.html [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl7/igt@kms_hdr@bpc-switch-dpms.html * igt@kms_psr2_su@frontbuffer: - shard-iclb: [SKIP][101] ([fdo#109642] / [fdo#111068] / [i915#658]) -> [PASS][102] [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb1/igt@kms_psr2_su@frontbuffer.html [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb2/igt@kms_psr2_su@frontbuffer.html * igt@kms_psr@psr2_no_drrs: - shard-iclb: [SKIP][103] ([fdo#109441]) -> [PASS][104] +1 similar issue [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@kms_psr@psr2_no_drrs.html [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb2/igt@kms_psr@psr2_no_drrs.html * igt@kms_vblank@pipe-b-ts-continuation-suspend: - shard-kbl: [DMESG-WARN][105] ([i915#180]) -> [PASS][106] +2 similar issues [105]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@kms_vblank@pipe-b-ts-continuation-suspend.html [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl6/igt@kms_vblank@pipe-b-ts-continuation-suspend.html * igt@perf@blocking-parameterized: - {shard-rkl}: [FAIL][107] ([i915#3793]) -> [PASS][108] [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-rkl-2/igt@perf@blocking-parameterized.html [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-rkl-1/igt@perf@blocking-parameterized.html * igt@perf@polling: - shard-skl: [FAIL][109] ([i915#1542]) -> [PASS][110] +1 similar issue [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl5/igt@perf@polling.html [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl4/igt@perf@polling.html #### Warnings #### * igt@i915_pm_dc@dc9-dpms: - shard-skl: [SKIP][111] ([fdo#109271]) -> [INCOMPLETE][112] ([i915#198]) [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl10/igt@i915_pm_dc@dc9-dpms.html [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl5/igt@i915_pm_dc@dc9-dpms.html * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1: - shard-iclb: [SKIP][113] ([i915#2920]) -> [SKIP][114] ([i915#658]) [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb2/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3: - shard-iclb: [SKIP][115] ([i915#658]) -> [SKIP][116] ([i915#2920]) [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-iclb4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-iclb2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html * igt@runner@aborted: - shard-kbl: ([FAIL][117], [FAIL][118], 
[FAIL][119], [FAIL][120], [FAIL][121]) ([i915#180] / [i915#1814] / [i915#3002] / [i915#3363] / [i915#602] / [i915#92]) -> ([FAIL][122], [FAIL][123], [FAIL][124]) ([i915#180] / [i915#3002] / [i915#3363] / [i915#92]) [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl7/igt@runner@aborted.html [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@runner@aborted.html [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl6/igt@runner@aborted.html [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl4/igt@runner@aborted.html [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-kbl1/igt@runner@aborted.html [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl1/igt@runner@aborted.html [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl4/igt@runner@aborted.html [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-kbl2/igt@runner@aborted.html - shard-apl: ([FAIL][125], [FAIL][126]) ([fdo#109271] / [i915#180] / [i915#1814] / [i915#3363]) -> ([FAIL][127], [FAIL][128], [FAIL][129], [FAIL][130], [FAIL][131]) ([fdo#109271] / [i915#1610] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363]) [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl2/igt@runner@aborted.html [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-apl6/igt@runner@aborted.html [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl3/igt@runner@aborted.html [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl2/igt@runner@aborted.html [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl8/igt@runner@aborted.html [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl7/igt@runner@aborted.html [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-apl2/igt@runner@aborted.html - shard-skl: [FAIL][132] ([i915#3002] / [i915#3363]) -> ([FAIL][133], 
[FAIL][134], [FAIL][135]) ([i915#1436] / [i915#3002] / [i915#3363]) [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10530/shard-skl6/igt@runner@aborted.html [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl7/igt@runner@aborted.html [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl8/igt@runner@aborted.html [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/shard-skl6/igt@runner@aborted.html {name}: This element is suppressed. This means it is ignored when computing the status of the difference (SUCCESS, WARNING, or FAILURE). [IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2 [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145 [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271 [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278 [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279 [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283 [fdo#109300]: https://bugs.freedesktop.org/show_bug.cgi?id=109300 [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308 [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441 [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642 [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068 [fdo#111314]: https://bugs.freedesktop.org/show_bug.cgi?id=111314 [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614 [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615 [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825 [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827 [fdo#112022]: https://bugs.freedesktop.org/show_bug.cgi?id=112022 [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072 [i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099 [i915#1149]: https://gitlab.freedesktop.org/drm/intel/issues/1149 [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188 [i915#132]: 
https://gitlab.freedesktop.org/drm/intel/issues/132 [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436 [i915#146]: https://gitlab.freedesktop.org/drm/intel/issues/146 [i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542 [i915#1610]: https://gitlab.freedesktop.org/drm/intel/issues/1610 [i915#1722]: https://gitlab.freedesktop.org/drm/intel/issues/1722 [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180 [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814 [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825 [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845 [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849 [i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198 [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982 [i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029 [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122 [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190 [i915#2369]: https://gitlab.freedesktop.org/drm/intel/issues/2369 [i915#2410]: https://gitlab.freedesktop.org/drm/intel/issues/2410 [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437 [i915#2481]: https://gitlab.freedesktop.org/drm/intel/issues/2481 [i915#2530]: https://gitlab.freedesktop.org/drm/intel/issues/2530 [i915#2582]: https://gitlab.freedesktop.org/drm/intel/ == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20910/index.html [-- Attachment #2: Type: text/html, Size: 35711 bytes --] ^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup @ 2021-10-05 11:31 Tvrtko Ursulin 2021-10-05 13:05 ` Thomas Hellström 0 siblings, 1 reply; 24+ messages in thread From: Tvrtko Ursulin @ 2021-10-05 11:31 UTC (permalink / raw) To: Intel-gfx Cc: dri-devel, Tvrtko Ursulin, Daniel Vetter, Matthew Auld, Thomas Hellström From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) when rendering is done on Intel dgfx and scanout/composition on Intel igfx. Before this patch the driver was not quite ready for that setup, mainly because it was able to emit a semaphore wait between the two GPUs, which results in deadlocks because semaphore target location in HWSP is neither shared between the two, nor mapped in both GGTT spaces. To fix it the patch adds an additional check to a couple of relevant code paths in order to prevent using semaphores for inter-engine synchronisation when relevant objects are not in the same GGTT space. v2: * Avoid adding rq->i915. (Chris) v3: * Use GGTT which describes the limit more precisely. 
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> --- drivers/gpu/drm/i915/i915_request.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 79da5eca60af..4f189982f67e 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1145,6 +1145,12 @@ __emit_semaphore_wait(struct i915_request *to, return 0; } +static bool +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) +{ + return to->engine->gt->ggtt == from->engine->gt->ggtt; +} + static int emit_semaphore_wait(struct i915_request *to, struct i915_request *from, @@ -1153,6 +1159,9 @@ emit_semaphore_wait(struct i915_request *to, const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; struct i915_sw_fence *wait = &to->submit; + if (!can_use_semaphore_wait(to, from)) + goto await_fence; + if (!intel_context_use_semaphores(to->context)) goto await_fence; @@ -1256,7 +1265,8 @@ __i915_request_await_execution(struct i915_request *to, * immediate execution, and so we must wait until it reaches the * active slot. */ - if (intel_engine_has_semaphores(to->engine) && + if (can_use_semaphore_wait(to, from) && + intel_engine_has_semaphores(to->engine) && !i915_request_has_initial_breadcrumb(to)) { err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); if (err < 0) -- 2.30.2 ^ permalink raw reply related [flat|nested] 24+ messages in thread
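For readers following the thread outside the kernel tree: the core of the v3 fix is the single pointer comparison in can_use_semaphore_wait(). The toy C sketch below (plain user-space stand-ins, not the real i915 types) models that decision — a hardware semaphore wait is only permitted when waiter and signaler resolve to the same GGTT; otherwise the request must fall back to a software fence wait:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the driver structures involved;
 * the real definitions live in the i915 source tree. */
struct ggtt { int dummy; };
struct gt { struct ggtt *ggtt; };
struct engine { struct gt *gt; };
struct request { struct engine *engine; };

/* Mirrors the patch: a semaphore wait is only legal when both
 * requests' engines share one GGTT, i.e. the waiter can actually
 * reach the signaler's HWSP through its own GGTT mapping. */
static bool can_use_semaphore_wait(const struct request *to,
                                   const struct request *from)
{
	return to->engine->gt->ggtt == from->engine->gt->ggtt;
}
```

In the hybrid igfx + dgfx case the two devices have distinct GGTTs, so the check fails and emit_semaphore_wait() takes the await_fence path instead of emitting a semaphore the other GPU could never satisfy.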
* Re: [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-10-05 11:31 [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup Tvrtko Ursulin @ 2021-10-05 13:05 ` Thomas Hellström 2021-10-05 14:55 ` Tvrtko Ursulin 2021-10-13 12:06 ` Daniel Vetter 0 siblings, 2 replies; 24+ messages in thread From: Thomas Hellström @ 2021-10-05 13:05 UTC (permalink / raw) To: Tvrtko Ursulin, Intel-gfx Cc: dri-devel, Tvrtko Ursulin, Daniel Vetter, Matthew Auld Hi, Tvrtko, On 10/5/21 13:31, Tvrtko Ursulin wrote: > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > when rendering is done on Intel dgfx and scanout/composition on Intel > igfx. > > Before this patch the driver was not quite ready for that setup, mainly > because it was able to emit a semaphore wait between the two GPUs, which > results in deadlocks because semaphore target location in HWSP is neither > shared between the two, nor mapped in both GGTT spaces. > > To fix it the patch adds an additional check to a couple of relevant code > paths in order to prevent using semaphores for inter-engine > synchronisation when relevant objects are not in the same GGTT space. > > v2: > * Avoid adding rq->i915. (Chris) > > v3: > * Use GGTT which describes the limit more precisely. > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > Cc: Matthew Auld <matthew.auld@intel.com> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> An IMO pretty important bugfix. I read up a bit on the previous discussion on this, and from what I understand the other two options were 1) Ripping out the semaphore code, 2) Consider dma-fences from other instances of the same driver as foreign. For imported dma-bufs we do 2), but particularly with lmem and p2p that's a more straightforward decision. 
I don't think 1) is a reasonable approach to fix this bug, (but perhaps as a general cleanup?), and for 2) yes I guess we might end up doing that, unless we find some real benefits in treating same-driver-separate-device dma-fences as local, but for this particular bug, IMO this is a reasonable fix. So, Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> > --- > drivers/gpu/drm/i915/i915_request.c | 12 +++++++++++- > 1 file changed, 11 insertions(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > index 79da5eca60af..4f189982f67e 100644 > --- a/drivers/gpu/drm/i915/i915_request.c > +++ b/drivers/gpu/drm/i915/i915_request.c > @@ -1145,6 +1145,12 @@ __emit_semaphore_wait(struct i915_request *to, > return 0; > } > > +static bool > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > +{ > + return to->engine->gt->ggtt == from->engine->gt->ggtt; > +} > + > static int > emit_semaphore_wait(struct i915_request *to, > struct i915_request *from, > @@ -1153,6 +1159,9 @@ emit_semaphore_wait(struct i915_request *to, > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > struct i915_sw_fence *wait = &to->submit; > > + if (!can_use_semaphore_wait(to, from)) > + goto await_fence; > + > if (!intel_context_use_semaphores(to->context)) > goto await_fence; > > @@ -1256,7 +1265,8 @@ __i915_request_await_execution(struct i915_request *to, > * immediate execution, and so we must wait until it reaches the > * active slot. > */ > - if (intel_engine_has_semaphores(to->engine) && > + if (can_use_semaphore_wait(to, from) && > + intel_engine_has_semaphores(to->engine) && > !i915_request_has_initial_breadcrumb(to)) { > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > if (err < 0) ^ permalink raw reply [flat|nested] 24+ messages in thread
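Option 2) that Thomas mentions — treating dma-fences from other instances of the same driver as foreign — amounts to adding an instance-identity check next to the existing ops comparison. A hypothetical user-space C sketch of the distinction (none of these names are real i915 or dma-fence API; they only illustrate the two policies being debated):

```c
#include <stdbool.h>

/* Hypothetical stand-ins: a fence carries its ops table and a pointer
 * to the driver instance (device) that created it. */
struct fence_ops { const char *name; };
struct driver_instance { int id; };
struct fence {
	const struct fence_ops *ops;
	const struct driver_instance *owner;
};

static const struct fence_ops my_fence_ops = { "i915-like" };

/* Status-quo style check: any fence built on our ops table is "ours",
 * even if it came from a different device bound by the same driver. */
static bool fence_is_same_driver(const struct fence *f)
{
	return f->ops == &my_fence_ops;
}

/* Option 2) style check: only fences from this exact driver instance
 * are local; same-driver-different-device fences count as foreign and
 * would go through the generic dma-fence wait path. */
static bool fence_is_local(const struct fence *f,
                           const struct driver_instance *self)
{
	return f->ops == &my_fence_ops && f->owner == self;
}
```

Under option 2) the hybrid deadlock cannot arise by construction, at the cost of losing any same-driver fast paths (such as the priority-inheritance behaviour discussed later in the thread).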
* Re: [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-10-05 13:05 ` Thomas Hellström @ 2021-10-05 14:55 ` Tvrtko Ursulin 2021-10-13 12:06 ` Daniel Vetter 1 sibling, 0 replies; 24+ messages in thread From: Tvrtko Ursulin @ 2021-10-05 14:55 UTC (permalink / raw) To: Thomas Hellström, Intel-gfx Cc: dri-devel, Tvrtko Ursulin, Daniel Vetter, Matthew Auld On 05/10/2021 14:05, Thomas Hellström wrote: > Hi, Tvrtko, > > On 10/5/21 13:31, Tvrtko Ursulin wrote: >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >> >> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >> when rendering is done on Intel dgfx and scanout/composition on Intel >> igfx. >> >> Before this patch the driver was not quite ready for that setup, mainly >> because it was able to emit a semaphore wait between the two GPUs, which >> results in deadlocks because semaphore target location in HWSP is neither >> shared between the two, nor mapped in both GGTT spaces. >> >> To fix it the patch adds an additional check to a couple of relevant code >> paths in order to prevent using semaphores for inter-engine >> synchronisation when relevant objects are not in the same GGTT space. >> >> v2: >> * Avoid adding rq->i915. (Chris) >> >> v3: >> * Use GGTT which describes the limit more precisely. >> >> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> >> Cc: Matthew Auld <matthew.auld@intel.com> >> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> > > An IMO pretty important bugfix. I read up a bit on the previous > discussion on this, and from what I understand the other two options were > > 1) Ripping out the semaphore code, > 2) Consider dma-fences from other instances of the same driver as foreign. 
Yes, with the caveat on the second point that there is a multi-tile scenario, granted of limited consequence because it only applies if someone tries to run that without GuC, where the "same driver" check is not enough. This patch handles that case as well. And of course it is hypothetical that someone would be able to create an inter-tile dependency there; probably nothing in the current code does it. > For imported dma-bufs we do 2), but particularly with lmem and p2p > that's a more straightforward decision. I am not immediately familiar with p2p considerations. > I don't think 1) is a reasonable approach to fix this bug, (but perhaps > as a general cleanup?), and for 2) yes I guess we might end up doing > that, unless we find some real benefits in treating > same-driver-separate-device dma-fences as local, but for this particular > bug, IMO this is a reasonable fix. On the option of removing the inter-engine semaphore optimisation, I would not call it a cleanup since it had clear performance benefits. I personally don't have those benchmark results saved, though, so I'd proceed with caution there as long as the code can harmlessly remain in the confines of the execlists backend. Second topic: the whole same-driver fence upcast issue, I suppose, can be discussed along the lines of whether priority inheritance across drivers is useful. For instance page flip priority boost, which currently does work safely between i915 instances and is relevant to hybrid graphics. It was safe when I looked at it, courtesy of the global scheduler lock. Whether we want to keep that and formalise it via a more explicit/generic cross-driver API is the question. So unless it turns out not to be safe after all, I wouldn't rip it out before the discussion on the big picture happens. > So, > > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Thanks, I'll push it once it is again cleared by CI. 
Regards, Tvrtko > > > > >> --- >> drivers/gpu/drm/i915/i915_request.c | 12 +++++++++++- >> 1 file changed, 11 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/gpu/drm/i915/i915_request.c >> b/drivers/gpu/drm/i915/i915_request.c >> index 79da5eca60af..4f189982f67e 100644 >> --- a/drivers/gpu/drm/i915/i915_request.c >> +++ b/drivers/gpu/drm/i915/i915_request.c >> @@ -1145,6 +1145,12 @@ __emit_semaphore_wait(struct i915_request *to, >> return 0; >> } >> +static bool >> +can_use_semaphore_wait(struct i915_request *to, struct i915_request >> *from) >> +{ >> + return to->engine->gt->ggtt == from->engine->gt->ggtt; >> +} >> + >> static int >> emit_semaphore_wait(struct i915_request *to, >> struct i915_request *from, >> @@ -1153,6 +1159,9 @@ emit_semaphore_wait(struct i915_request *to, >> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >> struct i915_sw_fence *wait = &to->submit; >> + if (!can_use_semaphore_wait(to, from)) >> + goto await_fence; >> + >> if (!intel_context_use_semaphores(to->context)) >> goto await_fence; >> @@ -1256,7 +1265,8 @@ __i915_request_await_execution(struct >> i915_request *to, >> * immediate execution, and so we must wait until it reaches the >> * active slot. >> */ >> - if (intel_engine_has_semaphores(to->engine) && >> + if (can_use_semaphore_wait(to, from) && >> + intel_engine_has_semaphores(to->engine) && >> !i915_request_has_initial_breadcrumb(to)) { >> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >> if (err < 0) ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-10-05 13:05 ` Thomas Hellström 2021-10-05 14:55 ` Tvrtko Ursulin @ 2021-10-13 12:06 ` Daniel Vetter 2021-10-13 16:02 ` Tvrtko Ursulin 1 sibling, 1 reply; 24+ messages in thread From: Daniel Vetter @ 2021-10-13 12:06 UTC (permalink / raw) To: Thomas Hellström Cc: Tvrtko Ursulin, Intel-gfx, dri-devel, Tvrtko Ursulin, Daniel Vetter, Matthew Auld On Tue, Oct 05, 2021 at 03:05:25PM +0200, Thomas Hellström wrote: > Hi, Tvrtko, > > On 10/5/21 13:31, Tvrtko Ursulin wrote: > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > > > In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) > > when rendering is done on Intel dgfx and scanout/composition on Intel > > igfx. > > > > Before this patch the driver was not quite ready for that setup, mainly > > because it was able to emit a semaphore wait between the two GPUs, which > > results in deadlocks because semaphore target location in HWSP is neither > > shared between the two, nor mapped in both GGTT spaces. > > > > To fix it the patch adds an additional check to a couple of relevant code > > paths in order to prevent using semaphores for inter-engine > > synchronisation when relevant objects are not in the same GGTT space. > > > > v2: > > * Avoid adding rq->i915. (Chris) > > > > v3: > > * Use GGTT which describes the limit more precisely. > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch> > > Cc: Matthew Auld <matthew.auld@intel.com> > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> > > An IMO pretty important bugfix. I read up a bit on the previous discussion > on this, and from what I understand the other two options were > > 1) Ripping out the semaphore code, > 2) Consider dma-fences from other instances of the same driver as foreign. > > For imported dma-bufs we do 2), but particularly with lmem and p2p that's a > more straightforward decision. 
> > I don't think 1) is a reasonable approach to fix this bug, (but perhaps as a > general cleanup?), and for 2) yes I guess we might end up doing that, unless > we find some real benefits in treating same-driver-separate-device > dma-fences as local, but for this particular bug, IMO this is a reasonable > fix. The foreign dma-fences have uapi impact, which Tvrtko shrugged off as "it's a good idea", and no, it's really just not. So we still need to do this properly. > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> But I'm also ok with just merging this as-is so the situation doesn't become too entertaining. -Daniel > > > > > > --- > > drivers/gpu/drm/i915/i915_request.c | 12 +++++++++++- > > 1 file changed, 11 insertions(+), 1 deletion(-) > > > > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c > > index 79da5eca60af..4f189982f67e 100644 > > --- a/drivers/gpu/drm/i915/i915_request.c > > +++ b/drivers/gpu/drm/i915/i915_request.c > > @@ -1145,6 +1145,12 @@ __emit_semaphore_wait(struct i915_request *to, > > return 0; > > } > > +static bool > > +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) > > +{ > > + return to->engine->gt->ggtt == from->engine->gt->ggtt; > > +} > > + > > static int > > emit_semaphore_wait(struct i915_request *to, > > struct i915_request *from, > > @@ -1153,6 +1159,9 @@ emit_semaphore_wait(struct i915_request *to, > > const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; > > struct i915_sw_fence *wait = &to->submit; > > + if (!can_use_semaphore_wait(to, from)) > > + goto await_fence; > > + > > if (!intel_context_use_semaphores(to->context)) > > goto await_fence; > > @@ -1256,7 +1265,8 @@ __i915_request_await_execution(struct i915_request *to, > > * immediate execution, and so we must wait until it reaches the > > * active slot. 
> > */ > > - if (intel_engine_has_semaphores(to->engine) && > > + if (can_use_semaphore_wait(to, from) && > > + intel_engine_has_semaphores(to->engine) && > > !i915_request_has_initial_breadcrumb(to)) { > > err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); > > if (err < 0) -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup 2021-10-13 12:06 ` Daniel Vetter @ 2021-10-13 16:02 ` Tvrtko Ursulin 0 siblings, 0 replies; 24+ messages in thread From: Tvrtko Ursulin @ 2021-10-13 16:02 UTC (permalink / raw) To: Daniel Vetter, Thomas Hellström Cc: Intel-gfx, dri-devel, Tvrtko Ursulin, Daniel Vetter, Matthew Auld On 13/10/2021 13:06, Daniel Vetter wrote: > On Tue, Oct 05, 2021 at 03:05:25PM +0200, Thomas Hellström wrote: >> Hi, Tvrtko, >> >> On 10/5/21 13:31, Tvrtko Ursulin wrote: >>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>> >>> In short this makes i915 work for hybrid setups (DRI_PRIME=1 with Mesa) >>> when rendering is done on Intel dgfx and scanout/composition on Intel >>> igfx. >>> >>> Before this patch the driver was not quite ready for that setup, mainly >>> because it was able to emit a semaphore wait between the two GPUs, which >>> results in deadlocks because semaphore target location in HWSP is neither >>> shared between the two, nor mapped in both GGTT spaces. >>> >>> To fix it the patch adds an additional check to a couple of relevant code >>> paths in order to prevent using semaphores for inter-engine >>> synchronisation when relevant objects are not in the same GGTT space. >>> >>> v2: >>> * Avoid adding rq->i915. (Chris) >>> >>> v3: >>> * Use GGTT which describes the limit more precisely. >>> >>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> >>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> >>> Cc: Matthew Auld <matthew.auld@intel.com> >>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> >> >> An IMO pretty important bugfix. I read up a bit on the previous discussion >> on this, and from what I understand the other two options were >> >> 1) Ripping out the semaphore code, >> 2) Consider dma-fences from other instances of the same driver as foreign. >> >> For imported dma-bufs we do 2), but particularly with lmem and p2p that's a >> more straightforward decision. 
>> >> I don't think 1) is a reasonable approach to fix this bug, (but perhaps as a >> general cleanup?), and for 2) yes I guess we might end up doing that, unless >> we find some real benefits in treating same-driver-separate-device >> dma-fences as local, but for this particular bug, IMO this is a reasonable >> fix. > > The foreign dma-fences have uapi impact, which Tvrtko shrugged off as > "it's a good idea", and no, it's really just not. So we still need to do > this properly. I always said let's merge the fix and discuss it. The fix only removed one failure mode and did not introduce any of the new issues you are worried about; they were all already there. So let's start the discussion: why is it not a good idea to extend the concept of priority inheritance to the hybrid case? Today we can have a high-priority compositor waiting for client rendering, or even I915_PRIORITY_DISPLAY, which I _think_ somehow ties into page flips with full-screen stuff, and with igpu we do priority inheritance in those cases. Why is it a bad idea to do the same in the hybrid setup? Regards, Tvrtko > >> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> > > But I'm also ok with just merging this as-is so the situation doesn't > become too entertaining. 
> -Daniel > >> >> >> >> >> >>> --- >>> drivers/gpu/drm/i915/i915_request.c | 12 +++++++++++- >>> 1 file changed, 11 insertions(+), 1 deletion(-) >>> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c >>> index 79da5eca60af..4f189982f67e 100644 >>> --- a/drivers/gpu/drm/i915/i915_request.c >>> +++ b/drivers/gpu/drm/i915/i915_request.c >>> @@ -1145,6 +1145,12 @@ __emit_semaphore_wait(struct i915_request *to, >>> return 0; >>> } >>> +static bool >>> +can_use_semaphore_wait(struct i915_request *to, struct i915_request *from) >>> +{ >>> + return to->engine->gt->ggtt == from->engine->gt->ggtt; >>> +} >>> + >>> static int >>> emit_semaphore_wait(struct i915_request *to, >>> struct i915_request *from, >>> @@ -1153,6 +1159,9 @@ emit_semaphore_wait(struct i915_request *to, >>> const intel_engine_mask_t mask = READ_ONCE(from->engine)->mask; >>> struct i915_sw_fence *wait = &to->submit; >>> + if (!can_use_semaphore_wait(to, from)) >>> + goto await_fence; >>> + >>> if (!intel_context_use_semaphores(to->context)) >>> goto await_fence; >>> @@ -1256,7 +1265,8 @@ __i915_request_await_execution(struct i915_request *to, >>> * immediate execution, and so we must wait until it reaches the >>> * active slot. >>> */ >>> - if (intel_engine_has_semaphores(to->engine) && >>> + if (can_use_semaphore_wait(to, from) && >>> + intel_engine_has_semaphores(to->engine) && >>> !i915_request_has_initial_breadcrumb(to)) { >>> err = __emit_semaphore_wait(to, from, from->fence.seqno - 1); >>> if (err < 0) > ^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2021-10-13 16:04 UTC | newest] Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2021-08-27 13:30 [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup Tvrtko Ursulin 2021-08-27 13:30 ` [Intel-gfx] " Tvrtko Ursulin 2021-08-27 13:50 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork 2021-08-27 14:21 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork 2021-08-27 14:39 ` [PATCH v2] " Tvrtko Ursulin 2021-08-27 14:39 ` [Intel-gfx] " Tvrtko Ursulin 2021-08-27 14:44 ` Tvrtko Ursulin 2021-08-30 8:26 ` Daniel Vetter 2021-08-31 9:15 ` Tvrtko Ursulin 2021-08-31 12:43 ` Daniel Vetter 2021-08-31 13:18 ` Tvrtko Ursulin 2021-09-02 14:33 ` Daniel Vetter 2021-09-02 15:01 ` Tvrtko Ursulin 2021-09-08 17:06 ` Daniel Vetter 2021-09-09 8:26 ` Tvrtko Ursulin 2021-08-27 15:03 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) Patchwork 2021-08-27 15:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork 2021-08-27 17:35 ` [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup Patchwork 2021-08-27 18:25 ` [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup (rev2) Patchwork 2021-10-05 11:31 [PATCH] drm/i915: Handle Intel igfx + Intel dgfx hybrid graphics setup Tvrtko Ursulin 2021-10-05 13:05 ` Thomas Hellström 2021-10-05 14:55 ` Tvrtko Ursulin 2021-10-13 12:06 ` Daniel Vetter 2021-10-13 16:02 ` Tvrtko Ursulin