* [PATCH 0/2] GuC submission / DRM scheduler integration plan + new uAPI
@ 2021-06-11 23:40 ` Matthew Brost
0 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2021-06-11 23:40 UTC (permalink / raw)
To: intel-gfx, dri-devel
Cc: matthew.brost, tony.ye, tvrtko.ursulin, daniele.ceraolospurio,
carl.zhang, jason.ekstrand, michal.mrozek, jon.bloomfield,
mesa-dev, daniel.vetter, christian.koenig, john.c.harrison
Subject and patches say it all.
v2: Address comments, patches have details of changes
v3: Address comments, patches have details of changes
v4: Address comments, patches have details of changes
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Matthew Brost (2):
drm/doc/rfc: i915 GuC submission / DRM scheduler
drm/doc/rfc: i915 new parallel submission uAPI plan
Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++
Documentation/gpu/rfc/i915_scheduler.rst | 148 ++++++++++++++++++
Documentation/gpu/rfc/index.rst | 4 +
3 files changed, 269 insertions(+)
create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
--
2.28.0
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
@ 2021-06-11 23:40 ` Matthew Brost
0 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2021-06-11 23:40 UTC (permalink / raw)
To: intel-gfx, dri-devel
Cc: matthew.brost, tony.ye, tvrtko.ursulin, daniele.ceraolospurio,
carl.zhang, jason.ekstrand, michal.mrozek, jon.bloomfield,
mesa-dev, daniel.vetter, christian.koenig, john.c.harrison
Add entry for i915 GuC submission / DRM scheduler integration plan.
Follow up patch with details of new parallel submission uAPI to come.
v2:
(Daniel Vetter)
- Expand explanation of why bonding isn't supported for GuC
submission
- CC some of the DRM scheduler maintainers
- Add priority inheritance / boosting use case
- Add reasoning for removing in order assumptions
(Daniel Stone)
- Add links to priority spec
v4:
(Tvrtko)
- Add TODOs section
(Daniel Vetter)
- Pull in 1 line from following patch
Cc: Christian König <christian.koenig@amd.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Dave Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Dave Airlie <airlied@redhat.com>
---
Documentation/gpu/rfc/i915_scheduler.rst | 91 ++++++++++++++++++++++++
Documentation/gpu/rfc/index.rst | 4 ++
2 files changed, 95 insertions(+)
create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
new file mode 100644
index 000000000000..7acd386a6b49
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -0,0 +1,91 @@
+=========================================
+I915 GuC Submission/DRM Scheduler Section
+=========================================
+
+Upstream plan
+=============
+For upstream the overall plan for landing GuC submission and integrating the
+i915 with the DRM scheduler is:
+
+* Merge basic GuC submission
+ * Basic submission support for all gen11+ platforms
+ * Not enabled by default on any current platforms but can be enabled via
+ modparam enable_guc
+ * Lots of rework will need to be done to integrate with the DRM scheduler,
+ so there is no need to nit-pick everything in the code; it just needs to be
+ functional, have no major coding style / layering errors, and not regress
+ execlists
+ * Update IGTs / selftests as needed to work with GuC submission
+ * Enable CI on supported platforms for a baseline
+ * Rework / get CI healthy for GuC submission in place as needed
+* Merge new parallel submission uAPI
+ * Bonding uAPI completely incompatible with GuC submission, plus it has
+ severe design issues in general, which is why we want to retire it no
+ matter what
+ * New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
+ which configures a slot with N contexts
+ * After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
+ a slot in a single execbuf IOCTL and the batches run on the GPU in
+ parallel
+ * Initially only for GuC submission but execlists can be supported if
+ needed
+* Convert the i915 to use the DRM scheduler
+ * GuC submission backend fully integrated with DRM scheduler
+ * All request queues removed from backend (e.g. all backpressure
+ handled in DRM scheduler)
+ * Resets / cancels hook in DRM scheduler
+ * Watchdog hooks into DRM scheduler
+ * Lots of complexity of the GuC backend can be pulled out once
+ integrated with DRM scheduler (e.g. state machine gets
+ simpler, locking gets simpler, etc...)
+ * Execlists backend will be the minimum required to hook into the DRM scheduler
+ * Legacy interface
+ * Features like timeslicing / preemption / virtual engines would
+ be difficult to integrate with the DRM scheduler and these
+ features are not required for GuC submission as the GuC does
+ these things for us
+ * ROI low on fully integrating into DRM scheduler
+ * Fully integrating would add lots of complexity to DRM
+ scheduler
+ * Port i915 priority inheritance / boosting feature in DRM scheduler
+ * Used for i915 page flip, may be useful to other DRM drivers as
+ well
+ * Will be an optional feature in the DRM scheduler
+ * Remove in-order completion assumptions from DRM scheduler
+ * Even when using the DRM scheduler the backends will handle
+ preemption, timeslicing, etc... so it is possible for jobs to
+ finish out of order
+ * Pull out i915 priority levels and use DRM priority levels
+ * Optimize DRM scheduler as needed
+
+TODOs for GuC submission upstream
+=================================
+
+* Need an update to GuC firmware / i915 to enable error state capture
+* Open source tool to decode GuC logs
+* Public GuC spec
+
+New uAPI for basic GuC submission
+=================================
+No major changes are required to the uAPI for basic GuC submission. The only
+change is a new scheduler attribute: I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP.
+This attribute indicates the 2k i915 user priority levels are statically mapped
+into 3 levels as follows:
+
+* -1k to -1 Low priority
+* 0 Medium priority
+* 1 to 1k High priority
+
+This is needed because the GuC only has 4 priority bands. The highest priority
+band is reserved for the kernel. This aligns with the DRM scheduler priority
+levels too.
+
+Spec references:
+----------------
+* https://www.khronos.org/registry/EGL/extensions/IMG/EGL_IMG_context_priority.txt
+* https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap5.html#devsandqueues-priority
+* https://spec.oneapi.com/level-zero/latest/core/api.html#ze-command-queue-priority-t
+
+New parallel submission uAPI
+============================
+Details to come in a following patch.
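The static priority mapping described above can be sketched in plain C (a hypothetical userspace helper, not part of the patch; the enum and function names are illustrative only):

```c
#include <assert.h>

/* Hypothetical helper, not part of the patch: how the 2k i915 user
 * priority levels collapse into the 3 bands advertised by
 * I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP. */
enum static_prio { PRIO_LOW, PRIO_MEDIUM, PRIO_HIGH };

static enum static_prio map_user_priority(int prio /* -1023..1023 */)
{
	if (prio < 0)
		return PRIO_LOW;	/* -1k to -1 */
	if (prio == 0)
		return PRIO_MEDIUM;	/* 0 */
	return PRIO_HIGH;		/* 1 to 1k */
}
```

The fourth (highest) GuC priority band is not reachable from userspace in this sketch, matching the note that it is reserved for the kernel.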
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 05670442ca1b..91e93a705230 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -19,3 +19,7 @@ host such documentation:
.. toctree::
i915_gem_lmem.rst
+
+.. toctree::
+
+ i915_scheduler.rst
--
2.28.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
@ 2021-06-11 23:40 ` Matthew Brost
0 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2021-06-11 23:40 UTC (permalink / raw)
To: intel-gfx, dri-devel
Cc: matthew.brost, tony.ye, tvrtko.ursulin, daniele.ceraolospurio,
carl.zhang, jason.ekstrand, michal.mrozek, jon.bloomfield,
mesa-dev, daniel.vetter, christian.koenig, john.c.harrison
Add entry for i915 new parallel submission uAPI plan.
v2:
(Daniel Vetter):
- Expand logical order explaination
- Add dummy header
- Only allow N BBs in execbuf IOCTL
- Configure parallel submission per slot not per gem context
v3:
(Marcin Ślusarz):
- Lots of typos / bad English fixed
(Tvrtko Ursulin):
- Consistent pseudo code, clean up wording in descriptions
v4:
(Daniel Vetter)
- Drop flags
- Add kernel doc
- Reword a few things / fix typos
(Tvrtko)
- Reword a few things / fix typos
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tony Ye <tony.ye@intel.com>
CC: Carl Zhang <carl.zhang@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++++++
Documentation/gpu/rfc/i915_scheduler.rst | 59 ++++++++-
2 files changed, 175 insertions(+), 1 deletion(-)
create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index 000000000000..c22af3a359e4
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,117 @@
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
+
+/**
+ * struct drm_i915_context_engines_parallel_submit - Configure engine for
+ * parallel submission.
+ *
+ * Setup a slot in the context engine map to allow multiple BBs to be submitted
+ * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the GPU
+ * in parallel. Multiple hardware contexts are created internally in the i915 to
+ * run these BBs. Once a slot is configured for N BBs, only N BBs can be
+ * submitted in each execbuf IOCTL; this is implicit behavior, i.e. the user
+ * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL knows how
+ * many BBs there are based on the slot's configuration. The N BBs are the last
+ * N buffer objects or first N if I915_EXEC_BATCH_FIRST is set.
+ *
+ * The default placement behavior is to create implicit bonds between each
+ * context if each context maps to more than 1 physical engine (e.g. context is
+ * a virtual engine). Also we only allow contexts of the same engine class and
+ * these contexts must be in logically contiguous order. Examples of the
+ * placement behavior are described below. Lastly, the default is to not allow
+ * BBs to be preempted mid-BB; rather, coordinated preemption is inserted on
+ * all hardware contexts between each set of BBs. Flags may be added in the
+ * future to change both of these default behaviors.
+ *
+ * Returns -EINVAL if hardware context placement configuration is invalid or if
+ * the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if extension isn't supported on the platform / submission
+ * interface.
+ *
+ * .. code-block::
+ *
+ * Example 1 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=1,
+ * engines=CS[0],CS[1])
+ *
+ * Results in the following valid placement:
+ * CS[0], CS[1]
+ *
+ * Example 2 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS[0],CS[2],CS[1],CS[3])
+ *
+ * Results in the following valid placements:
+ * CS[0], CS[1]
+ * CS[2], CS[3]
+ *
+ * This can also be thought of as 2 virtual engines described by a 2-D array
+ * in the engines field, with bonds placed between each index of the
+ * virtual engines. e.g. CS[0] is bonded to CS[1], CS[2] is bonded to
+ * CS[3].
+ * VE[0] = CS[0], CS[2]
+ * VE[1] = CS[1], CS[3]
+ *
+ * Example 3 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS[0],CS[1],CS[1],CS[3])
+ *
+ * Results in the following valid and invalid placements:
+ * CS[0], CS[1]
+ * CS[1], CS[3] - Not logically contiguous, returns -EINVAL
+ */
+struct drm_i915_context_engines_parallel_submit {
+ /**
+ * @base: base user extension.
+ */
+ struct i915_user_extension base;
+
+ /**
+ * @engine_index: slot for parallel engine
+ */
+ __u16 engine_index;
+
+ /**
+ * @width: number of contexts per parallel engine
+ */
+ __u16 width;
+
+ /**
+ * @num_siblings: number of siblings per context
+ */
+ __u16 num_siblings;
+
+ /**
+ * @mbz16: reserved for future use; must be zero
+ */
+ __u16 mbz16;
+
+ /**
+ * @flags: all undefined flags must be zero; currently no flags are defined
+ */
+ __u64 flags;
+
+ /**
+ * @mbz64: reserved for future use; must be zero
+ */
+ __u64 mbz64[3];
+
+ /**
+ * @engines: 2-d array of engine instances to configure parallel engine
+ *
+ * length = width (i) * num_siblings (j)
+ * index = j + i * num_siblings
+ */
+ struct i915_engine_class_instance engines[0];
+
+} __attribute__ ((packed));
+
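The index formula in the engines[] kernel-doc above can be sketched as a small helper (hypothetical, not part of the uAPI):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper, not part of the uAPI: index into the flattened
 * engines[] 2-D array, following the kernel-doc formula
 * index = j + i * num_siblings (i = context, j = sibling). */
static size_t engines_index(size_t i, size_t j, size_t num_siblings)
{
	return j + i * num_siblings;
}

/* For Example 2 (width=2, num_siblings=2, engines=CS[0],CS[2],CS[1],CS[3]):
 * context 0's siblings sit at indices 0 and 1 (CS[0], CS[2]) and context 1's
 * at indices 2 and 3 (CS[1], CS[3]), i.e. VE[0] = CS[0], CS[2] and
 * VE[1] = CS[1], CS[3] as the kernel-doc comment states. */
```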
diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
index 7acd386a6b49..63849b50e663 100644
--- a/Documentation/gpu/rfc/i915_scheduler.rst
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -88,4 +88,61 @@ Spec references:
New parallel submission uAPI
============================
-Details to come in a following patch.
+The existing bonding uAPI is completely broken with GuC submission because
+whether a submission is a single context submit or parallel submit isn't known
+until execbuf time, when it is activated via the I915_SUBMIT_FENCE. To submit multiple
+contexts in parallel with the GuC the context must be explicitly registered with
+N contexts and all N contexts must be submitted in a single command to the GuC.
+The GuC interfaces do not support dynamically changing between N contexts as the
+bonding uAPI does. Hence the need for a new parallel submission interface. Also
+the legacy bonding uAPI is quite confusing and not intuitive at all. Furthermore
+I915_SUBMIT_FENCE is by design a future fence, so not really something we should
+continue to support.
+
+The new parallel submission uAPI consists of 3 parts:
+
+* Export engines logical mapping
+* A 'set_parallel' extension to configure contexts for parallel
+ submission
+* Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
+
+Export engines logical mapping
+------------------------------
+Certain use cases require BBs to be placed on engine instances in logical order
+(e.g. split-frame on gen11+). The logical mapping of engine instances can change
+based on fusing. Rather than making UMDs aware of fusing, simply expose the
+logical mapping with the existing query engine info IOCTL. Also the GuC
+submission interface currently only supports submitting multiple contexts to
+engines in logical order, which is a new requirement compared to execlists.
+Lastly, all current platforms have at most 2 engine instances and the logical
+order is the same as uAPI order. This will change on platforms with more than 2
+engine instances.
+
+A single bit will be added to drm_i915_engine_info.flags indicating that the
+logical instance has been returned and a new field,
+drm_i915_engine_info.logical_instance, returns the logical instance.
+
+A 'set_parallel' extension to configure contexts for parallel submission
+------------------------------------------------------------------------
+The 'set_parallel' extension configures a slot for parallel submission of N BBs.
+It is a setup step that must be called before using any of the contexts. See
+I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE or I915_CONTEXT_ENGINES_EXT_BOND for
+similar existing examples. Once a slot is configured for parallel submission the
+execbuf2 IOCTL can be called submitting N BBs in a single IOCTL. Initially only
+GuC submission is supported. Execlists support can be added later if needed.
+
+Add I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT and
+drm_i915_context_engines_parallel_submit to the uAPI to implement this
+extension.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_parallel_execbuf.h
+ :functions: drm_i915_context_engines_parallel_submit
+
+Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
+-------------------------------------------------------------------
+Contexts that have been configured with the 'set_parallel' extension can only
+submit N BBs in a single execbuf2 IOCTL. The BBs are either the last N objects
+in the drm_i915_gem_exec_object2 list or the first N if I915_EXEC_BATCH_FIRST
+is set. The number of BBs is implicit based on the slot submitted and how it has
+been configured by 'set_parallel' or other extensions. No uAPI changes are
+required to execbuf2 IOCTL.
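The implicit BB selection described above can be sketched as follows (hypothetical helper, not the actual i915 code):

```c
#include <assert.h>

/* Hypothetical helper, not the actual i915 code: index of the first batch
 * buffer among buffer_count execbuf2 objects for a slot configured for
 * n_bbs parallel BBs. The BBs are the first N objects when
 * I915_EXEC_BATCH_FIRST is set, otherwise the last N. */
static unsigned int first_bb_index(unsigned int buffer_count,
				   unsigned int n_bbs, int batch_first)
{
	return batch_first ? 0 : buffer_count - n_bbs;
}
```

Note n_bbs itself is never passed in by the user; it comes from the slot's 'set_parallel' configuration (the width field), which is what makes the behavior implicit.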
--
2.28.0
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [Intel-gfx] [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan
@ 2021-06-11 23:40 ` Matthew Brost
0 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2021-06-11 23:40 UTC (permalink / raw)
To: intel-gfx, dri-devel
Cc: carl.zhang, jason.ekstrand, mesa-dev, daniel.vetter, christian.koenig
Add entry for i915 new parallel submission uAPI plan.
v2:
(Daniel Vetter):
- Expand logical order explaination
- Add dummy header
- Only allow N BBs in execbuf IOCTL
- Configure parallel submission per slot not per gem context
v3:
(Marcin Ślusarz):
- Lot's of typos / bad english fixed
(Tvrtko Ursulin):
- Consistent pseudo code, clean up wording in descriptions
v4:
(Daniel Vetter)
- Drop flags
- Add kernel doc
- Reword a few things / fix typos
(Tvrtko)
- Reword a few things / fix typos
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tony Ye <tony.ye@intel.com>
CC: Carl Zhang <carl.zhang@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++++++
Documentation/gpu/rfc/i915_scheduler.rst | 59 ++++++++-
2 files changed, 175 insertions(+), 1 deletion(-)
create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h b/Documentation/gpu/rfc/i915_parallel_execbuf.h
new file mode 100644
index 000000000000..c22af3a359e4
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
@@ -0,0 +1,117 @@
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
+
+/**
+ * struct drm_i915_context_engines_parallel_submit - Configure engine for
+ * parallel submission.
+ *
+ * Setup a slot in the context engine map to allow multiple BBs to be submitted
+ * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the GPU
+ * in parallel. Multiple hardware contexts are created internally in the i915
+ * run these BBs. Once a slot is configured for N BBs only N BBs can be
+ * submitted in each execbuf IOCTL and this is implicit behavior e.g. The user
+ * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL knows how
+ * many BBs there are based on the slot's configuration. The N BBs are the last
+ * N buffer objects or first N if I915_EXEC_BATCH_FIRST is set.
+ *
+ * The default placement behavior is to create implicit bonds between each
+ * context if each context maps to more than 1 physical engine (e.g. context is
+ * a virtual engine). Also we only allow contexts of same engine class and these
+ * contexts must be in logically contiguous order. Examples of the placement
+ * behavior described below. Lastly, the default is to not allow BBs to
+ * preempted mid BB rather insert coordinated preemption on all hardware
+ * contexts between each set of BBs. Flags may be added in the future to change
+ * bott of these default behaviors.
+ *
+ * Returns -EINVAL if hardware context placement configuration is invalid or if
+ * the placement configuration isn't supported on the platform / submission
+ * interface.
+ * Returns -ENODEV if extension isn't supported on the platform / submission
+ * inteface.
+ *
+ * .. code-block::
+ *
+ * Example 1 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=1,
+ * engines=CS[0],CS[1])
+ *
+ * Results in the following valid placement:
+ * CS[0], CS[1]
+ *
+ * Example 2 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS[0],CS[2],CS[1],CS[3])
+ *
+ * Results in the following valid placements:
+ * CS[0], CS[1]
+ * CS[2], CS[3]
+ *
+ * This can also be thought of as 2 virtual engines described by 2-D array
+ * in the engines the field with bonds placed between each index of the
+ * virtual engines. e.g. CS[0] is bonded to CS[1], CS[2] is bonded to
+ * CS[3].
+ * VE[0] = CS[0], CS[2]
+ * VE[1] = CS[1], CS[3]
+ *
+ * Example 3 pseudo code:
+ * CS[X] = generic engine of same class, logical instance X
+ * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
+ * set_engines(INVALID)
+ * set_parallel(engine_index=0, width=2, num_siblings=2,
+ * engines=CS[0],CS[1],CS[1],CS[3])
+ *
+ * Results in the following valid and invalid placements:
+ * CS[0], CS[1]
+ * CS[1], CS[3] - Not logical contiguous, return -EINVAL
+ */
+struct drm_i915_context_engines_parallel_submit {
+ /**
+ * @base: base user extension.
+ */
+ struct i915_user_extension base;
+
+ /**
+ * @engine_index: slot for parallel engine
+ */
+ __u16 engine_index;
+
+ /**
+ * @width: number of contexts per parallel engine
+ */
+ __u16 width;
+
+ /**
+ * @num_siblings: number of siblings per context
+ */
+ __u16 num_siblings;
+
+ /**
+ * @mbz16: reserved for future use; must be zero
+ */
+ __u16 mbz16;
+
+ /**
+ * @flags: all undefined flags must be zero, currently not defined flags
+ */
+ __u64 flags;
+
+ /**
+ * @mbz64: reserved for future use; must be zero
+ */
+ __u64 mbz64[3];
+
+ /**
+ * @engines: 2-d array of engine instances to configure parallel engine
+ *
+ * length = width (i) * num_siblings (j)
+ * index = j + i * num_siblings
+ */
+ struct i915_engine_class_instance engines[0];
+
+} __attribute__ ((packed));
+
diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
index 7acd386a6b49..63849b50e663 100644
--- a/Documentation/gpu/rfc/i915_scheduler.rst
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -88,4 +88,61 @@ Spec references:
New parallel submission uAPI
============================
-Details to come in a following patch.
+The existing bonding uAPI is completely broken with GuC submission because
+whether a submission is a single context submit or parallel submit isn't known
+until execbuf time activated via the I915_SUBMIT_FENCE. To submit multiple
+contexts in parallel with the GuC the context must be explicitly registered with
+N contexts and all N contexts must be submitted in a single command to the GuC.
+The GuC interfaces do not support dynamically changing between N contexts as the
+bonding uAPI does. Hence the need for a new parallel submission interface. Also
+the legacy bonding uAPI is quite confusing and not intuitive at all. Furthermore
+I915_SUBMIT_FENCE is by design a future fence, so not really something we should
+continue to support.
+
+The new parallel submission uAPI consists of 3 parts:
+
+* Export engines logical mapping
+* A 'set_parallel' extension to configure contexts for parallel
+ submission
+* Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
+
+Export engines logical mapping
+------------------------------
+Certain use cases require BBs to be placed on engine instances in logical order
+(e.g. split-frame on gen11+). The logical mapping of engine instances can change
+based on fusing. Rather than making UMDs aware of fusing, simply expose the
+logical mapping via the existing query engine info IOCTL. In addition, the GuC
+submission interface currently only supports submitting multiple contexts to
+engines in logical order, which is a new requirement compared to execlists.
+Lastly, all current platforms have at most 2 engine instances, and the logical
+order matches the uAPI order. This will change on platforms with more than 2
+engine instances.
+
+A single bit will be added to drm_i915_engine_info.flags indicating that the
+logical instance has been returned and a new field,
+drm_i915_engine_info.logical_instance, returns the logical instance.
+
+A 'set_parallel' extension to configure contexts for parallel submission
+------------------------------------------------------------------------
+The 'set_parallel' extension configures a slot for parallel submission of N BBs.
+It is a setup step that must be called before using any of the contexts. See
+I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE or I915_CONTEXT_ENGINES_EXT_BOND for
+similar existing examples. Once a slot is configured for parallel submission,
+the execbuf2 IOCTL can be called to submit N BBs in a single IOCTL. Initially
+only GuC submission is supported; execlists support can be added later if needed.
+
+Add I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT and
+drm_i915_context_engines_parallel_submit to the uAPI to implement this
+extension.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_parallel_execbuf.h
+ :functions: drm_i915_context_engines_parallel_submit
+
+Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
+-------------------------------------------------------------------
+Contexts that have been configured with the 'set_parallel' extension can only
+submit N BBs in a single execbuf2 IOCTL. The BBs are either the last N objects
+in the drm_i915_gem_exec_object2 list or the first N if I915_EXEC_BATCH_FIRST
+is set. The number of BBs is implicit, based on the slot submitted and how it
+has been configured by 'set_parallel' or other extensions. No uAPI changes to
+the execbuf2 IOCTL are required.
--
2.28.0
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for GuC submission / DRM scheduler integration plan + new uAPI
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
` (2 preceding siblings ...)
(?)
@ 2021-06-11 23:59 ` Patchwork
-1 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2021-06-11 23:59 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: GuC submission / DRM scheduler integration plan + new uAPI
URL : https://patchwork.freedesktop.org/series/91417/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
2a85231e7bad drm/doc/rfc: i915 GuC submission / DRM scheduler
-:35: WARNING:BAD_SIGN_OFF: Duplicate signature
#35:
Cc: Jason Ekstrand <jason@jlekstrand.net>
-:42: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#42:
new file mode 100644
-:47: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#47: FILE: Documentation/gpu/rfc/i915_scheduler.rst:1:
+=========================================
total: 0 errors, 3 warnings, 0 checks, 98 lines checked
2ba86c355a5b drm/doc/rfc: i915 new parallel submission uAPI plan
-:39: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#39:
new file mode 100644
-:44: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#44: FILE: Documentation/gpu/rfc/i915_parallel_execbuf.h:1:
+#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
-:72: WARNING:TYPO_SPELLING: 'inteface' may be misspelled - perhaps 'interface'?
#72: FILE: Documentation/gpu/rfc/i915_parallel_execbuf.h:29:
+ * inteface.
^^^^^^^^
-:159: WARNING:PREFER_DEFINED_ATTRIBUTE_MACRO: Prefer __packed over __attribute__((packed))
#159: FILE: Documentation/gpu/rfc/i915_parallel_execbuf.h:116:
+} __attribute__ ((packed));
-:224: WARNING:REPEATED_WORD: Possible repeated word: 'in'
#224: FILE: Documentation/gpu/rfc/i915_scheduler.rst:145:
+in in the drm_i915_gem_exec_object2 list or the first N if I915_EXEC_BATCH_FIRST
total: 0 errors, 5 warnings, 0 checks, 179 lines checked
* [Intel-gfx] ✗ Fi.CI.DOCS: warning for GuC submission / DRM scheduler integration plan + new uAPI
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
` (3 preceding siblings ...)
(?)
@ 2021-06-12 0:03 ` Patchwork
-1 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2021-06-12 0:03 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: GuC submission / DRM scheduler integration plan + new uAPI
URL : https://patchwork.freedesktop.org/series/91417/
State : warning
== Summary ==
$ make htmldocs 2>&1 > /dev/null | grep i915
/home/cidrm/kernel/Documentation/gpu/rfc/i915_scheduler:138: ./Documentation/gpu/rfc/i915_parallel_execbuf.h:30: WARNING: Error in "code-block" directive:
* [Intel-gfx] ✓ Fi.CI.BAT: success for GuC submission / DRM scheduler integration plan + new uAPI
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
` (4 preceding siblings ...)
(?)
@ 2021-06-12 0:30 ` Patchwork
-1 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2021-06-12 0:30 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: GuC submission / DRM scheduler integration plan + new uAPI
URL : https://patchwork.freedesktop.org/series/91417/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10213 -> Patchwork_20351
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/index.html
Known issues
------------
Here are the changes found in Patchwork_20351 that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@gem_exec_suspend@basic-s0:
- {fi-tgl-1115g4}: [FAIL][1] ([i915#1888]) -> [PASS][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s0.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-tgl-1115g4/igt@gem_exec_suspend@basic-s0.html
* igt@i915_selftest@live@objects:
- {fi-tgl-dsi}: [DMESG-WARN][3] ([i915#2867]) -> [PASS][4] +10 similar issues
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-tgl-dsi/igt@i915_selftest@live@objects.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-tgl-dsi/igt@i915_selftest@live@objects.html
#### Warnings ####
* igt@i915_pm_rpm@basic-rte:
- fi-kbl-guc: [FAIL][5] ([i915#3049]) -> [SKIP][6] ([fdo#109271])
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-kbl-guc/igt@i915_pm_rpm@basic-rte.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-kbl-guc/igt@i915_pm_rpm@basic-rte.html
* igt@i915_selftest@live@execlists:
- fi-cfl-8109u: [INCOMPLETE][7] ([i915#3462]) -> [DMESG-FAIL][8] ([i915#3462])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-cfl-8109u/igt@i915_selftest@live@execlists.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-cfl-8109u/igt@i915_selftest@live@execlists.html
- fi-bsw-nick: [DMESG-FAIL][9] ([i915#3462]) -> [INCOMPLETE][10] ([i915#2782] / [i915#2940] / [i915#3462])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-bsw-nick/igt@i915_selftest@live@execlists.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-bsw-nick/igt@i915_selftest@live@execlists.html
* igt@runner@aborted:
- fi-skl-6600u: [FAIL][11] ([i915#1436] / [i915#2426] / [i915#3363]) -> [FAIL][12] ([i915#1436] / [i915#3363])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-skl-6600u/igt@runner@aborted.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-skl-6600u/igt@runner@aborted.html
- fi-cfl-8109u: [FAIL][13] ([i915#3363]) -> [FAIL][14] ([i915#2426] / [i915#3363])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-cfl-8109u/igt@runner@aborted.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-cfl-8109u/igt@runner@aborted.html
- fi-glk-dsi: [FAIL][15] ([i915#3363] / [k.org#202321]) -> [FAIL][16] ([i915#2426] / [i915#3363] / [k.org#202321])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-glk-dsi/igt@runner@aborted.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-glk-dsi/igt@runner@aborted.html
- fi-bdw-5557u: [FAIL][17] ([i915#3462]) -> [FAIL][18] ([i915#1602] / [i915#2029])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-bdw-5557u/igt@runner@aborted.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-bdw-5557u/igt@runner@aborted.html
- fi-kbl-soraka: [FAIL][19] ([i915#1436] / [i915#3363]) -> [FAIL][20] ([i915#1436] / [i915#2426] / [i915#3363])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-kbl-soraka/igt@runner@aborted.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-kbl-soraka/igt@runner@aborted.html
- fi-kbl-guc: [FAIL][21] ([i915#1436] / [i915#3363]) -> [FAIL][22] ([i915#1436] / [i915#2426] / [i915#3363])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-kbl-guc/igt@runner@aborted.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-kbl-guc/igt@runner@aborted.html
- fi-cml-s: [FAIL][23] ([i915#3363] / [i915#3462]) -> [FAIL][24] ([i915#2082] / [i915#2426] / [i915#3363] / [i915#3462])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-cml-s/igt@runner@aborted.html
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-cml-s/igt@runner@aborted.html
- fi-kbl-7567u: [FAIL][25] ([i915#1436] / [i915#2426] / [i915#3363]) -> [FAIL][26] ([i915#1436] / [i915#3363])
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/fi-kbl-7567u/igt@runner@aborted.html
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/fi-kbl-7567u/igt@runner@aborted.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
[i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602
[i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888
[i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
[i915#2082]: https://gitlab.freedesktop.org/drm/intel/issues/2082
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
[i915#2782]: https://gitlab.freedesktop.org/drm/intel/issues/2782
[i915#2867]: https://gitlab.freedesktop.org/drm/intel/issues/2867
[i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
[i915#3012]: https://gitlab.freedesktop.org/drm/intel/issues/3012
[i915#3049]: https://gitlab.freedesktop.org/drm/intel/issues/3049
[i915#3276]: https://gitlab.freedesktop.org/drm/intel/issues/3276
[i915#3277]: https://gitlab.freedesktop.org/drm/intel/issues/3277
[i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
[i915#3283]: https://gitlab.freedesktop.org/drm/intel/issues/3283
[i915#3363]: https://gitlab.freedesktop.org/drm/intel/issues/3363
[i915#3462]: https://gitlab.freedesktop.org/drm/intel/issues/3462
[i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
[i915#3542]: https://gitlab.freedesktop.org/drm/intel/issues/3542
[i915#3544]: https://gitlab.freedesktop.org/drm/intel/issues/3544
[i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
[k.org#202321]: https://bugzilla.kernel.org/show_bug.cgi?id=202321
Participating hosts (42 -> 37)
------------------------------
Additional (1): fi-rkl-11500t
Missing (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-bwr-2160 fi-bdw-samus fi-hsw-gt1
Build changes
-------------
* Linux: CI_DRM_10213 -> Patchwork_20351
CI-20190529: 20190529
CI_DRM_10213: b09945cfd4510dfc6d9a6a03ce22b66e7419484d @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6104: f8f81bd3752f3126a47d9dbba2d0ab29f7c17a19 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_20351: 2ba86c355a5b53947cd72f031c5368d6a10f2527 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
2ba86c355a5b drm/doc/rfc: i915 new parallel submission uAPI plan
2a85231e7bad drm/doc/rfc: i915 GuC submission / DRM scheduler
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/index.html
* [Intel-gfx] ✗ Fi.CI.IGT: failure for GuC submission / DRM scheduler integration plan + new uAPI
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
` (5 preceding siblings ...)
(?)
@ 2021-06-12 1:46 ` Patchwork
-1 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2021-06-12 1:46 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: GuC submission / DRM scheduler integration plan + new uAPI
URL : https://patchwork.freedesktop.org/series/91417/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_10213_full -> Patchwork_20351_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with Patchwork_20351_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_20351_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_20351_full:
### IGT changes ###
#### Possible regressions ####
* igt@kms_flip@2x-flip-vs-suspend@ab-hdmi-a1-hdmi-a2:
- shard-glk: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk7/igt@kms_flip@2x-flip-vs-suspend@ab-hdmi-a1-hdmi-a2.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk1/igt@kms_flip@2x-flip-vs-suspend@ab-hdmi-a1-hdmi-a2.html
### Piglit changes ###
#### Possible regressions ####
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-abs-neg-int-ivec3 (NEW):
- {pig-icl-1065g7}: NOTRUN -> [INCOMPLETE][3] +7 similar issues
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/pig-icl-1065g7/spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-abs-neg-int-ivec3.html
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-not-ivec2-int (NEW):
- {pig-icl-1065g7}: NOTRUN -> [CRASH][4]
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/pig-icl-1065g7/spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-not-ivec2-int.html
New tests
---------
New tests have been introduced between CI_DRM_10213_full and Patchwork_20351_full:
### New Piglit tests (9) ###
* spec@glsl-1.30@execution@built-in-functions@fs-asinh-vec4:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-assign-bitand-uvec4-uvec4:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitand-ivec2-int:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitand-not-ivec3-ivec3:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-abs-neg-int-ivec3:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-not-ivec2-int:
- Statuses : 1 crash(s)
- Exec time: [0.46] s
* spec@glsl-1.30@execution@built-in-functions@fs-op-bitxor-not-ivec3-int:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@vs-op-add-uvec2-uvec2:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
* spec@glsl-1.30@execution@built-in-functions@vs-op-assign-rshift-uvec3-int:
- Statuses : 1 incomplete(s)
- Exec time: [0.0] s
Known issues
------------
Here are the changes found in Patchwork_20351_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@fbdev@nullptr:
- shard-glk: [PASS][5] -> [DMESG-WARN][6] ([i915#118] / [i915#95])
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk1/igt@fbdev@nullptr.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk8/igt@fbdev@nullptr.html
* igt@gem_ctx_isolation@preservation-s3@vecs0:
- shard-apl: [PASS][7] -> [DMESG-WARN][8] ([i915#180])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-apl1/igt@gem_ctx_isolation@preservation-s3@vecs0.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@gem_ctx_isolation@preservation-s3@vecs0.html
* igt@gem_ctx_persistence@clone:
- shard-snb: NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#1099]) +6 similar issues
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-snb6/igt@gem_ctx_persistence@clone.html
* igt@gem_exec_fair@basic-none-share@rcs0:
- shard-iclb: [PASS][10] -> [FAIL][11] ([i915#2842])
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb5/igt@gem_exec_fair@basic-none-share@rcs0.html
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb7/igt@gem_exec_fair@basic-none-share@rcs0.html
* igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-iclb: NOTRUN -> [FAIL][12] ([i915#2842]) +1 similar issue
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@gem_exec_fair@basic-pace-solo@rcs0.html
* igt@gem_exec_fair@basic-throttle@rcs0:
- shard-iclb: [PASS][13] -> [FAIL][14] ([i915#2849])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb8/igt@gem_exec_fair@basic-throttle@rcs0.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@gem_exec_fair@basic-throttle@rcs0.html
* igt@gem_exec_flush@basic-batch-kernel-default-cmd:
- shard-snb: NOTRUN -> [SKIP][15] ([fdo#109271]) +342 similar issues
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-snb6/igt@gem_exec_flush@basic-batch-kernel-default-cmd.html
* igt@gem_exec_reloc@basic-wide-active@bcs0:
- shard-apl: NOTRUN -> [FAIL][16] ([i915#2389]) +3 similar issues
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl8/igt@gem_exec_reloc@basic-wide-active@bcs0.html
* igt@gem_exec_reloc@basic-wide-active@rcs0:
- shard-kbl: NOTRUN -> [FAIL][17] ([i915#2389]) +4 similar issues
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl3/igt@gem_exec_reloc@basic-wide-active@rcs0.html
* igt@gem_huc_copy@huc-copy:
- shard-apl: NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#2190])
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl3/igt@gem_huc_copy@huc-copy.html
* igt@gem_mmap_gtt@big-copy:
- shard-glk: [PASS][19] -> [FAIL][20] ([i915#307])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk3/igt@gem_mmap_gtt@big-copy.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk7/igt@gem_mmap_gtt@big-copy.html
* igt@gem_mmap_gtt@cpuset-big-copy-odd:
- shard-iclb: [PASS][21] -> [FAIL][22] ([i915#2428])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb8/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
* igt@gem_pwrite@basic-exhaustion:
- shard-apl: NOTRUN -> [WARN][23] ([i915#2658])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@gem_pwrite@basic-exhaustion.html
* igt@gem_userptr_blits@dmabuf-sync:
- shard-apl: NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#3323])
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl7/igt@gem_userptr_blits@dmabuf-sync.html
* igt@gem_userptr_blits@input-checking:
- shard-apl: NOTRUN -> [DMESG-WARN][25] ([i915#3002]) +1 similar issue
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@gem_userptr_blits@input-checking.html
* igt@gem_userptr_blits@vma-merge:
- shard-kbl: NOTRUN -> [FAIL][26] ([i915#3318])
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl1/igt@gem_userptr_blits@vma-merge.html
* igt@gen7_exec_parse@basic-allocation:
- shard-iclb: NOTRUN -> [SKIP][27] ([fdo#109289])
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@gen7_exec_parse@basic-allocation.html
* igt@gen9_exec_parse@bb-start-cmd:
- shard-iclb: NOTRUN -> [SKIP][28] ([fdo#112306])
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@gen9_exec_parse@bb-start-cmd.html
* igt@i915_pm_dc@dc6-psr:
- shard-skl: [PASS][29] -> [FAIL][30] ([i915#454])
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl2/igt@i915_pm_dc@dc6-psr.html
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl2/igt@i915_pm_dc@dc6-psr.html
* igt@i915_selftest@live@execlists:
- shard-apl: NOTRUN -> [DMESG-FAIL][31] ([i915#3462])
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl7/igt@i915_selftest@live@execlists.html
* igt@kms_big_fb@x-tiled-32bpp-rotate-270:
- shard-iclb: NOTRUN -> [SKIP][32] ([fdo#110725] / [fdo#111614])
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_big_fb@x-tiled-32bpp-rotate-270.html
* igt@kms_chamelium@hdmi-cmp-planar-formats:
- shard-kbl: NOTRUN -> [SKIP][33] ([fdo#109271] / [fdo#111827]) +3 similar issues
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl3/igt@kms_chamelium@hdmi-cmp-planar-formats.html
* igt@kms_chamelium@hdmi-edid-change-during-suspend:
- shard-apl: NOTRUN -> [SKIP][34] ([fdo#109271] / [fdo#111827]) +27 similar issues
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_chamelium@hdmi-edid-change-during-suspend.html
* igt@kms_color_chamelium@pipe-a-ctm-limited-range:
- shard-iclb: NOTRUN -> [SKIP][35] ([fdo#109284] / [fdo#111827]) +1 similar issue
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_color_chamelium@pipe-a-ctm-limited-range.html
* igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes:
- shard-snb: NOTRUN -> [SKIP][36] ([fdo#109271] / [fdo#111827]) +21 similar issues
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-snb7/igt@kms_color_chamelium@pipe-invalid-ctm-matrix-sizes.html
* igt@kms_content_protection@atomic-dpms:
- shard-apl: NOTRUN -> [TIMEOUT][37] ([i915#1319])
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_content_protection@atomic-dpms.html
* igt@kms_cursor_crc@pipe-b-cursor-512x170-offscreen:
- shard-iclb: NOTRUN -> [SKIP][38] ([fdo#109278] / [fdo#109279]) +1 similar issue
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_cursor_crc@pipe-b-cursor-512x170-offscreen.html
* igt@kms_cursor_crc@pipe-c-cursor-64x64-onscreen:
- shard-glk: [PASS][39] -> [FAIL][40] ([i915#3444])
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk9/igt@kms_cursor_crc@pipe-c-cursor-64x64-onscreen.html
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk7/igt@kms_cursor_crc@pipe-c-cursor-64x64-onscreen.html
* igt@kms_cursor_edge_walk@pipe-d-256x256-right-edge:
- shard-iclb: NOTRUN -> [SKIP][41] ([fdo#109278]) +7 similar issues
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_cursor_edge_walk@pipe-d-256x256-right-edge.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-iclb: NOTRUN -> [SKIP][42] ([fdo#109274] / [fdo#109278])
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_dp_tiled_display@basic-test-pattern:
- shard-iclb: NOTRUN -> [SKIP][43] ([i915#426])
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_dp_tiled_display@basic-test-pattern.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible:
- shard-iclb: NOTRUN -> [SKIP][44] ([fdo#109274]) +2 similar issues
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
- shard-apl: NOTRUN -> [SKIP][45] ([fdo#109271] / [i915#2672])
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt:
- shard-kbl: NOTRUN -> [SKIP][46] ([fdo#109271]) +50 similar issues
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu:
- shard-iclb: NOTRUN -> [SKIP][47] ([fdo#109280]) +10 similar issues
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu.html
* igt@kms_hdr@bpc-switch-suspend:
- shard-kbl: [PASS][48] -> [DMESG-WARN][49] ([i915#180]) +3 similar issues
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl6/igt@kms_hdr@bpc-switch-suspend.html
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl7/igt@kms_hdr@bpc-switch-suspend.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
- shard-apl: NOTRUN -> [SKIP][50] ([fdo#109271] / [i915#533]) +2 similar issues
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
- shard-apl: NOTRUN -> [FAIL][51] ([fdo#108145] / [i915#265]) +3 similar issues
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
- shard-kbl: NOTRUN -> [FAIL][52] ([i915#265])
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl3/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html
* igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
- shard-skl: [PASS][53] -> [FAIL][54] ([fdo#108145] / [i915#265])
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl3/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl7/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
* igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
- shard-apl: NOTRUN -> [FAIL][55] ([i915#265]) +1 similar issue
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl3/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
* igt@kms_plane_lowres@pipe-c-tiling-none:
- shard-iclb: NOTRUN -> [SKIP][56] ([i915#3536])
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_plane_lowres@pipe-c-tiling-none.html
* igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping:
- shard-apl: NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#2733])
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-scaler-with-clipping-clamping.html
* igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
- shard-apl: NOTRUN -> [SKIP][58] ([fdo#109271] / [i915#658]) +8 similar issues
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
- shard-kbl: NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#658])
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl1/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
- shard-iclb: NOTRUN -> [SKIP][60] ([i915#658])
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html
* igt@kms_psr@psr2_basic:
- shard-iclb: [PASS][61] -> [SKIP][62] ([fdo#109441]) +2 similar issues
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb2/igt@kms_psr@psr2_basic.html
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb8/igt@kms_psr@psr2_basic.html
* igt@kms_sysfs_edid_timing:
- shard-apl: NOTRUN -> [FAIL][63] ([IGT#2])
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl3/igt@kms_sysfs_edid_timing.html
* igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend:
- shard-kbl: [PASS][64] -> [INCOMPLETE][65] ([i915#155])
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl1/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl4/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html
* igt@kms_vblank@pipe-a-ts-continuation-suspend:
- shard-kbl: [PASS][66] -> [DMESG-WARN][67] ([i915#180] / [i915#295])
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl4/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl4/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
* igt@kms_vblank@pipe-d-ts-continuation-idle:
- shard-apl: NOTRUN -> [SKIP][68] ([fdo#109271]) +306 similar issues
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl2/igt@kms_vblank@pipe-d-ts-continuation-idle.html
* igt@kms_writeback@writeback-check-output:
- shard-apl: NOTRUN -> [SKIP][69] ([fdo#109271] / [i915#2437]) +2 similar issues
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@kms_writeback@writeback-check-output.html
* igt@nouveau_crc@ctx-flip-threshold-reset-after-capture:
- shard-iclb: NOTRUN -> [SKIP][70] ([i915#2530])
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@nouveau_crc@ctx-flip-threshold-reset-after-capture.html
* igt@perf@polling-small-buf:
- shard-skl: [PASS][71] -> [FAIL][72] ([i915#1722])
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl5/igt@perf@polling-small-buf.html
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl7/igt@perf@polling-small-buf.html
* igt@prime_nv_api@nv_i915_reimport_twice_check_flink_name:
- shard-iclb: NOTRUN -> [SKIP][73] ([fdo#109291])
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@prime_nv_api@nv_i915_reimport_twice_check_flink_name.html
* igt@prime_vgem@coherency-gtt:
- shard-iclb: NOTRUN -> [SKIP][74] ([fdo#109292])
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@prime_vgem@coherency-gtt.html
* igt@prime_vgem@fence-write-hang:
- shard-iclb: NOTRUN -> [SKIP][75] ([fdo#109295])
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@prime_vgem@fence-write-hang.html
* igt@runner@aborted:
- shard-apl: NOTRUN -> ([FAIL][76], [FAIL][77], [FAIL][78], [FAIL][79]) ([fdo#109271] / [i915#180] / [i915#3002] / [i915#3363])
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@runner@aborted.html
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl7/igt@runner@aborted.html
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@runner@aborted.html
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@runner@aborted.html
* igt@sysfs_clients@fair-7:
- shard-apl: NOTRUN -> [SKIP][80] ([fdo#109271] / [i915#2994]) +4 similar issues
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl1/igt@sysfs_clients@fair-7.html
* igt@sysfs_clients@sema-10:
- shard-kbl: NOTRUN -> [SKIP][81] ([fdo#109271] / [i915#2994])
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl3/igt@sysfs_clients@sema-10.html
* igt@sysfs_clients@sema-50:
- shard-iclb: NOTRUN -> [SKIP][82] ([i915#2994])
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb1/igt@sysfs_clients@sema-50.html
#### Possible fixes ####
* igt@gem_exec_fair@basic-none@vcs1:
- shard-kbl: [FAIL][83] ([i915#2842]) -> [PASS][84]
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl6/igt@gem_exec_fair@basic-none@vcs1.html
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl6/igt@gem_exec_fair@basic-none@vcs1.html
* igt@gem_exec_fair@basic-none@vecs0:
- shard-apl: [FAIL][85] ([i915#2842] / [i915#3468]) -> [PASS][86]
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-apl1/igt@gem_exec_fair@basic-none@vecs0.html
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-apl6/igt@gem_exec_fair@basic-none@vecs0.html
* igt@gem_exec_fair@basic-pace-share@rcs0:
- shard-tglb: [FAIL][87] ([i915#2842]) -> [PASS][88]
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html
* igt@gem_exec_suspend@basic-s3:
- shard-kbl: [DMESG-WARN][89] ([i915#180]) -> [PASS][90] +2 similar issues
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl4/igt@gem_exec_suspend@basic-s3.html
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl1/igt@gem_exec_suspend@basic-s3.html
* igt@gem_exec_whisper@basic-fds-forked:
- shard-glk: [DMESG-WARN][91] ([i915#118] / [i915#95]) -> [PASS][92]
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk6/igt@gem_exec_whisper@basic-fds-forked.html
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk6/igt@gem_exec_whisper@basic-fds-forked.html
* igt@gem_mmap_gtt@cpuset-big-copy-odd:
- shard-glk: [FAIL][93] ([i915#307]) -> [PASS][94] +1 similar issue
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk5/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk5/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
* igt@gem_mmap_gtt@cpuset-big-copy-xy:
- shard-iclb: [FAIL][95] ([i915#2428]) -> [PASS][96]
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb5/igt@gem_mmap_gtt@cpuset-big-copy-xy.html
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb2/igt@gem_mmap_gtt@cpuset-big-copy-xy.html
* igt@kms_async_flips@alternate-sync-async-flip:
- shard-glk: [FAIL][97] ([i915#2521]) -> [PASS][98]
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-glk6/igt@kms_async_flips@alternate-sync-async-flip.html
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-glk4/igt@kms_async_flips@alternate-sync-async-flip.html
* igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-ytiled:
- shard-skl: [FAIL][99] ([i915#3451]) -> [PASS][100]
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl1/igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-ytiled.html
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl9/igt@kms_draw_crc@draw-method-rgb565-mmap-cpu-ytiled.html
* igt@kms_hdr@bpc-switch-dpms:
- shard-skl: [FAIL][101] ([i915#1188]) -> [PASS][102]
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl9/igt@kms_hdr@bpc-switch-dpms.html
[102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl5/igt@kms_hdr@bpc-switch-dpms.html
* igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
- shard-skl: [FAIL][103] ([fdo#108145] / [i915#265]) -> [PASS][104]
[103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl1/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
[104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl9/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
* igt@perf@polling:
- shard-skl: [FAIL][105] ([i915#1542]) -> [PASS][106]
[105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl6/igt@perf@polling.html
[106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl9/igt@perf@polling.html
* igt@sysfs_heartbeat_interval@mixed@vecs0:
- shard-skl: [FAIL][107] ([i915#1731]) -> [PASS][108]
[107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-skl8/igt@sysfs_heartbeat_interval@mixed@vecs0.html
[108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-skl1/igt@sysfs_heartbeat_interval@mixed@vecs0.html
#### Warnings ####
* igt@i915_pm_rc6_residency@rc6-idle:
- shard-iclb: [WARN][109] ([i915#2684]) -> [WARN][110] ([i915#1804] / [i915#2684])
[109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb2/igt@i915_pm_rc6_residency@rc6-idle.html
[110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb6/igt@i915_pm_rc6_residency@rc6-idle.html
* igt@i915_selftest@live@execlists:
- shard-tglb: [INCOMPLETE][111] ([i915#3462]) -> [DMESG-FAIL][112] ([i915#3462])
[111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-tglb2/igt@i915_selftest@live@execlists.html
[112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-tglb7/igt@i915_selftest@live@execlists.html
- shard-iclb: [INCOMPLETE][113] ([i915#2782] / [i915#3462]) -> [DMESG-FAIL][114] ([i915#3462])
[113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb4/igt@i915_selftest@live@execlists.html
[114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb5/igt@i915_selftest@live@execlists.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1:
- shard-iclb: [SKIP][115] ([i915#658]) -> [SKIP][116] ([i915#2920]) +3 similar issues
[115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb7/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
[116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-1.html
* igt@runner@aborted:
- shard-kbl: ([FAIL][117], [FAIL][118], [FAIL][119], [FAIL][120], [FAIL][121], [FAIL][122], [FAIL][123], [FAIL][124]) ([i915#1436] / [i915#180] / [i915#1814] / [i915#3002] / [i915#3363]) -> ([FAIL][125], [FAIL][126], [FAIL][127], [FAIL][128], [FAIL][129], [FAIL][130], [FAIL][131], [FAIL][132], [FAIL][133]) ([i915#1436] / [i915#1814] / [i915#2505] / [i915#3002] / [i915#3363] / [i915#602])
[117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl7/igt@runner@aborted.html
[118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl3/igt@runner@aborted.html
[119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl7/igt@runner@aborted.html
[120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl1/igt@runner@aborted.html
[121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl4/igt@runner@aborted.html
[122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl3/igt@runner@aborted.html
[123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl4/igt@runner@aborted.html
[124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-kbl4/igt@runner@aborted.html
[125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl7/igt@runner@aborted.html
[126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl7/igt@runner@aborted.html
[127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl4/igt@runner@aborted.html
[128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl6/igt@runner@aborted.html
[129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl2/igt@runner@aborted.html
[130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl7/igt@runner@aborted.html
[131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl4/igt@runner@aborted.html
[132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl4/igt@runner@aborted.html
[133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-kbl7/igt@runner@aborted.html
- shard-iclb: ([FAIL][134], [FAIL][135], [FAIL][136]) ([i915#2782] / [i915#3002]) -> ([FAIL][137], [FAIL][138], [FAIL][139]) ([i915#2426] / [i915#2782] / [i915#3002])
[134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb5/igt@runner@aborted.html
[135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb4/igt@runner@aborted.html
[136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-iclb1/igt@runner@aborted.html
[137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb7/igt@runner@aborted.html
[138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb7/igt@runner@aborted.html
[139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-iclb5/igt@runner@aborted.html
- shard-tglb: ([FAIL][140], [FAIL][141], [FAIL][142]) ([i915#1436] / [i915#2966] / [i915#3002]) -> ([FAIL][143], [FAIL][144], [FAIL][145]) ([i915#1436] / [i915#2426] / [i915#2966] / [i915#3002])
[140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-tglb2/igt@runner@aborted.html
[141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-tglb2/igt@runner@aborted.html
[142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10213/shard-tglb1/igt@runner@aborted.html
[143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/shard-tglb7/igt@runner@aborted.html
[144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20351/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Intel-gfx] [PATCH 0/2] GuC submission / DRM scheduler integration plan + new uAPI
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
@ 2021-06-17 17:00 ` Daniel Vetter
0 siblings, 0 replies; 14+ messages in thread
From: Daniel Vetter @ 2021-06-17 17:00 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-gfx, dri-devel, carl.zhang, jason.ekstrand, daniel.vetter,
mesa-dev, christian.koenig
On Fri, Jun 11, 2021 at 04:40:42PM -0700, Matthew Brost wrote:
> Subject and patches say it all.
>
> v2: Address comments, patches have details of changes
> v3: Address comments, patches have details of changes
> v4: Address comments, patches have details of changes
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Imo ready (well overdue) for merging, please annoy Carl or someone from
media for an ack and then ask John or Daniele to merge it into
drm-intel-gt-next.
-Daniel
>
> Matthew Brost (2):
> drm/doc/rfc: i915 GuC submission / DRM scheduler
> drm/doc/rfc: i915 new parallel submission uAPI plan
>
> Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++
> Documentation/gpu/rfc/i915_scheduler.rst | 148 ++++++++++++++++++
> Documentation/gpu/rfc/index.rst | 4 +
> 3 files changed, 269 insertions(+)
> create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
> create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
>
> --
> 2.28.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
@ 2021-06-18 17:58 ` Ye, Tony
0 siblings, 0 replies; 14+ messages in thread
From: Ye, Tony @ 2021-06-18 17:58 UTC (permalink / raw)
To: Brost, Matthew, intel-gfx, dri-devel
Cc: Ursulin, Tvrtko, Ceraolo Spurio, Daniele, Zhang, Carl, Ekstrand,
Jason, Mrozek, Michal, Bloomfield, Jon, mesa-dev, Vetter, Daniel,
christian.koenig, Harrison, John C
Acked-by: Tony Ye <tony.ye@intel.com>
Regards,
Tony
On 6/11/2021 4:40 PM, Matthew Brost wrote:
> Add entry for i915 new parallel submission uAPI plan.
>
> v2:
> (Daniel Vetter):
> - Expand logical order explanation
> - Add dummy header
> - Only allow N BBs in execbuf IOCTL
> - Configure parallel submission per slot not per gem context
> v3:
> (Marcin Ślusarz):
> - Lots of typos / bad English fixed
> (Tvrtko Ursulin):
> - Consistent pseudo code, clean up wording in descriptions
> v4:
> (Daniel Vetter)
> - Drop flags
> - Add kernel doc
> - Reword a few things / fix typos
> (Tvrtko)
> - Reword a few things / fix typos
>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Tony Ye <tony.ye@intel.com>
> CC: Carl Zhang <carl.zhang@intel.com>
> Cc: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Jason Ekstrand <jason@jlekstrand.net>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
> Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++++++
> Documentation/gpu/rfc/i915_scheduler.rst | 59 ++++++++-
> 2 files changed, 175 insertions(+), 1 deletion(-)
> create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
>
> diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> new file mode 100644
> index 000000000000..c22af3a359e4
> --- /dev/null
> +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> @@ -0,0 +1,117 @@
> +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
> +
> +/**
> + * struct drm_i915_context_engines_parallel_submit - Configure engine for
> + * parallel submission.
> + *
> + * Setup a slot in the context engine map to allow multiple BBs to be submitted
> + * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the GPU
> + * in parallel. Multiple hardware contexts are created internally in the i915
> + * to run these BBs. Once a slot is configured for N BBs, only N BBs can be
> + * submitted in each execbuf IOCTL and this is implicit behavior, e.g. the user
> + * doesn't tell the execbuf IOCTL there are N BBs; the execbuf IOCTL knows how
> + * many BBs there are based on the slot's configuration. The N BBs are the last
> + * N buffer objects, or the first N if I915_EXEC_BATCH_FIRST is set.
> + *
> + * The default placement behavior is to create implicit bonds between each
> + * context if each context maps to more than 1 physical engine (e.g. the
> + * context is a virtual engine). Also we only allow contexts of the same
> + * engine class, and these contexts must be in logically contiguous order.
> + * Examples of the placement behavior are described below. Lastly, the default
> + * is to not allow BBs to be preempted mid-BB; rather, coordinated preemption
> + * is inserted on all hardware contexts between each set of BBs. Flags may be
> + * added in the future to change both of these default behaviors.
> + *
> + * Returns -EINVAL if hardware context placement configuration is invalid or if
> + * the placement configuration isn't supported on the platform / submission
> + * interface.
> + * Returns -ENODEV if extension isn't supported on the platform / submission
> + * interface.
> + *
> + * .. code-block::
> + *
> + * Example 1 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=1,
> + * engines=CS[0],CS[1])
> + *
> + * Results in the following valid placement:
> + * CS[0], CS[1]
> + *
> + * Example 2 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=2,
> + * engines=CS[0],CS[2],CS[1],CS[3])
> + *
> + * Results in the following valid placements:
> + * CS[0], CS[1]
> + * CS[2], CS[3]
> + *
> + * This can also be thought of as 2 virtual engines described by a 2-D
> + * array in the engines field, with bonds placed between each index of
> + * the virtual engines, e.g. CS[0] is bonded to CS[1] and CS[2] is
> + * bonded to CS[3].
> + * VE[0] = CS[0], CS[2]
> + * VE[1] = CS[1], CS[3]
> + *
> + * Example 3 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=2,
> + * engines=CS[0],CS[1],CS[1],CS[3])
> + *
> + * Results in the following valid and invalid placements:
> + * CS[0], CS[1]
> + * CS[1], CS[3] - Not logically contiguous, return -EINVAL
> + */
> +struct drm_i915_context_engines_parallel_submit {
> + /**
> + * @base: base user extension.
> + */
> + struct i915_user_extension base;
> +
> + /**
> + * @engine_index: slot for parallel engine
> + */
> + __u16 engine_index;
> +
> + /**
> + * @width: number of contexts per parallel engine
> + */
> + __u16 width;
> +
> + /**
> + * @num_siblings: number of siblings per context
> + */
> + __u16 num_siblings;
> +
> + /**
> + * @mbz16: reserved for future use; must be zero
> + */
> + __u16 mbz16;
> +
> + /**
> + * @flags: all undefined flags must be zero; currently no flags are defined
> + */
> + __u64 flags;
> +
> + /**
> + * @mbz64: reserved for future use; must be zero
> + */
> + __u64 mbz64[3];
> +
> + /**
> + * @engines: 2-d array of engine instances to configure parallel engine
> + *
> + * length = width (i) * num_siblings (j)
> + * index = j + i * num_siblings
> + */
> + struct i915_engine_class_instance engines[0];
> +
> +} __attribute__ ((packed));
> +
> diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
> index 7acd386a6b49..63849b50e663 100644
> --- a/Documentation/gpu/rfc/i915_scheduler.rst
> +++ b/Documentation/gpu/rfc/i915_scheduler.rst
> @@ -88,4 +88,61 @@ Spec references:
>
> New parallel submission uAPI
> ============================
> -Details to come in a following patch.
> +The existing bonding uAPI is completely broken with GuC submission because
> +whether a submission is a single context submit or parallel submit isn't known
> +until execbuf time activated via the I915_SUBMIT_FENCE. To submit multiple
> +contexts in parallel with the GuC the context must be explicitly registered with
> +N contexts and all N contexts must be submitted in a single command to the GuC.
> +The GuC interfaces do not support dynamically changing between N contexts as the
> +bonding uAPI does. Hence the need for a new parallel submission interface. Also
> +the legacy bonding uAPI is quite confusing and not intuitive at all. Furthermore
> +I915_SUBMIT_FENCE is by design a future fence, so not really something we should
> +continue to support.
> +
> +The new parallel submission uAPI consists of 3 parts:
> +
> +* Export engines logical mapping
> +* A 'set_parallel' extension to configure contexts for parallel
> + submission
> +* Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
> +
> +Export engines logical mapping
> +------------------------------
> +Certain use cases require BBs to be placed on engine instances in logical order
> +(e.g. split-frame on gen11+). The logical mapping of engine instances can change
> +based on fusing. Rather than making UMDs aware of fusing, simply expose the
> +logical mapping with the existing query engine info IOCTL. Also the GuC
> +submission interface currently only supports submitting multiple contexts to
> +engines in logical order which is a new requirement compared to execlists.
> +Lastly, all current platforms have at most 2 engine instances and the logical
> +order is the same as uAPI order. This will change on platforms with more than 2
> +engine instances.
> +
> +A single bit will be added to drm_i915_engine_info.flags indicating that the
> +logical instance has been returned and a new field,
> +drm_i915_engine_info.logical_instance, returns the logical instance.
> +
> +A 'set_parallel' extension to configure contexts for parallel submission
> +------------------------------------------------------------------------
> +The 'set_parallel' extension configures a slot for parallel submission of N BBs.
> +It is a setup step that must be called before using any of the contexts. See
> +I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE or I915_CONTEXT_ENGINES_EXT_BOND for
> +similar existing examples. Once a slot is configured for parallel submission the
> +execbuf2 IOCTL can be called submitting N BBs in a single IOCTL. Initially this
> +only supports GuC submission. Execlists support can be added later if needed.
> +
> +Add I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT and
> +drm_i915_context_engines_parallel_submit to the uAPI to implement this
> +extension.
> +
> +.. kernel-doc:: Documentation/gpu/rfc/i915_parallel_execbuf.h
> + :functions: drm_i915_context_engines_parallel_submit
> +
> +Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
> +-------------------------------------------------------------------
> +Contexts that have been configured with the 'set_parallel' extension can only
> +submit N BBs in a single execbuf2 IOCTL. The BBs are either the last N objects
> +in the drm_i915_gem_exec_object2 list or the first N if I915_EXEC_BATCH_FIRST
> +is set. The number of BBs is implicit based on the slot submitted and how it has
> +been configured by 'set_parallel' or other extensions. No uAPI changes are
> +required to execbuf2 IOCTL.
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Intel-gfx] [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan
@ 2021-06-18 17:58 ` Ye, Tony
0 siblings, 0 replies; 14+ messages in thread
From: Ye, Tony @ 2021-06-18 17:58 UTC (permalink / raw)
To: Brost, Matthew, intel-gfx, dri-devel
Cc: Zhang, Carl, Ekstrand, Jason, mesa-dev, Vetter, Daniel, christian.koenig
Acked-by: Tony Ye <tony.ye@intel.com>
Regards,
Tony
On 6/11/2021 4:40 PM, Matthew Brost wrote:
> Add entry for i915 new parallel submission uAPI plan.
>
> v2:
> (Daniel Vetter):
> - Expand logical order explaination
> - Add dummy header
> - Only allow N BBs in execbuf IOCTL
> - Configure parallel submission per slot not per gem context
> v3:
> (Marcin Ślusarz):
> - Lot's of typos / bad english fixed
> (Tvrtko Ursulin):
> - Consistent pseudo code, clean up wording in descriptions
> v4:
> (Daniel Vetter)
> - Drop flags
> - Add kernel doc
> - Reword a few things / fix typos
> (Tvrtko)
> - Reword a few things / fix typos
>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Tony Ye <tony.ye@intel.com>
> CC: Carl Zhang <carl.zhang@intel.com>
> Cc: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Jason Ekstrand <jason@jlekstrand.net>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
> Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++++++++++++++++++
> Documentation/gpu/rfc/i915_scheduler.rst | 59 ++++++++-
> 2 files changed, 175 insertions(+), 1 deletion(-)
> create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
>
> diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> new file mode 100644
> index 000000000000..c22af3a359e4
> --- /dev/null
> +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> @@ -0,0 +1,117 @@
> +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
> +
> +/**
> + * struct drm_i915_context_engines_parallel_submit - Configure engine for
> + * parallel submission.
> + *
> + * Setup a slot in the context engine map to allow multiple BBs to be submitted
> + * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the GPU
> + * in parallel. Multiple hardware contexts are created internally in the i915
> + * run these BBs. Once a slot is configured for N BBs only N BBs can be
> + * submitted in each execbuf IOCTL and this is implicit behavior e.g. The user
> + * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL knows how
> + * many BBs there are based on the slot's configuration. The N BBs are the last
> + * N buffer objects or first N if I915_EXEC_BATCH_FIRST is set.
> + *
> + * The default placement behavior is to create implicit bonds between each
> + * context if each context maps to more than 1 physical engine (e.g. context is
> + * a virtual engine). Also we only allow contexts of same engine class and these
> + * contexts must be in logically contiguous order. Examples of the placement
> + * behavior described below. Lastly, the default is to not allow BBs to
> + * preempted mid BB rather insert coordinated preemption on all hardware
> + * contexts between each set of BBs. Flags may be added in the future to change
> + * bott of these default behaviors.
> + *
> + * Returns -EINVAL if hardware context placement configuration is invalid or if
> + * the placement configuration isn't supported on the platform / submission
> + * interface.
> + * Returns -ENODEV if extension isn't supported on the platform / submission
> + * inteface.
> + *
> + * .. code-block::
> + *
> + * Example 1 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=1,
> + * engines=CS[0],CS[1])
> + *
> + * Results in the following valid placement:
> + * CS[0], CS[1]
> + *
> + * Example 2 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=2,
> + * engines=CS[0],CS[2],CS[1],CS[3])
> + *
> + * Results in the following valid placements:
> + * CS[0], CS[1]
> + * CS[2], CS[3]
> + *
> + * This can also be thought of as 2 virtual engines described by 2-D array
> + * in the engines the field with bonds placed between each index of the
> + * virtual engines. e.g. CS[0] is bonded to CS[1], CS[2] is bonded to
> + * CS[3].
> + * VE[0] = CS[0], CS[2]
> + * VE[1] = CS[1], CS[3]
> + *
> + * Example 3 pseudo code:
> + * CS[X] = generic engine of same class, logical instance X
> + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + * set_engines(INVALID)
> + * set_parallel(engine_index=0, width=2, num_siblings=2,
> + * engines=CS[0],CS[1],CS[1],CS[3])
> + *
> + * Results in the following valid and invalid placements:
> + * CS[0], CS[1]
> + * CS[1], CS[3] - Not logical contiguous, return -EINVAL
> + */
> +struct drm_i915_context_engines_parallel_submit {
> + /**
> + * @base: base user extension.
> + */
> + struct i915_user_extension base;
> +
> + /**
> + * @engine_index: slot for parallel engine
> + */
> + __u16 engine_index;
> +
> + /**
> + * @width: number of contexts per parallel engine
> + */
> + __u16 width;
> +
> + /**
> + * @num_siblings: number of siblings per context
> + */
> + __u16 num_siblings;
> +
> + /**
> + * @mbz16: reserved for future use; must be zero
> + */
> + __u16 mbz16;
> +
> + /**
> + * @flags: all undefined flags must be zero, currently not defined flags
> + */
> + __u64 flags;
> +
> + /**
> + * @mbz64: reserved for future use; must be zero
> + */
> + __u64 mbz64[3];
> +
> + /**
> + * @engines: 2-d array of engine instances to configure parallel engine
> + *
> + * length = width (i) * num_siblings (j)
> + * index = j + i * num_siblings
> + */
> + struct i915_engine_class_instance engines[0];
> +
> +} __attribute__ ((packed));
> +
> diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
> index 7acd386a6b49..63849b50e663 100644
> --- a/Documentation/gpu/rfc/i915_scheduler.rst
> +++ b/Documentation/gpu/rfc/i915_scheduler.rst
> @@ -88,4 +88,61 @@ Spec references:
>
> New parallel submission uAPI
> ============================
> -Details to come in a following patch.
> +The existing bonding uAPI is completely broken with GuC submission because
> +whether a submission is a single context submit or parallel submit isn't known
> +until execbuf time activated via the I915_SUBMIT_FENCE. To submit multiple
> +contexts in parallel with the GuC the context must be explicitly registered with
> +N contexts and all N contexts must be submitted in a single command to the GuC.
> +The GuC interfaces do not support dynamically changing between N contexts as the
> +bonding uAPI does. Hence the need for a new parallel submission interface. Also
> +the legacy bonding uAPI is quite confusing and not intuitive at all. Furthermore
> +I915_SUBMIT_FENCE is by design a future fence, so not really something we should
> +continue to support.
> +
> +The new parallel submission uAPI consists of 3 parts:
> +
> +* Export engines logical mapping
> +* A 'set_parallel' extension to configure contexts for parallel
> + submission
> +* Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
> +
> +Export engines logical mapping
> +------------------------------
> +Certain use cases require BBs to be placed on engine instances in logical order
> +(e.g. split-frame on gen11+). The logical mapping of engine instances can change
> +based on fusing. Rather than making UMDs aware of fusing, simply expose the
> +logical mapping with the existing query engine info IOCTL. Also, the GuC
> +submission interface currently only supports submitting multiple contexts to
> +engines in logical order, which is a new requirement compared to execlists.
> +Lastly, all current platforms have at most 2 engine instances and the logical
> +order is the same as uAPI order. This will change on platforms with more than 2
> +engine instances.
> +
> +A single bit will be added to drm_i915_engine_info.flags indicating that the
> +logical instance has been returned and a new field,
> +drm_i915_engine_info.logical_instance, returns the logical instance.
> +
> +A 'set_parallel' extension to configure contexts for parallel submission
> +------------------------------------------------------------------------
> +The 'set_parallel' extension configures a slot for parallel submission of N BBs.
> +It is a setup step that must be called before using any of the contexts. See
> +I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE or I915_CONTEXT_ENGINES_EXT_BOND for
> +similar existing examples. Once a slot is configured for parallel submission,
> +the execbuf2 IOCTL can be called to submit N BBs in a single IOCTL. Initially
> +only GuC submission is supported; execlists support can be added later if
> +needed.
> +
> +Add I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT and
> +drm_i915_context_engines_parallel_submit to the uAPI to implement this
> +extension.
> +
> +.. kernel-doc:: Documentation/gpu/rfc/i915_parallel_execbuf.h
> + :functions: drm_i915_context_engines_parallel_submit
> +
> +Extend execbuf2 IOCTL to support submitting N BBs in a single IOCTL
> +-------------------------------------------------------------------
> +Contexts that have been configured with the 'set_parallel' extension can only
> +submit N BBs in a single execbuf2 IOCTL. The BBs are either the last N objects
> +in the drm_i915_gem_exec_object2 list or the first N if I915_EXEC_BATCH_FIRST
> +is set. The number of BBs is implicit based on the slot submitted and how it
> +has been configured by 'set_parallel' or other extensions. No uAPI changes are
> +required to the execbuf2 IOCTL.
>
end of thread, other threads:[~2021-06-18 17:58 UTC | newest]
Thread overview: 14+ messages
2021-06-11 23:40 [PATCH 0/2] GuC submission / DRM scheduler integration plan + new uAPI Matthew Brost
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
2021-06-11 23:40 ` [PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler Matthew Brost
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
2021-06-11 23:40 ` [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan Matthew Brost
2021-06-11 23:40 ` [Intel-gfx] " Matthew Brost
2021-06-18 17:58 ` Ye, Tony
2021-06-18 17:58 ` [Intel-gfx] " Ye, Tony
2021-06-11 23:59 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for GuC submission / DRM scheduler integration plan + new uAPI Patchwork
2021-06-12 0:03 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2021-06-12 0:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-06-12 1:46 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-06-17 17:00 ` [Intel-gfx] [PATCH 0/2] " Daniel Vetter
2021-06-17 17:00 ` Daniel Vetter