From: Daniel Vetter <daniel@ffwll.ch>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	carl.zhang@intel.com, jason.ekstrand@intel.com,
	daniel.vetter@intel.com, mesa-dev@lists.freedesktop.org,
	christian.koenig@amd.com
Subject: Re: [Intel-gfx] [RFC PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler
Date: Thu, 27 May 2021 13:24:05 +0200	[thread overview]
Message-ID: <YK+BVbCFvpVR1qxj@phenom.ffwll.local> (raw)
In-Reply-To: <5a4ae6d0-cb47-fb8a-1f07-4f22f64cb919@linux.intel.com>

On Thu, May 27, 2021 at 11:06:38AM +0100, Tvrtko Ursulin wrote:
> 
> On 27/05/2021 00:33, Matthew Brost wrote:
> > Add entry for i915 GuC submission / DRM scheduler integration plan.
> > Follow up patch with details of new parallel submission uAPI to come.
> > 
> > v2:
> >   (Daniel Vetter)
> >    - Expand explanation of why bonding isn't supported for GuC
> >      submission
> >    - CC some of the DRM scheduler maintainers
> >    - Add priority inheritance / boosting use case
> >    - Add reasoning for removing in order assumptions
> >   (Daniel Stone)
> >    - Add links to priority spec
> 
> Where will the outstanding items be tracked - off the top of my head, error
> capture and an open source logging tool, for example? I thought it would be
> here but maybe not.

I thought the same - that we'd put these really important bits into the
rfc/todo section here. Matt, can you please do that?
-Daniel

> 
> Regards,
> 
> Tvrtko
> 
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Luben Tuikov <luben.tuikov@amd.com>
> > Cc: Alex Deucher <alexander.deucher@amd.com>
> > Cc: Steven Price <steven.price@arm.com>
> > Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> > Cc: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: Dave Airlie <airlied@gmail.com>
> > Cc: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Jason Ekstrand <jason@jlekstrand.net>
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   Documentation/gpu/rfc/i915_scheduler.rst | 85 ++++++++++++++++++++++++
> >   Documentation/gpu/rfc/index.rst          |  4 ++
> >   2 files changed, 89 insertions(+)
> >   create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
> > 
> > diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
> > new file mode 100644
> > index 000000000000..7faa46cde088
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/i915_scheduler.rst
> > @@ -0,0 +1,85 @@
> > +=========================================
> > +I915 GuC Submission/DRM Scheduler Section
> > +=========================================
> > +
> > +Upstream plan
> > +=============
> > +For upstream, the overall plan for landing GuC submission and integrating the
> > +i915 with the DRM scheduler is:
> > +
> > +* Merge basic GuC submission
> > +	* Basic submission support for all gen11+ platforms
> > +	* Not enabled by default on any current platforms but can be enabled via
> > +	  modparam enable_guc
> > +	* Lots of rework will be needed to integrate with the DRM scheduler, so
> > +	  there is no need to nitpick everything in the code; it just needs to
> > +	  be functional, have no major coding style / layering errors, and not
> > +	  regress execlists
> > +	* Update IGTs / selftests as needed to work with GuC submission
> > +	* Enable CI on supported platforms for a baseline
> > +	* Rework / get CI healthy for GuC submission in place as needed
> > +* Merge new parallel submission uAPI
> > +	* Bonding uAPI completely incompatible with GuC submission, plus it has
> > +	  severe design issues in general, which is why we want to retire it no
> > +	  matter what
> > +	* New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
> > +	  which configures a slot with N contexts
> > +	* After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
> > +	  a slot in a single execbuf IOCTL and the batches run on the GPU in
> > +	  parallel
> > +	* Initially only for GuC submission but execlists can be supported if
> > +	  needed
> > +* Convert the i915 to use the DRM scheduler
> > +	* GuC submission backend fully integrated with DRM scheduler
> > +		* All request queues removed from backend (i.e. all backpressure
> > +		  handled in DRM scheduler)
> > +		* Resets / cancels hook in DRM scheduler
> > +		* Watchdog hooks into DRM scheduler
> > +		* Lots of complexity of the GuC backend can be pulled out once
> > +		  integrated with DRM scheduler (e.g. state machine gets
> > +		  simpler, locking gets simpler, etc...)
> > +	* Execlist backend will do the minimum required to hook in the DRM
> > +	  scheduler so it can live next to the fully integrated GuC backend
> > +		* Legacy interface
> > +		* Features like timeslicing / preemption / virtual engines would
> > +		  be difficult to integrate with the DRM scheduler and these
> > +		  features are not required for GuC submission as the GuC does
> > +		  these things for us
> > +		* ROI low on fully integrating into DRM scheduler
> > +		* Fully integrating would add lots of complexity to DRM
> > +		  scheduler
> > +	* Port i915 priority inheritance / boosting feature into the DRM scheduler
> > +		* Used for i915 page flip, may be useful to other DRM drivers as
> > +		  well
> > +		* Will be an optional feature in the DRM scheduler
> > +	* Remove in-order completion assumptions from DRM scheduler
> > +		* Even when using the DRM scheduler the backends will handle
> > +		  preemption, timeslicing, etc... so it is possible for jobs to
> > +		  finish out of order
> > +	* Pull out i915 priority levels and use DRM priority levels
> > +	* Optimize DRM scheduler as needed
> > +
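
To make the "Convert the i915 to use the DRM scheduler" items above a bit
more concrete, here is a rough sketch of what hooking the GuC backend into
the DRM scheduler could look like. This is purely illustrative: the guc_*
helpers are made up for this example, and the exact drm_sched_backend_ops
signatures differ between kernel versions, so treat it as a sketch of the
shape rather than the actual i915 code.

  /* Illustrative sketch only -- not the actual i915 implementation. */
  #include <drm/gpu_scheduler.h>

  static struct dma_fence *guc_sched_run_job(struct drm_sched_job *job)
  {
          /* Hand the request to the GuC; the returned fence signals when
           * the hardware is done.  All backpressure is handled by the
           * scheduler, no driver-side request queues remain. */
          return guc_submit_request(to_guc_job(job)->rq); /* hypothetical */
  }

  static enum drm_gpu_sched_stat guc_sched_timedout_job(struct drm_sched_job *job)
  {
          /* Watchdog / reset hook: let the GuC reset the context behind
           * the hung job instead of a driver-private timeout state machine. */
          guc_reset_context(to_guc_job(job)->rq->context); /* hypothetical */
          return DRM_GPU_SCHED_STAT_NOMINAL;
  }

  static void guc_sched_free_job(struct drm_sched_job *job)
  {
          drm_sched_job_cleanup(job);
          /* drop the driver-side reference on the request here */
  }

  static const struct drm_sched_backend_ops guc_sched_ops = {
          .run_job      = guc_sched_run_job,
          .timedout_job = guc_sched_timedout_job,
          .free_job     = guc_sched_free_job,
  };

Once run_job / timedout_job / free_job are the only entry points, the
request queues, watchdog and reset state currently living in the backend
can collapse into the scheduler, which is where the "lots of complexity can
be pulled out" bullet comes from.
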
> > +New uAPI for basic GuC submission
> > +=================================
> > +No major changes are required to the uAPI for basic GuC submission. The only
> > +change is a new scheduler attribute: I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP.
> > +This attribute indicates that the 2k i915 user priority levels are statically
> > +mapped into 3 levels as follows:
> > +
> > +* -1k to -1 Low priority
> > +* 0 Medium priority
> > +* 1 to 1k High priority
> > +
> > +This is needed because the GuC only has 4 priority bands. The highest priority
> > +band is reserved for the kernel. This aligns with the DRM scheduler priority
> > +levels too.
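
For userspace the visible effect is small: query I915_PARAM_HAS_SCHEDULER
and, if the new cap bit is set, expect the fine-grained priority to collapse
into the three bands above. A minimal sketch - the has_static_priority_map()
and effective_band() helpers and the band names are made up for illustration;
the getparam query itself is existing uAPI, and the cap define is the one
proposed by this RFC:

  #include <stdbool.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  static bool has_static_priority_map(int fd)
  {
          int caps = 0;
          struct drm_i915_getparam gp = {
                  .param = I915_PARAM_HAS_SCHEDULER,
                  .value = &caps,
          };

          /* Returns a bitmask of I915_SCHEDULER_CAP_* flags. */
          if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
                  return false;

          return caps & I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP;
  }

  /* How the kernel buckets the -1023..1023 user priority range
   * (I915_CONTEXT_MIN/MAX_USER_PRIORITY) when the static map applies. */
  enum priority_band { BAND_LOW, BAND_MEDIUM, BAND_HIGH };

  static enum priority_band effective_band(int user_prio)
  {
          if (user_prio < 0)
                  return BAND_LOW;
          if (user_prio > 0)
                  return BAND_HIGH;
          return BAND_MEDIUM;
  }

With the fourth, highest GuC band reserved for the kernel, the three
remaining bands line up with the priority levels the DRM scheduler already
exposes.
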
> > +
> > +Spec references:
> > +----------------
> > +https://www.khronos.org/registry/EGL/extensions/IMG/EGL_IMG_context_priority.txt
> > +https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap5.html#devsandqueues-priority
> > +https://spec.oneapi.com/level-zero/latest/core/api.html#ze-command-queue-priority-t
> > +
> > +New parallel submission uAPI
> > +============================
> > +Details to come in a following patch.
> > diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
> > index 05670442ca1b..91e93a705230 100644
> > --- a/Documentation/gpu/rfc/index.rst
> > +++ b/Documentation/gpu/rfc/index.rst
> > @@ -19,3 +19,7 @@ host such documentation:
> >   .. toctree::
> >       i915_gem_lmem.rst
> > +
> > +.. toctree::
> > +
> > +    i915_scheduler.rst
> > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Thread overview: 24+ messages
2021-05-26 23:33 [RFC PATCH 0/2] GuC submission / DRM scheduler integration plan + new uAPI Matthew Brost
2021-05-26 23:33 ` [Intel-gfx] " Matthew Brost
2021-05-26 23:33 ` [RFC PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler Matthew Brost
2021-05-26 23:33   ` [Intel-gfx] " Matthew Brost
2021-05-27 10:06   ` Tvrtko Ursulin
2021-05-27 10:06     ` Tvrtko Ursulin
2021-05-27 11:24     ` Daniel Vetter [this message]
2021-05-27 11:24       ` Daniel Vetter
2021-06-04 17:39   ` Daniel Vetter
2021-06-04 17:39     ` Daniel Vetter
2021-06-04 19:48     ` [Mesa-dev] " Dave Airlie
2021-06-04 19:48       ` [Intel-gfx] [Mesa-dev] " Dave Airlie
2021-05-26 23:33 ` [RFC PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan Matthew Brost
2021-05-26 23:33   ` [Intel-gfx] " Matthew Brost
2021-05-27 15:01   ` Tvrtko Ursulin
2021-05-27 15:01     ` Tvrtko Ursulin
2021-06-04 17:59   ` Daniel Vetter
2021-06-04 17:59     ` Daniel Vetter
2021-06-11 19:50     ` Matthew Brost
2021-06-11 19:50       ` Matthew Brost
2021-06-17 16:46       ` Daniel Vetter
2021-06-17 16:46         ` Daniel Vetter
2021-06-17 17:27         ` Matthew Brost
2021-06-17 17:27           ` Matthew Brost
