* [PATCH 0/1] Inherit GPU scheduling priority from process nice
@ 2022-04-07 15:16 ` Tvrtko Ursulin
  0 siblings, 0 replies; 18+ messages in thread
From: Tvrtko Ursulin @ 2022-04-07 15:16 UTC (permalink / raw)
  To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

The current processing landscape is increasingly composed of pipelines where
computations are spread across multiple hardware devices. Furthermore, some of
the non-CPU devices, such as many of the GPUs supported by the i915 driver,
support priority-based scheduling, which is currently rather inaccessible to
the user (in the sense of being controllable from the outside).

From these two observations a question arises: how do we allow for a simple,
effective and consolidated user experience? In other words, why shouldn't the
user be able to do something like:

 $ nice ffmpeg ...transcode my videos...
 $ my-favourite-game

And have the nice hint apply to the GPU parts of the transcode pipeline as well?

This would in fact follow the approach taken by the kernel's block I/O
scheduler, where the ionice level is by default derived from the process nice
value.
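
For reference, a minimal sketch of how the block layer derives a default I/O
priority from nice, modelled on the task_nice_ioprio() helper (cf.
include/linux/ioprio.h; the exact form varies between kernel versions):

  #include <linux/ioprio.h>
  #include <linux/sched.h>

  /*
   * Sketch only: when userspace has not set an explicit ioprio, the
   * effective best-effort level is derived from the task's nice value.
   */
  static int default_ioprio_from_nice(struct task_struct *task)
  {
          /* nice -20..19 folds onto the eight best-effort levels 0..7 */
          return IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE,
                                   (task_nice(task) + 20) / 5);
  }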

This series implements the same idea by inheriting the context creator's and
batch buffer submitter's nice value as the context nice. To avoid influencing
GPU scheduling aware clients, this is done only for contexts where userspace
hasn't explicitly specified a non-default scheduling priority.
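
As an illustration only (a hypothetical helper, not the actual patch), the
submit-time inheritance boils down to something like:

  #include <linux/sched.h>

  /*
   * Hypothetical sketch: fold the submitter's nice value into the
   * effective priority at request submission, but only if userspace
   * left the context at the default priority, so scheduling aware
   * clients keep full control.
   */
  static int effective_request_prio(int ctx_prio, bool prio_set_by_user)
  {
          if (prio_set_by_user)
                  return ctx_prio;

          /* Negative nice boosts the request, positive nice demotes it. */
          return ctx_prio - task_nice(current);
  }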

The approach is fully compatible with both GuC and drm/scheduler, since all
backends support at least low/normal/high priority levels; only the
granularity of the available control differs. In other words, with GuC
scheduling there is no difference between nice 5 and nice 10, as both map to
low priority, but the common case of positive or negative nice, versus nice 0,
is still correctly propagated to the firmware scheduler.
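
For such a coarse three-level backend the mapping could look like the sketch
below (the enum and helper are invented for illustration and are not the GuC
priority ABI):

  /* Invented for illustration; not the actual GuC priority ABI. */
  enum fw_prio { FW_PRIO_LOW, FW_PRIO_NORMAL, FW_PRIO_HIGH };

  static enum fw_prio nice_to_fw_prio(int nice)
  {
          if (nice > 0)           /* nice 5 and nice 10 land here alike */
                  return FW_PRIO_LOW;
          if (nice < 0)
                  return FW_PRIO_HIGH;

          return FW_PRIO_NORMAL;
  }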

With the series applied, I simulated a background GPU task running
simultaneously with an interactive client, varying the former's nice value.

The non-interactive GPU background task was simulated with:
  vblank_mode=0 nice -n <N> glxgears -geometry 1600x800

The interactive client was simulated with:
  gem_wsim -w ~/test.wsim -r 300 -v # (This one is self-capped at ~60fps.)

These were the results on DG1, first with execlists (default):

   Background nice  |   Interactive FPS
 -------------------+--------------------
      <not running> |         59
                  0 |         35
                 10 |         42

As we can see, running the background load at nice 10 somewhat helps the
performance of the interactive/foreground task. (Note that until the fair
scheduler work is complete, starvation issues are possible with some
workloads, and those cannot be fixed by this patch.)

Now the results with GuC (although it is not the default on DG1):

   Background nice  |   Interactive FPS
 -------------------+--------------------
      <not running> |         58
                  0 |         26
                 10 |         25

Unfortunately GuC shows no change (25 vs 26 is within the rounding/run error).
However, the reverse measurement, with the background client at nice 0 and the
foreground at nice -10, does give 40 FPS, proving that the priority adjustment
does work. (The same reverse test gives 46 FPS with execlists.) What is
happening with GuC here needs looking into, since normal-vs-low GuC priority
appears to time-slice differently than normal-vs-high: normal does not seem to
be preferred over low, in this test at least.

v2:
 * Moved notifier outside task_rq_lock.
 * Some improvements and restructuring on the i915 side of the series.

v3:
 * Dropped the task nice notifier - inheriting nice at request submit time is
   good enough.

v4:
 * Realised that this can be heavily simplified and that only one simple
   patch is enough to achieve the desired behaviour.
 * Fixed the priority adjustment location so it actually works after rebase!
 * Re-did the benchmarking.

Tvrtko Ursulin (1):
  drm/i915: Inherit submitter nice when scheduling requests

 drivers/gpu/drm/i915/i915_request.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

-- 
2.32.0


* [PATCH 0/1] Inherit GPU scheduling priority from process nice
@ 2022-04-07 15:28 Tvrtko Ursulin
  2022-04-07 15:28 ` [PATCH 1/1] drm/i915: Inherit submitter nice when scheduling requests Tvrtko Ursulin
  0 siblings, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2022-04-07 15:28 UTC (permalink / raw)
  To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

The current processing landscape is increasingly composed of pipelines where
computations are spread across multiple hardware devices. Furthermore, some of
the non-CPU devices, such as many of the GPUs supported by the i915 driver,
support priority-based scheduling, which is currently rather inaccessible to
the user (in the sense of being controllable from the outside).

From these two observations a question arises: how do we allow for a simple,
effective and consolidated user experience? In other words, why shouldn't the
user be able to do something like:

 $ nice ffmpeg ...transcode my videos...
 $ my-favourite-game

And have the nice hint apply to the GPU parts of the transcode pipeline as well?

This would in fact follow the approach taken by the kernel's block I/O
scheduler, where the ionice level is by default derived from the process nice
value.

This series implements the same idea by inheriting the context creator's and
batch buffer submitter's nice value as the context nice. To avoid influencing
GPU scheduling aware clients, this is done only for contexts where userspace
hasn't explicitly specified a non-default scheduling priority.

The approach is fully compatible with both GuC and drm/scheduler, since all
backends support at least low/normal/high priority levels; only the
granularity of the available control differs. In other words, with GuC
scheduling there is no difference between nice 5 and nice 10, as both map to
low priority, but the common case of positive or negative nice, versus nice 0,
is still correctly propagated to the firmware scheduler.

With the series applied, I simulated a background GPU task running
simultaneously with an interactive client, varying the former's nice value.

The non-interactive GPU background task was simulated with:
  vblank_mode=0 nice -n <N> glxgears -geometry 1600x800

The interactive client was simulated with:
  gem_wsim -w ~/test.wsim -r 300 -v # (This one is self-capped at ~60fps.)

These were the results on DG1, first with execlists (default):

   Background nice  |   Interactive FPS
 -------------------+--------------------
      <not running> |         59
                  0 |         35
                 10 |         42

As we can see, running the background load at nice 10 somewhat helps the
performance of the interactive/foreground task. (Note that until the fair
scheduler work is complete, starvation issues are possible with some
workloads, and those cannot be fixed by this patch.)

Now the results with GuC (although it is not the default on DG1):

   Background nice  |   Interactive FPS
 -------------------+--------------------
      <not running> |         58
                  0 |         26
                 10 |         25

Unfortunately GuC shows no change (25 vs 26 is within the rounding/run error).
However, the reverse measurement, with the background client at nice 0 and the
foreground at nice -10, does give 40 FPS, proving that the priority adjustment
does work. (The same reverse test gives 46 FPS with execlists.) What is
happening with GuC here needs looking into, since normal-vs-low GuC priority
appears to time-slice differently than normal-vs-high: normal does not seem to
be preferred over low, in this test at least.

v2:
 * Moved notifier outside task_rq_lock.
 * Some improvements and restructuring on the i915 side of the series.

v3:
 * Dropped the task nice notifier - inheriting nice at request submit time is
   good enough.

v4:
 * Realised that this can be heavily simplified and that only one simple
   patch is enough to achieve the desired behaviour.
 * Fixed the priority adjustment location so it actually works after rebase!
 * Re-did the benchmarking.

v5:
 * I sent out the wrong files yet again (v4), apologies for the spam.

Tvrtko Ursulin (1):
  drm/i915: Inherit submitter nice when scheduling requests

 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 8 ++++++++
 1 file changed, 8 insertions(+)

-- 
2.32.0




Thread overview: 18+ messages
2022-04-07 15:16 [PATCH 0/1] Inherit GPU scheduling priority from process nice Tvrtko Ursulin
2022-04-07 15:16 ` [Intel-gfx] " Tvrtko Ursulin
2022-04-07 15:16 ` [PATCH 1/1] drm/i915: Inherit submitter nice when scheduling requests Tvrtko Ursulin
2022-04-07 15:16   ` [Intel-gfx] " Tvrtko Ursulin
2022-04-08  7:58   ` Daniel Vetter
2022-04-08  7:58     ` [Intel-gfx] " Daniel Vetter
2022-04-08  8:25     ` Tvrtko Ursulin
2022-04-08  8:25       ` [Intel-gfx] " Tvrtko Ursulin
2022-04-08  9:50       ` Dave Airlie
2022-04-08  9:50         ` [Intel-gfx] " Dave Airlie
2022-04-08 10:29         ` Tvrtko Ursulin
2022-04-08 10:29           ` [Intel-gfx] " Tvrtko Ursulin
2022-04-08 15:10           ` Daniel Vetter
2022-04-08 15:10             ` [Intel-gfx] " Daniel Vetter
2022-04-25 11:54             ` Tvrtko Ursulin
2022-04-25 11:54               ` [Intel-gfx] " Tvrtko Ursulin
2022-04-07 18:05 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Inherit GPU scheduling priority from process nice (rev2) Patchwork
2022-04-07 15:28 [PATCH 0/1] Inherit GPU scheduling priority from process nice Tvrtko Ursulin
2022-04-07 15:28 ` [PATCH 1/1] drm/i915: Inherit submitter nice when scheduling requests Tvrtko Ursulin
