* [PATCH 0/7] Per client engine busyness
@ 2021-05-13 10:59 ` Tvrtko Ursulin
  0 siblings, 0 replies; 108+ messages in thread
From: Tvrtko Ursulin @ 2021-05-13 10:59 UTC (permalink / raw)
  To: Intel-gfx; +Cc: dri-devel

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Resurrection of the previously merged per client engine busyness patches. In a
nutshell this enables intel_gpu_top to be more top(1)-like and show not only
physical GPU engine usage but a per-process view as well.

Example screen capture:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Internally we track time spent on engines for each struct intel_context, for
both current and past contexts belonging to each open DRM file.

This can serve as a building block for several features from the wanted list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, setrlimit(2)-like controls, a cgroups controller,
dynamic SSEU tuning, ...

To enable userspace access to the tracked data, we expose the time spent on the
GPU per client and per engine class in sysfs, with a hierarchy like the one
below:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.
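Since each file under 'busy' is a monotonically increasing nanosecond counter,
a monitoring tool derives utilisation by sampling twice and dividing the
busy-time delta by the wall-clock delta. A minimal sketch in Python (the sysfs
layout follows the tree above; the helper names and the fixed range of four
engine classes are illustrative assumptions, not part of the proposed ABI):

```python
def read_busy_ns(client_dir):
    # Read the accumulated busy nanoseconds for each engine class of one
    # client. The files 0..3 are named after the engine class ABI values
    # (render, copy, video, video-enhance on the platform shown above).
    busy = {}
    for engine_class in range(4):
        with open(f"{client_dir}/busy/{engine_class}") as f:
            busy[engine_class] = int(f.read())
    return busy


def utilisation(before, after, wall_delta_ns):
    # Two samples of the monotonic counters plus the elapsed wall time
    # give a 0.0-1.0 utilisation figure per engine class.
    return {cls: (after[cls] - before[cls]) / wall_delta_ns
            for cls in before}
```

A tool like intel_gpu_top would then loop: record time.monotonic_ns(), call
read_busy_ns("/sys/class/drm/card0/clients/7"), sleep for the refresh
interval, sample again and display utilisation(s0, s1, t1 - t0).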

Tvrtko Ursulin (7):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in closed and unreachable GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Track context current active time
  drm/i915: Expose per-engine client busyness

 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  61 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  27 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |  15 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |   4 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  27 +-
 drivers/gpu/drm/i915/gt/intel_lrc.h           |  24 ++
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 365 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 123 ++++++
 drivers/gpu/drm/i915/i915_drv.c               |   6 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  21 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  31 +-
 drivers/gpu/drm/i915/i915_gpu_error.h         |   2 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 19 files changed, 716 insertions(+), 81 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.30.2


* [PATCH 0/8] Per client engine busyness
@ 2021-07-12 12:17 Tvrtko Ursulin
  2021-07-12 12:37 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
  0 siblings, 1 reply; 108+ messages in thread
From: Tvrtko Ursulin @ 2021-07-12 12:17 UTC (permalink / raw)
  To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Same old work, but now rebased, with the series ending in some DRM
documentation proposing a common specification, which should enable nice
common userspace tools to be written.

For the moment I only have intel_gpu_top converted to use this and that seems to
work okay.
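With the fdinfo approach each open DRM file describes itself in
/proc/<pid>/fdinfo/<fd> as simple 'key: value' lines, with per-class engine
time reported in keys such as 'drm-engine-render: <nanoseconds> ns' per the
format specification proposed in the last patch. A hedged parsing sketch (the
exact key set is driver dependent and the sample input in the test is
illustrative):

```python
def parse_drm_fdinfo(text):
    # Parse DRM key/value pairs from a /proc/<pid>/fdinfo/<fd> dump.
    # Returns (client_id, {engine_name: busy_ns}); engine time lines
    # look like 'drm-engine-render: 1234 ns' in the proposed format.
    client_id = None
    engines = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if key == "drm-client-id":
            client_id = int(value)
        elif key.startswith("drm-engine-"):
            engines[key[len("drm-engine-"):]] = int(value.split()[0])
    return client_id, engines
```

Utilisation then falls out the same way as with the sysfs interface: sample
the engine counters twice and divide the deltas by the elapsed wall time.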

Tvrtko Ursulin (8):
  drm/i915: Explicitly track DRM clients
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in closed and unreachable GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Track context current active time
  drm/i915: Expose client engine utilisation via fdinfo
  drm: Document fdinfo format specification

 Documentation/gpu/drm-usage-stats.rst         |  99 +++++++
 Documentation/gpu/i915.rst                    |  27 ++
 Documentation/gpu/index.rst                   |   1 +
 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  55 +++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  27 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |  15 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |   4 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  27 +-
 drivers/gpu/drm/i915/gt/intel_lrc.h           |  24 ++
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 243 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 107 ++++++++
 drivers/gpu/drm/i915/i915_drv.c               |   9 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  21 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  31 +--
 drivers/gpu/drm/i915/i915_gpu_error.h         |   2 +-
 21 files changed, 696 insertions(+), 79 deletions(-)
 create mode 100644 Documentation/gpu/drm-usage-stats.rst
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.30.2


* [RFC 0/7] Per client engine busyness
@ 2021-05-20 15:12 Tvrtko Ursulin
  2021-05-20 15:56 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
  0 siblings, 1 reply; 108+ messages in thread
From: Tvrtko Ursulin @ 2021-05-20 15:12 UTC (permalink / raw)
  To: Intel-gfx; +Cc: dri-devel, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Continuing the identically named thread. The first six patches are i915
specific, so please look mostly at the last one, which discusses the common
options for DRM drivers.

I haven't updated intel_gpu_top to use this yet, so I can't report any
performance numbers.

Tvrtko Ursulin (7):
  drm/i915: Explicitly track DRM clients
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in closed and unreachable GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Track context current active time
  drm/i915: Expose client engine utilisation via fdinfo

 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  61 ++++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  16 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  27 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |  15 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 .../drm/i915/gt/intel_execlists_submission.c  |  23 +-
 .../gpu/drm/i915/gt/intel_gt_clock_utils.c    |   4 +
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  27 +-
 drivers/gpu/drm/i915/gt/intel_lrc.h           |  24 ++
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 238 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 107 ++++++++
 drivers/gpu/drm/i915/i915_drv.c               |   9 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  21 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  31 +--
 drivers/gpu/drm/i915/i915_gpu_error.h         |   2 +-
 18 files changed, 568 insertions(+), 81 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.30.2


* [Intel-gfx] [PATCH 0/9] Per client engine busyness
@ 2020-09-14 13:12 Tvrtko Ursulin
  2020-09-14 17:48 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
  0 siblings, 1 reply; 108+ messages in thread
From: Tvrtko Ursulin @ 2020-09-14 13:12 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * Make sure i915_gem_context is fully initialized before adding to
   gem.contexts.list.
 * Fixed a couple of checkpatch warnings.

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the wanted list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Or, with a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise, we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.

Tvrtko Ursulin (9):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  56 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  11 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  57 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  29 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 434 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  93 ++++
 drivers/gpu/drm/i915/i915_drv.c               |   4 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  23 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  26 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 733 insertions(+), 79 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.25.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH 0/9] Per client engine busyness
@ 2020-09-04 12:59 Tvrtko Ursulin
  2020-09-04 14:06 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
  0 siblings, 1 reply; 108+ messages in thread
From: Tvrtko Ursulin @ 2020-09-04 12:59 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * Small fix for cyclic client id allocation.
 * Otherwise just a rebase to catch up with drm-tip.

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the wanted list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Or, with a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise, we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.

Tvrtko Ursulin (9):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  56 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  11 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  57 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  29 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 432 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  94 ++++
 drivers/gpu/drm/i915/i915_drv.c               |   4 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  23 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  26 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 732 insertions(+), 79 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.25.1


* [Intel-gfx] [PATCH 0/9] Per client engine busyness
@ 2020-04-15 10:11 Tvrtko Ursulin
  2020-04-15 11:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
  0 siblings, 1 reply; 108+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * One patch got merged in the meantime, otherwise just a rebase.

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the wanted list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Or, with a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise, we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.

It is still an RFC since it is missing dedicated test cases to ensure things
really work as advertised.

Tvrtko Ursulin (9):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  60 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  18 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  58 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  29 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 431 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  93 ++++
 drivers/gpu/drm/i915/i915_drv.c               |   3 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  25 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  25 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 740 insertions(+), 79 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1



end of thread, other threads:[~2021-07-12 12:37 UTC | newest]

Thread overview: 108+ messages
-- links below jump to the message on this page --
2021-05-13 10:59 [PATCH 0/7] Per client engine busyness Tvrtko Ursulin
2021-05-13 10:59 ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 1/7] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2021-05-13 10:59   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 2/7] drm/i915: Update client name on context create Tvrtko Ursulin
2021-05-13 10:59   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 3/7] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2021-05-13 10:59   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 4/7] drm/i915: Track runtime spent in closed and unreachable GEM contexts Tvrtko Ursulin
2021-05-13 10:59   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 5/7] drm/i915: Track all user contexts per client Tvrtko Ursulin
2021-05-13 11:00   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 6/7] drm/i915: Track context current active time Tvrtko Ursulin
2021-05-13 11:00   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 7/7] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2021-05-13 11:00   ` [Intel-gfx] " Tvrtko Ursulin
2021-05-13 11:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness Patchwork
2021-05-13 11:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-05-13 11:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-05-13 15:48 ` [PATCH 0/7] " Alex Deucher
2021-05-13 15:48   ` [Intel-gfx] " Alex Deucher
2021-05-13 16:40   ` Tvrtko Ursulin
2021-05-13 16:40     ` [Intel-gfx] " Tvrtko Ursulin
2021-05-14  5:58     ` Alex Deucher
2021-05-14  5:58       ` [Intel-gfx] " Alex Deucher
2021-05-14  7:22       ` Nieto, David M
2021-05-14  7:22         ` [Intel-gfx] " Nieto, David M
2021-05-14  8:04         ` Christian König
2021-05-14  8:04           ` [Intel-gfx] " Christian König
2021-05-14 13:42           ` Tvrtko Ursulin
2021-05-14 13:42             ` [Intel-gfx] " Tvrtko Ursulin
2021-05-14 13:53             ` Christian König
2021-05-14 13:53               ` [Intel-gfx] " Christian König
2021-05-14 14:47               ` Tvrtko Ursulin
2021-05-14 14:47                 ` [Intel-gfx] " Tvrtko Ursulin
2021-05-14 14:56                 ` Christian König
2021-05-14 14:56                   ` [Intel-gfx] " Christian König
2021-05-14 15:03                   ` Tvrtko Ursulin
2021-05-14 15:03                     ` [Intel-gfx] " Tvrtko Ursulin
2021-05-14 15:10                     ` Christian König
2021-05-14 15:10                       ` [Intel-gfx] " Christian König
2021-05-17 14:30                       ` Daniel Vetter
2021-05-17 14:30                         ` [Intel-gfx] " Daniel Vetter
2021-05-17 14:39                         ` Nieto, David M
2021-05-17 14:39                           ` [Intel-gfx] " Nieto, David M
2021-05-17 16:00                           ` Tvrtko Ursulin
2021-05-17 16:00                             ` [Intel-gfx] " Tvrtko Ursulin
2021-05-17 18:02                             ` Nieto, David M
2021-05-17 18:02                               ` [Intel-gfx] " Nieto, David M
2021-05-17 18:16                               ` [Nouveau] " Nieto, David M
2021-05-17 18:16                                 ` [Intel-gfx] " Nieto, David M
2021-05-17 18:16                                 ` Nieto, David M
2021-05-17 19:03                                 ` [Nouveau] " Simon Ser
2021-05-17 19:03                                   ` [Intel-gfx] " Simon Ser
2021-05-17 19:03                                   ` Simon Ser
2021-05-18  9:08                                   ` [Nouveau] " Tvrtko Ursulin
2021-05-18  9:08                                     ` [Intel-gfx] " Tvrtko Ursulin
2021-05-18  9:08                                     ` Tvrtko Ursulin
2021-05-18  9:16                                     ` [Nouveau] " Daniel Stone
2021-05-18  9:16                                       ` [Intel-gfx] " Daniel Stone
2021-05-18  9:16                                       ` Daniel Stone
2021-05-18  9:40                                       ` [Nouveau] " Tvrtko Ursulin
2021-05-18  9:40                                         ` [Intel-gfx] " Tvrtko Ursulin
2021-05-18  9:40                                         ` Tvrtko Ursulin
2021-05-19 16:16                                         ` [Nouveau] " Tvrtko Ursulin
2021-05-19 16:16                                           ` [Intel-gfx] " Tvrtko Ursulin
2021-05-19 16:16                                           ` Tvrtko Ursulin
2021-05-19 18:23                                           ` [Nouveau] [Intel-gfx] " Daniel Vetter
2021-05-19 18:23                                             ` Daniel Vetter
2021-05-19 18:23                                             ` Daniel Vetter
2021-05-19 23:17                                             ` [Nouveau] " Nieto, David M
2021-05-19 23:17                                               ` Nieto, David M
2021-05-19 23:17                                               ` Nieto, David M
2021-05-20 14:11                                               ` [Nouveau] " Daniel Vetter
2021-05-20 14:11                                                 ` Daniel Vetter
2021-05-20 14:11                                                 ` Daniel Vetter
2021-05-20 14:12                                                 ` [Nouveau] " Christian König
2021-05-20 14:12                                                   ` Christian König
2021-05-20 14:12                                                   ` Christian König
2021-05-20 14:17                                                   ` [Nouveau] " arabek
2021-05-20 14:17                                                     ` [Intel-gfx] [Nouveau] " arabek
2021-05-20 14:17                                                     ` [Nouveau] [Intel-gfx] " arabek
2021-05-20  8:35                                             ` Tvrtko Ursulin
2021-05-20  8:35                                               ` Tvrtko Ursulin
2021-05-20  8:35                                               ` Tvrtko Ursulin
2021-05-24 10:48                                               ` [Nouveau] " Tvrtko Ursulin
2021-05-24 10:48                                                 ` Tvrtko Ursulin
2021-05-24 10:48                                                 ` Tvrtko Ursulin
2021-05-18  9:35                               ` Tvrtko Ursulin
2021-05-18  9:35                                 ` [Intel-gfx] " Tvrtko Ursulin
2021-05-18 12:06                                 ` Christian König
2021-05-18 12:06                                   ` [Intel-gfx] " Christian König
2021-05-17 19:16                         ` Christian König
2021-05-17 19:16                           ` [Intel-gfx] " Christian König
2021-06-28 10:16                       ` Tvrtko Ursulin
2021-06-28 10:16                         ` [Intel-gfx] " Tvrtko Ursulin
2021-06-28 14:37                         ` Daniel Vetter
2021-06-28 14:37                           ` [Intel-gfx] " Daniel Vetter
2021-05-15 10:40                     ` Maxime Schmitt
2021-05-17 16:13                       ` Tvrtko Ursulin
2021-05-17 14:20   ` Daniel Vetter
2021-05-17 14:20     ` [Intel-gfx] " Daniel Vetter
2021-05-13 16:38 ` [Intel-gfx] ✗ Fi.CI.IGT: failure for " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2021-07-12 12:17 [PATCH 0/8] " Tvrtko Ursulin
2021-07-12 12:37 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
2021-05-20 15:12 [RFC 0/7] " Tvrtko Ursulin
2021-05-20 15:56 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
2020-09-14 13:12 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-14 17:48 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
2020-09-04 12:59 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-04 14:06 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-04-15 11:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
