From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [RFC 00/10] Per client engine busyness
Date: Wed, 11 Mar 2020 18:26:08 +0000
Message-ID: <20200311182618.21513-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * The last two patches implement a hybrid method of tracking context
   runtime. The PPHWSP-tracked value is used as a baseline, on top of which
   i915 tracks the time at which a context last started executing on the
   GPU. Together these should give better overall resilience against spammy
   workloads and also provide visibility into long/infinite batches. (A
   sketch of the scheme follows below.)
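
A minimal sketch of the hybrid scheme, for illustration only; the struct,
field and helper names below are made up for this example and are not the
actual ones from the series:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <linux/types.h>

/* Hypothetical names for illustration only. */
struct ctx_runtime {
	u64 pphwsp_total;	/* runtime the GPU accumulated in the PPHWSP */
	u64 start;		/* when the context last went onto the GPU */
	bool active;		/* currently executing on an engine? */
};

static u64 ctx_runtime_get(const struct ctx_runtime *rt, u64 now)
{
	u64 total = rt->pphwsp_total;

	/*
	 * A context which is executing right now has not yet folded the
	 * current run into its PPHWSP value, so add the elapsed time of
	 * the run in progress. This is what provides visibility into
	 * long/infinite batches.
	 */
	if (rt->active)
		total += now - rt->start;

	return total;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~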

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the wish list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks,
as in this "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise, we add a number of files in sysfs, like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are named after the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.
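
For illustration, a minimal userspace sketch of consuming the interface could
look like the below. It is not part of the series; the client and engine
class numbers match the tree above, and error handling is kept to a minimum:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

/* Read one accumulated-nanoseconds value from a sysfs busy file. */
static uint64_t read_busy_ns(const char *path)
{
	uint64_t val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		fscanf(f, "%" SCNu64, &val);
		fclose(f);
	}

	return val;
}

int main(void)
{
	/* Client 7, engine class 0 (render), matching the tree above. */
	const char *path = "/sys/class/drm/card0/clients/7/busy/0";
	uint64_t t0, t1;

	t0 = read_busy_ns(path);
	sleep(1);
	t1 = read_busy_ns(path);

	/* Delta of accumulated nanoseconds over a one second window. */
	printf("Render/3D busy: %.2f%%\n", (double)(t1 - t0) / 1e9 * 100.0);

	return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since the values only ever accumulate, any sampling interval works; a
top-like tool would sample the same way at its refresh interval.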

It is still an RFC since it is missing dedicated test cases to ensure things
really work as advertised.

Tvrtko Ursulin (10):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Use explicit flag to mark unreachable intel_context
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  66 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   2 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  18 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  25 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  55 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  31 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 434 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  94 ++++
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  35 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  25 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 756 insertions(+), 82 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
