From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [RFC 00/12] Per client engine busyness
Date: Mon, 9 Mar 2020 18:31:17 +0000 [thread overview]
Message-ID: <20200309183129.2296-1-tvrtko.ursulin@linux.intel.com> (raw)
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Another re-spin of the per-client engine busyness series. Highlights from this
version:
* A different way of tracking the runtime of exited/unreachable contexts. This
time round those runtimes are accumulated per context/client and engine class,
while still active contexts are kept on a list and tallied on sysfs reads (a
minimal sketch of this follows after the list).
* I had to do a small tweak in the engine release code since I need the GEM
context for a bit longer, so that I can accumulate the intel_context runtime
into it as it is being freed, because context complete can be late.
* The PPHWSP method is back and even comes first in the series this time. It
still cannot show the currently running workloads, but the software tracking
method suffers from the CSB processing delay with high-frequency and very
short batches.
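
To illustrate the tally-on-read idea from the first bullet, here is a minimal
user-space style sketch in C. It is not the code from this series; the
structure and function names (client, past_runtime_ns, client_busy_ns, ...)
are made up for illustration only:

#include <stdint.h>
#include <stdio.h>

#define NUM_ENGINE_CLASSES 4

struct ctx {                            /* stands in for a GPU context */
        uint64_t runtime_ns[NUM_ENGINE_CLASSES];
        struct ctx *next;
};

struct client {
        /* Runtime of already closed/unreachable contexts, per class. */
        uint64_t past_runtime_ns[NUM_ENGINE_CLASSES];
        struct ctx *ctx_list;           /* still active contexts */
};

/* When a context is closed its runtime is folded into the client. */
static void client_retire_ctx(struct client *c, const struct ctx *ctx)
{
        int i;

        for (i = 0; i < NUM_ENGINE_CLASSES; i++)
                c->past_runtime_ns[i] += ctx->runtime_ns[i];
}

/* On a sysfs read: past runtime plus a walk over the live contexts. */
static uint64_t client_busy_ns(const struct client *c, int engine_class)
{
        uint64_t total = c->past_runtime_ns[engine_class];
        const struct ctx *ctx;

        for (ctx = c->ctx_list; ctx; ctx = ctx->next)
                total += ctx->runtime_ns[engine_class];

        return total;
}

int main(void)
{
        struct ctx live = { .runtime_ns = { 1000000, 0, 0, 0 } };
        struct client c = { .past_runtime_ns = { 5000000, 0, 0, 0 },
                            .ctx_list = &live };

        /* 5ms from closed contexts plus 1ms from the live one. */
        printf("render class: %llu ns\n",
               (unsigned long long)client_busy_ns(&c, 0));

        /* Closing the live context folds it into the client totals. */
        client_retire_ctx(&c, &live);
        c.ctx_list = NULL;

        printf("render class: %llu ns\n",
               (unsigned long long)client_busy_ns(&c, 0));

        return 0;
}
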
Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the want list: smarter
scheduler decisions, getrusage(2)-like per-GEM-context functionality wanted by
some customers, a cgroups controller, dynamic SSEU tuning, ...
Externally, in sysfs, we expose time spent on GPU per client and per engine
class.
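
Conceptually, the software tracking part boils down to timestamping when a
context goes on and off the hardware and accumulating the difference. A rough
sketch of just that idea (again with made-up names, not the i915
implementation):

#include <stdbool.h>
#include <stdint.h>

struct ctx_stats {
        uint64_t start_ns;      /* when the context last went on hardware */
        uint64_t total_ns;      /* accumulated busy time */
        bool active;
};

/* Context scheduled onto an engine. */
static void ctx_sched_in(struct ctx_stats *s, uint64_t now_ns)
{
        s->start_ns = now_ns;
        s->active = true;
}

/* Context scheduled out again (e.g. when processing completion events). */
static void ctx_sched_out(struct ctx_stats *s, uint64_t now_ns)
{
        if (s->active) {
                s->total_ns += now_ns - s->start_ns;
                s->active = false;
        }
}

int main(void)
{
        struct ctx_stats s = { 0 };

        ctx_sched_in(&s, 1000);
        ctx_sched_out(&s, 6000);

        return s.total_ns == 5000 ? 0 : 1;      /* 5000ns of busy time */
}
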
The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Or, shown as a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;   0% RC6;  5.30 Watts;    933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                   MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                    |      0%      0%
         Video/0    0.00% |                                    |      0%      0%
  VideoEnhance/0    0.00% |                                    |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implementation-wise we add a bunch of files in sysfs like:
# cd /sys/class/drm/card0/clients/
# tree
.
├── 7
│   ├── busy
│   │   ├── 0
│   │   ├── 1
│   │   ├── 2
│   │   └── 3
│   ├── name
│   └── pid
├── 8
│   ├── busy
│   │   ├── 0
│   │   ├── 1
│   │   ├── 2
│   │   └── 3
│   ├── name
│   └── pid
└── 9
    ├── busy
    │   ├── 0
    │   ├── 1
    │   ├── 2
    │   └── 3
    ├── name
    └── pid
Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client has spent on engines of
the respective class.
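
As an example of how a top-like tool could consume this interface, the sketch
below reads one busy file twice, a second apart, and converts the delta of
accumulated nanoseconds into a utilisation percentage. The client id '7' is
taken from the example tree above and will differ on a real system; error
handling is kept minimal:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static uint64_t monotonic_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static uint64_t read_busy_ns(const char *path)
{
        unsigned long long val = 0;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%llu", &val) != 1)
                        val = 0;
                fclose(f);
        }

        return val;
}

int main(int argc, char **argv)
{
        /* Render class (0) of the client from the example tree above. */
        const char *path = argc > 1 ? argv[1] :
                           "/sys/class/drm/card0/clients/7/busy/0";
        uint64_t busy0, busy1, wall0, wall1;

        busy0 = read_busy_ns(path);
        wall0 = monotonic_ns();

        sleep(1);       /* sampling period */

        busy1 = read_busy_ns(path);
        wall1 = monotonic_ns();

        printf("%s: %.2f%% busy\n", path,
               100.0 * (double)(busy1 - busy0) / (double)(wall1 - wall0));

        return 0;
}
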
It is still an RFC since it lacks dedicated test cases to ensure things really
work as advertised.
Tvrtko Ursulin (12):
drm/i915: Expose list of clients in sysfs
drm/i915: Update client name on context create
drm/i915: Make GEM contexts track DRM clients
drm/i915: Use explicit flag to mark unreachable intel_context
drm/i915: Track runtime spent in unreachable intel_contexts
drm/i915: Track runtime spent in closed GEM contexts
drm/i915: Track all user contexts per client
drm/i915: Expose per-engine client busyness
drm/i915: Track per-context engine busyness
drm/i915: Carry over past software tracked context runtime
drm/i915: Prefer software tracked context busyness
compare runtimes
drivers/gpu/drm/i915/Makefile | 3 +-
drivers/gpu/drm/i915/gem/i915_gem_context.c | 83 +++-
.../gpu/drm/i915/gem/i915_gem_context_types.h | 26 +-
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 4 +-
drivers/gpu/drm/i915/gt/intel_context.c | 20 +
drivers/gpu/drm/i915/gt/intel_context.h | 13 +
drivers/gpu/drm/i915/gt/intel_context_types.h | 10 +
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 15 +-
drivers/gpu/drm/i915/gt/intel_lrc.c | 34 +-
drivers/gpu/drm/i915/i915_debugfs.c | 31 +-
drivers/gpu/drm/i915/i915_drm_client.c | 413 ++++++++++++++++++
drivers/gpu/drm/i915/i915_drm_client.h | 100 +++++
drivers/gpu/drm/i915/i915_drv.h | 5 +
drivers/gpu/drm/i915/i915_gem.c | 37 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 21 +-
drivers/gpu/drm/i915/i915_sysfs.c | 8 +
16 files changed, 767 insertions(+), 56 deletions(-)
create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
--
2.20.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview: 42+ messages
2020-03-09 18:31 Tvrtko Ursulin [this message]
2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2020-03-09 21:34 ` Chris Wilson
2020-03-09 23:26 ` Tvrtko Ursulin
2020-03-10 0:13 ` Chris Wilson
2020-03-10 8:44 ` Tvrtko Ursulin
2020-03-10 11:41 ` Chris Wilson
2020-03-10 12:04 ` Tvrtko Ursulin
2020-03-10 17:59 ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
2020-03-10 18:11 ` Chris Wilson
2020-03-10 19:52 ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2020-03-10 18:20 ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
2020-03-10 15:30 ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
2020-03-10 18:25 ` Chris Wilson
2020-03-10 20:00 ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
2020-03-10 18:28 ` Chris Wilson
2020-03-10 20:01 ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2020-03-10 18:32 ` Chris Wilson
2020-03-10 20:04 ` Tvrtko Ursulin
2020-03-10 20:12 ` Chris Wilson
2020-03-11 10:17 ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
2020-03-10 18:36 ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 12/12] compare runtimes Tvrtko Ursulin
2020-03-09 19:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5) Patchwork
2020-03-09 19:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
2020-03-09 23:30 ` Tvrtko Ursulin
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5) Patchwork
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
2020-03-11 18:26 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin