From: Alex Deucher <alexdeucher@gmail.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
Maling list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 0/7] Per client engine busyness
Date: Thu, 13 May 2021 11:48:08 -0400 [thread overview]
Message-ID: <CADnq5_NEg4s2AWBTkjW7NXoBe+WB=qQUHCMPP6DcpGSLbBF-rg@mail.gmail.com> (raw)
In-Reply-To: <20210513110002.3641705-1-tvrtko.ursulin@linux.intel.com>
On Thu, May 13, 2021 at 7:00 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> Resurrection of the previously merged per client engine busyness patches. In a
> nutshell, it enables intel_gpu_top to be more top(1)-like: showing not only
> physical GPU engine usage but a per-process view as well.
>
> Example screen capture:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> intel-gpu-top - 906/ 955 MHz; 0% RC6; 5.30 Watts; 933 irqs/s
>
> IMC reads: 4414 MiB/s
> IMC writes: 3805 MiB/s
>
> ENGINE BUSY MI_SEMA MI_WAIT
> Render/3D/0 93.46% |████████████████████████████████▋ | 0% 0%
> Blitter/0 0.00% | | 0% 0%
> Video/0 0.00% | | 0% 0%
> VideoEnhance/0 0.00% | | 0% 0%
>
> PID NAME Render/3D Blitter Video VideoEnhance
> 2733 neverball |██████▌ || || || |
> 2047 Xorg |███▊ || || || |
> 2737 glxgears |█▍ || || || |
> 2128 xfwm4 | || || || |
> 2047 Xorg | || || || |
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Internally we track time spent on engines for each struct intel_context, both
> for current and past contexts belonging to each open DRM file.
>
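As a toy model of the accounting described here (names are illustrative only, not the actual i915 structures or functions), a context accumulates busy time while it is active on an engine and folds it into a running total:

```python
class Context:
    """Toy model of per-context GPU time accounting (illustrative only)."""

    def __init__(self):
        self.total_ns = 0        # runtime accumulated over past activations
        self.active_since = None  # timestamp of last switch-in, if running

    def switch_in(self, now_ns):
        # Context starts executing on an engine.
        self.active_since = now_ns

    def switch_out(self, now_ns):
        # Context stops executing; fold the elapsed time into the total.
        self.total_ns += now_ns - self.active_since
        self.active_since = None

    def busy_ns(self, now_ns):
        # Report accumulated time, including a currently running activation.
        busy = self.total_ns
        if self.active_since is not None:
            busy += now_ns - self.active_since
        return busy
```

Summing these per-context totals, including those of already closed contexts, over each open DRM file gives the per-client figures.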
> This can serve as a building block for several features from the wanted list:
> smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
> wanted by some customers, setrlimit(2)-like controls, a cgroups controller,
> dynamic SSEU tuning, ...
>
> To enable userspace access to the tracked data, we expose the time spent on the
> GPU per client and per engine class in sysfs, with a hierarchy like the one below:
>
> # cd /sys/class/drm/card0/clients/
> # tree
> .
> ├── 7
> │ ├── busy
> │ │ ├── 0
> │ │ ├── 1
> │ │ ├── 2
> │ │ └── 3
> │ ├── name
> │ └── pid
> ├── 8
> │ ├── busy
> │ │ ├── 0
> │ │ ├── 1
> │ │ ├── 2
> │ │ └── 3
> │ ├── name
> │ └── pid
> └── 9
> ├── busy
> │ ├── 0
> │ ├── 1
> │ ├── 2
> │ └── 3
> ├── name
> └── pid
>
> Files in the 'busy' directories are named after the engine class ABI values and
> contain the accumulated nanoseconds each client has spent on engines of the
> respective class.
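A consumer like intel_gpu_top would sample these monotonic counters twice and convert the delta into a utilisation percentage. A minimal sketch, assuming the hierarchy from the cover letter above (the reading helper is untested against real hardware; the percentage conversion is pure arithmetic):

```python
import os

# Root of the proposed sysfs hierarchy (per the cover letter).
CLIENTS = "/sys/class/drm/card0/clients"

def read_busy(client):
    """Read accumulated nanoseconds per engine class for one client."""
    busy_dir = os.path.join(CLIENTS, str(client), "busy")
    samples = {}
    for name in os.listdir(busy_dir):
        with open(os.path.join(busy_dir, name)) as f:
            samples[name] = int(f.read())
    return samples

def busy_percent(before, after, period_ns):
    """Convert two counter samples taken period_ns apart into per-class
    utilisation percentages."""
    return {cls: 100.0 * (after[cls] - before[cls]) / period_ns
            for cls in before}
```

For example, a client whose class-0 counter advances by 500 ms over a 1 s sampling period is 50% busy on render.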
We did something similar in amdgpu using the gpu scheduler. We then
expose the data via fdinfo. See
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=1774baa64f9395fa884ea9ed494bcb043f3b83f5
https://cgit.freedesktop.org/drm/drm-misc/commit/?id=874442541133f78c78b6880b8cc495bab5c61704
Alex
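The fdinfo approach referenced above exposes similar counters as `key: value` pairs readable via /proc/<pid>/fdinfo/<fd>. A hedged sketch of a parser; the key names below are illustrative, and the exact keys depend on the driver and kernel version:

```python
def parse_drm_fdinfo(text):
    """Parse key/value pairs from a DRM fdinfo blob. Engine-time values
    carry an 'ns' suffix and are returned as integers."""
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue  # skip lines without a key/value separator
        value = value.strip()
        if value.endswith(" ns"):
            info[key] = int(value[:-3])  # accumulated engine time
        else:
            info[key] = value
    return info

# Example fdinfo contents (hypothetical key names for illustration):
sample = "drm-driver:\tamdgpu\ndrm-engine-gfx:\t123456789 ns\n"
```

As with the sysfs counters, a monitor would sample twice and divide the delta by the sampling period to get a busy percentage.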
>
> Tvrtko Ursulin (7):
> drm/i915: Expose list of clients in sysfs
> drm/i915: Update client name on context create
> drm/i915: Make GEM contexts track DRM clients
> drm/i915: Track runtime spent in closed and unreachable GEM contexts
> drm/i915: Track all user contexts per client
> drm/i915: Track context current active time
> drm/i915: Expose per-engine client busyness
>
> drivers/gpu/drm/i915/Makefile | 5 +-
> drivers/gpu/drm/i915/gem/i915_gem_context.c | 61 ++-
> .../gpu/drm/i915/gem/i915_gem_context_types.h | 16 +-
> drivers/gpu/drm/i915/gt/intel_context.c | 27 +-
> drivers/gpu/drm/i915/gt/intel_context.h | 15 +-
> drivers/gpu/drm/i915/gt/intel_context_types.h | 24 +-
> .../drm/i915/gt/intel_execlists_submission.c | 23 +-
> .../gpu/drm/i915/gt/intel_gt_clock_utils.c | 4 +
> drivers/gpu/drm/i915/gt/intel_lrc.c | 27 +-
> drivers/gpu/drm/i915/gt/intel_lrc.h | 24 ++
> drivers/gpu/drm/i915/gt/selftest_lrc.c | 10 +-
> drivers/gpu/drm/i915/i915_drm_client.c | 365 ++++++++++++++++++
> drivers/gpu/drm/i915/i915_drm_client.h | 123 ++++++
> drivers/gpu/drm/i915/i915_drv.c | 6 +
> drivers/gpu/drm/i915/i915_drv.h | 5 +
> drivers/gpu/drm/i915/i915_gem.c | 21 +-
> drivers/gpu/drm/i915/i915_gpu_error.c | 31 +-
> drivers/gpu/drm/i915/i915_gpu_error.h | 2 +-
> drivers/gpu/drm/i915/i915_sysfs.c | 8 +
> 19 files changed, 716 insertions(+), 81 deletions(-)
> create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
> create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
>
> --
> 2.30.2
>
Thread overview: 42+ messages
2021-05-13 10:59 [PATCH 0/7] Per client engine busyness Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 1/7] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 2/7] drm/i915: Update client name on context create Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 3/7] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 4/7] drm/i915: Track runtime spent in closed and unreachable GEM contexts Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 5/7] drm/i915: Track all user contexts per client Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 6/7] drm/i915: Track context current active time Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 7/7] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2021-05-13 15:48 ` Alex Deucher [this message]
2021-05-13 16:40 ` [PATCH 0/7] Per client engine busyness Tvrtko Ursulin
2021-05-14 5:58 ` Alex Deucher
2021-05-14 7:22 ` Nieto, David M
2021-05-14 8:04 ` Christian König
2021-05-14 13:42 ` Tvrtko Ursulin
2021-05-14 13:53 ` Christian König
2021-05-14 14:47 ` Tvrtko Ursulin
2021-05-14 14:56 ` Christian König
2021-05-14 15:03 ` Tvrtko Ursulin
2021-05-14 15:10 ` Christian König
2021-05-17 14:30 ` Daniel Vetter
2021-05-17 14:39 ` Nieto, David M
2021-05-17 16:00 ` Tvrtko Ursulin
2021-05-17 18:02 ` Nieto, David M
2021-05-17 18:16 ` Nieto, David M
2021-05-17 19:03 ` Simon Ser
2021-05-18 9:08 ` Tvrtko Ursulin
2021-05-18 9:16 ` Daniel Stone
2021-05-18 9:40 ` Tvrtko Ursulin
2021-05-19 16:16 ` Tvrtko Ursulin
2021-05-19 18:23 ` [Intel-gfx] " Daniel Vetter
2021-05-19 23:17 ` Nieto, David M
2021-05-20 14:11 ` Daniel Vetter
2021-05-20 14:12 ` Christian König
2021-05-20 14:17 ` [Nouveau] " arabek
2021-05-20 8:35 ` Tvrtko Ursulin
2021-05-24 10:48 ` Tvrtko Ursulin
2021-05-18 9:35 ` Tvrtko Ursulin
2021-05-18 12:06 ` Christian König
2021-05-17 19:16 ` Christian König
2021-06-28 10:16 ` Tvrtko Ursulin
2021-06-28 14:37 ` Daniel Vetter
2021-05-17 14:20 ` Daniel Vetter