From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: [PATCH v4 0/5] fdinfo memory stats
Date: Mon, 12 Jun 2023 11:46:53 +0100
Message-ID: <20230612104658.1386996-1-tvrtko.ursulin@linux.intel.com> (raw)

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

I added tracking of most classes of objects which contribute to a client's
memory footprint, and accounting along similar lines as in Rob's msm code.
The stats are then printed to fdinfo using the DRM helper Rob added.

Accounting by keeping per-client lists may not be the most efficient
method; perhaps we should simply add and subtract stats directly at
convenient sites. That too is not straightforward, though, due to the lack
of an existing connection between buffer objects and clients, and possibly
some other tricky bits in the buffer sharing department. So let's see if
this works for now. The infrequent-reader penalty should not be too bad
(it may even be useful to dump the lists in debugfs?), and the additional
list_head per object pretty much drowns in the noise.

Example fdinfo with the series applied:

 # cat /proc/1383/fdinfo/8
 pos:    0
 flags:  02100002
 mnt_id: 21
 ino:    397
 drm-driver:     i915
 drm-client-id:  18
 drm-pdev:       0000:00:02.0
 drm-total-system:       125 MiB
 drm-shared-system:      16 MiB
 drm-active-system:      110 MiB
 drm-resident-system:    125 MiB
 drm-purgeable-system:   2 MiB
 drm-total-stolen-system:        0
 drm-shared-stolen-system:       0
 drm-active-stolen-system:       0
 drm-resident-stolen-system:     0
 drm-purgeable-stolen-system:    0
 drm-engine-render:      25662044495 ns
 drm-engine-copy:        0 ns
 drm-engine-video:       0 ns
 drm-engine-video-enhance:       0 ns

Example gputop output (local patches currently):

 DRM minor 0
  PID      SMEM  SMEMRSS    render      copy       video    NAME
 1233      124M     124M |████████| |        | |        |  neverball
 1130       59M      59M |█▌      | |        | |        |  Xorg
 1207       12M      12M |        | |        | |        |  xfwm4

v2:
 * Now actually per client.

v3:
 * Track imported dma-buf objects.

v4:
 * Rely on DRM GEM handles for tracking user objects.
 * Fix internal object accounting (no placements).

Tvrtko Ursulin (5):
  drm/i915: Add ability for tracking buffer objects per client
  drm/i915: Record which client owns a VM
  drm/i915: Track page table backing store usage
  drm/i915: Account ring buffer and context state storage
  drm/i915: Implement fdinfo memory stats printing

 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  11 +-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |   3 +
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   5 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  12 ++
 .../gpu/drm/i915/gem/selftests/mock_context.c |   4 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |   8 ++
 drivers/gpu/drm/i915/gt/intel_gtt.c           |   6 +
 drivers/gpu/drm/i915/gt/intel_gtt.h           |   1 +
 drivers/gpu/drm/i915/i915_drm_client.c        | 124 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h        |  42 +++++-
 drivers/gpu/drm/i915/i915_gem.c               |   2 +-
 11 files changed, 210 insertions(+), 8 deletions(-)

--
2.39.2