* [PATCH v5 0/5] fdinfo memory stats
@ 2023-07-07 13:02 ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

I have added tracking of most classes of objects which contribute to a client's
memory footprint, with accounting along similar lines to Rob's msm code. The
totals are then printed to fdinfo using the drm helper Rob added.

Accounting by keeping per-client lists may not be the most efficient method;
perhaps we should simply add and subtract stats directly at convenient sites.
But that too is not straightforward, since there is no existing connection
between buffer objects and clients, plus possibly some other tricky bits in the
buffer sharing department. So let's see if this works for now. The penalty for
the infrequent reader should not be too bad (it may even be useful to dump the
lists in debugfs?) and the additional list_head per object pretty much drowns
in the noise.

Example fdinfo with the series applied:

# cat /proc/1383/fdinfo/8
pos:    0
flags:  02100002
mnt_id: 21
ino:    397
drm-driver:     i915
drm-client-id:  18
drm-pdev:       0000:00:02.0
drm-total-system:       125 MiB
drm-shared-system:      16 MiB
drm-active-system:      110 MiB
drm-resident-system:    125 MiB
drm-purgeable-system:   2 MiB
drm-total-stolen-system:        0
drm-shared-stolen-system:       0
drm-active-stolen-system:       0
drm-resident-stolen-system:     0
drm-purgeable-stolen-system:    0
drm-engine-render:      25662044495 ns
drm-engine-copy:        0 ns
drm-engine-video:       0 ns
drm-engine-video-enhance:       0 ns
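
For reference, a minimal userspace sketch which picks the memory related keys
out of such an fdinfo file (the path below is just an example and the error
handling is skeletal; this is not part of the series):

/* Dump the drm-* memory keys from a DRM fd's fdinfo. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/proc/1383/fdinfo/8";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f)
		return 1;

	while (fgets(line, sizeof(line), f)) {
		/* Memory stats are keyed drm-<category>-<region>: <value>. */
		if (!strncmp(line, "drm-total-", 10) ||
		    !strncmp(line, "drm-shared-", 11) ||
		    !strncmp(line, "drm-active-", 11) ||
		    !strncmp(line, "drm-resident-", 13) ||
		    !strncmp(line, "drm-purgeable-", 14))
			fputs(line, stdout);
	}

	fclose(f);

	return 0;
}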

Example gputop output (local patches currently):

DRM minor 0
 PID     SMEM  SMEMRSS   render     copy     video    NAME
1233     124M     124M |████████||        ||        ||        | neverball
1130      59M      59M |█▌      ||        ||        ||        | Xorg
1207      12M      12M |        ||        ||        ||        | xfwm4
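
As an aside, the per engine bars in a tool like the above can be derived by
sampling the drm-engine-<class> counters twice and dividing the delta by the
elapsed wall time. A rough sketch of that calculation (not the actual gputop
code; the fdinfo parsing helper is a hypothetical assumption):

#include <stdint.h>

/* Hypothetical helper which would parse "drm-engine-render: <ns> ns". */
uint64_t read_engine_busy_ns(const char *fdinfo_path, const char *engine);

/* Busyness over a sampling period: accumulated GPU ns vs elapsed wall ns. */
static inline double engine_busy_pct(uint64_t busy_t0_ns, uint64_t busy_t1_ns,
				     uint64_t wall_delta_ns)
{
	if (!wall_delta_ns)
		return 0.0;

	return 100.0 * (double)(busy_t1_ns - busy_t0_ns) /
	       (double)wall_delta_ns;
}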

v2:
 * Now actually per client.

v3:
 * Track imported dma-buf objects.

v4:
 * Rely on DRM GEM handles for tracking user objects.
 * Fix internal object accounting (no placements).

v5:
 * Fixed brain fart of overwriting the loop cursor.
 * Fixed object destruction racing with fdinfo reads.
 * Take reference to GEM context while using it.

Tvrtko Ursulin (5):
  drm/i915: Add ability for tracking buffer objects per client
  drm/i915: Record which client owns a VM
  drm/i915: Track page table backing store usage
  drm/i915: Account ring buffer and context state storage
  drm/i915: Implement fdinfo memory stats printing

 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  11 +-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |   3 +
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  13 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  12 ++
 .../gpu/drm/i915/gem/selftests/mock_context.c |   4 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  13 ++
 drivers/gpu/drm/i915/gt/intel_gtt.c           |   6 +
 drivers/gpu/drm/i915/gt/intel_gtt.h           |   1 +
 drivers/gpu/drm/i915/i915_drm_client.c        | 131 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  40 ++++++
 10 files changed, 226 insertions(+), 8 deletions(-)

-- 
2.39.2


* [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-07 13:02   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In order to show per-client memory usage let's add some infrastructure
which enables tracking buffer objects owned by clients.

We add a per-client list, protected by a new per-client lock, and to support
delayed destruction (post client exit) we make tracked objects hold
references to the owning client.

Also, object memory region teardown is moved to the existing RCU free
callback to allow safe dereference from the fdinfo RCU read section.
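
For illustration, the reader pattern this enables looks roughly like the
below (a simplified sketch, not code from this patch; accumulate_size() is a
hypothetical helper):

	rcu_read_lock();
	list_for_each_entry_rcu(obj, &client->objects_list, client_link) {
		/*
		 * Objects are unlinked with list_del_rcu() and only freed
		 * after a grace period, so the list walk itself is safe.
		 * Take a reference to skip objects already on their way out
		 * and to keep the object stable while we look at it.
		 */
		if (!kref_get_unless_zero(&obj->base.refcount))
			continue;
		accumulate_size(obj); /* hypothetical */
		i915_gem_object_put(obj);
	}
	rcu_read_unlock();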

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
 drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
 4 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 97ac6fb37958..3dc4fbb67d2b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -105,6 +105,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 
 	INIT_LIST_HEAD(&obj->mm.link);
 
+#ifdef CONFIG_PROC_FS
+	INIT_LIST_HEAD(&obj->client_link);
+#endif
+
 	INIT_LIST_HEAD(&obj->lut_list);
 	spin_lock_init(&obj->lut_lock);
 
@@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
+	/* We need to keep this alive for RCU read access from fdinfo. */
+	if (obj->mm.n_placements > 1)
+		kfree(obj->mm.placements);
+
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
@@ -388,9 +396,6 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
 	if (obj->ops->release)
 		obj->ops->release(obj);
 
-	if (obj->mm.n_placements > 1)
-		kfree(obj->mm.placements);
-
 	if (obj->shares_resv_from)
 		i915_vm_resv_put(obj->shares_resv_from);
 
@@ -441,6 +446,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
 
 	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
 
+	i915_drm_client_remove_object(obj);
+
 	/*
 	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index e72c57716bee..8de2b91b3edf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -300,6 +300,18 @@ struct drm_i915_gem_object {
 	 */
 	struct i915_address_space *shares_resv_from;
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @client: @i915_drm_client which created the object
+	 */
+	struct i915_drm_client *client;
+
+	/**
+	 * @client_link: Link into @i915_drm_client.objects_list
+	 */
+	struct list_head client_link;
+#endif
+
 	union {
 		struct rcu_head rcu;
 		struct llist_node freed;
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2a44b3876cb5..2e5e69edc0f9 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
 	kref_init(&client->kref);
 	spin_lock_init(&client->ctx_lock);
 	INIT_LIST_HEAD(&client->ctx_list);
+#ifdef CONFIG_PROC_FS
+	spin_lock_init(&client->objects_lock);
+	INIT_LIST_HEAD(&client->objects_list);
+#endif
 
 	return client;
 }
@@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
 	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
 		show_client_class(p, i915, file_priv->client, i);
 }
+
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj)
+{
+	unsigned long flags;
+
+	GEM_WARN_ON(obj->client);
+	GEM_WARN_ON(!list_empty(&obj->client_link));
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	obj->client = i915_drm_client_get(client);
+	list_add_tail_rcu(&obj->client_link, &client->objects_list);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+}
+
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	struct i915_drm_client *client = fetch_and_zero(&obj->client);
+	unsigned long flags;
+
+	/* Object may not be associated with a client. */
+	if (!client)
+		return false;
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	list_del_rcu(&obj->client_link);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+
+	i915_drm_client_put(client);
+
+	return true;
+}
 #endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 67816c912bca..5f58fdf7dcb8 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -12,6 +12,9 @@
 
 #include <uapi/drm/i915_drm.h>
 
+#include "i915_file_private.h"
+#include "gem/i915_gem_object_types.h"
+
 #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
 
 struct drm_file;
@@ -25,6 +28,20 @@ struct i915_drm_client {
 	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
 	struct list_head ctx_list; /* List of contexts belonging to client. */
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @objects_lock: lock protecting @objects_list
+	 */
+	spinlock_t objects_lock;
+
+	/**
+	 * @objects_list: list of objects created by this client
+	 *
+	 * Protected by @objects_lock.
+	 */
+	struct list_head objects_list;
+#endif
+
 	/**
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
@@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
 
 void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
 
+#ifdef CONFIG_PROC_FS
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj);
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+#else
+static inline void i915_drm_client_add_object(struct i915_drm_client *client,
+					      struct drm_i915_gem_object *obj)
+{
+}
+
+static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	return false;
+}
+#endif
+
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.39.2


* [PATCH 2/5] drm/i915: Record which client owns a VM
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-07 13:02   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

To enable accounting of indirect client memory usage (such as page tables)
in the following patch, let's start recording the creator of each PPGTT.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 11 ++++++++---
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  3 +++
 drivers/gpu/drm/i915/gem/selftests/mock_context.c |  4 ++--
 drivers/gpu/drm/i915/gt/intel_gtt.h               |  1 +
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 9a9ff84c90d7..35cf6608180e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -279,7 +279,8 @@ static int proto_context_set_protected(struct drm_i915_private *i915,
 }
 
 static struct i915_gem_proto_context *
-proto_context_create(struct drm_i915_private *i915, unsigned int flags)
+proto_context_create(struct drm_i915_file_private *fpriv,
+		     struct drm_i915_private *i915, unsigned int flags)
 {
 	struct i915_gem_proto_context *pc, *err;
 
@@ -287,6 +288,7 @@ proto_context_create(struct drm_i915_private *i915, unsigned int flags)
 	if (!pc)
 		return ERR_PTR(-ENOMEM);
 
+	pc->fpriv = fpriv;
 	pc->num_user_engines = -1;
 	pc->user_engines = NULL;
 	pc->user_flags = BIT(UCONTEXT_BANNABLE) |
@@ -1621,6 +1623,7 @@ i915_gem_create_context(struct drm_i915_private *i915,
 			err = PTR_ERR(ppgtt);
 			goto err_ctx;
 		}
+		ppgtt->vm.fpriv = pc->fpriv;
 		vm = &ppgtt->vm;
 	}
 	if (vm)
@@ -1740,7 +1743,7 @@ int i915_gem_context_open(struct drm_i915_private *i915,
 	/* 0 reserved for invalid/unassigned ppgtt */
 	xa_init_flags(&file_priv->vm_xa, XA_FLAGS_ALLOC1);
 
-	pc = proto_context_create(i915, 0);
+	pc = proto_context_create(file_priv, i915, 0);
 	if (IS_ERR(pc)) {
 		err = PTR_ERR(pc);
 		goto err;
@@ -1822,6 +1825,7 @@ int i915_gem_vm_create_ioctl(struct drm_device *dev, void *data,
 
 	GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
 	args->vm_id = id;
+	ppgtt->vm.fpriv = file_priv;
 	return 0;
 
 err_put:
@@ -2284,7 +2288,8 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 		return -EIO;
 	}
 
-	ext_data.pc = proto_context_create(i915, args->flags);
+	ext_data.pc = proto_context_create(file->driver_priv, i915,
+					   args->flags);
 	if (IS_ERR(ext_data.pc))
 		return PTR_ERR(ext_data.pc);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index cb78214a7dcd..c573c067779f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -188,6 +188,9 @@ struct i915_gem_proto_engine {
  * CONTEXT_CREATE_SET_PARAM during GEM_CONTEXT_CREATE.
  */
 struct i915_gem_proto_context {
+	/** @fpriv: Client which creates the context */
+	struct drm_i915_file_private *fpriv;
+
 	/** @vm: See &i915_gem_context.vm */
 	struct i915_address_space *vm;
 
diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
index 8ac6726ec16b..125584ada282 100644
--- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
@@ -83,7 +83,7 @@ live_context(struct drm_i915_private *i915, struct file *file)
 	int err;
 	u32 id;
 
-	pc = proto_context_create(i915, 0);
+	pc = proto_context_create(fpriv, i915, 0);
 	if (IS_ERR(pc))
 		return ERR_CAST(pc);
 
@@ -152,7 +152,7 @@ kernel_context(struct drm_i915_private *i915,
 	struct i915_gem_context *ctx;
 	struct i915_gem_proto_context *pc;
 
-	pc = proto_context_create(i915, 0);
+	pc = proto_context_create(NULL, i915, 0);
 	if (IS_ERR(pc))
 		return ERR_CAST(pc);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 4d6296cdbcfd..7192a534a654 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -248,6 +248,7 @@ struct i915_address_space {
 	struct drm_mm mm;
 	struct intel_gt *gt;
 	struct drm_i915_private *i915;
+	struct drm_i915_file_private *fpriv;
 	struct device *dma;
 	u64 total;		/* size addr space maps (ex. 2GB for ggtt) */
 	u64 reserved;		/* size addr space reserved */
-- 
2.39.2


* [PATCH 3/5] drm/i915: Track page table backing store usage
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-07 13:02   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Account page table backing store against the owning client memory usage
stats.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_gtt.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 2f6a9be0ffe6..126269a0d728 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -58,6 +58,9 @@ struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz)
 	if (!IS_ERR(obj)) {
 		obj->base.resv = i915_vm_resv_get(vm);
 		obj->shares_resv_from = vm;
+
+		if (vm->fpriv)
+			i915_drm_client_add_object(vm->fpriv->client, obj);
 	}
 
 	return obj;
@@ -79,6 +82,9 @@ struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz)
 	if (!IS_ERR(obj)) {
 		obj->base.resv = i915_vm_resv_get(vm);
 		obj->shares_resv_from = vm;
+
+		if (vm->fpriv)
+			i915_drm_client_add_object(vm->fpriv->client, obj);
 	}
 
 	return obj;
-- 
2.39.2


* [PATCH 4/5] drm/i915: Account ring buffer and context state storage
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-07 13:02   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Account ring buffers and logical context space against the owning client
memory usage stats.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.c  | 10 ++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h  |  8 ++++++++
 3 files changed, 31 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index a53b26178f0a..8a395b9201e9 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -6,6 +6,7 @@
 #include "gem/i915_gem_context.h"
 #include "gem/i915_gem_pm.h"
 
+#include "i915_drm_client.h"
 #include "i915_drv.h"
 #include "i915_trace.h"
 
@@ -50,6 +51,7 @@ intel_context_create(struct intel_engine_cs *engine)
 
 int intel_context_alloc_state(struct intel_context *ce)
 {
+	struct i915_gem_context *ctx;
 	int err = 0;
 
 	if (mutex_lock_interruptible(&ce->pin_mutex))
@@ -66,6 +68,17 @@ int intel_context_alloc_state(struct intel_context *ce)
 			goto unlock;
 
 		set_bit(CONTEXT_ALLOC_BIT, &ce->flags);
+
+		rcu_read_lock();
+		ctx = rcu_dereference(ce->gem_context);
+		if (ctx && !kref_get_unless_zero(&ctx->ref))
+			ctx = NULL;
+		rcu_read_unlock();
+		if (ctx) {
+			if (ctx->client)
+				i915_drm_client_add_context(ctx->client, ce);
+			i915_gem_context_put(ctx);
+		}
 	}
 
 unlock:
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2e5e69edc0f9..ffccb6239789 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -144,4 +144,14 @@ bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
 
 	return true;
 }
+
+void i915_drm_client_add_context(struct i915_drm_client *client,
+				 struct intel_context *ce)
+{
+	if (ce->state)
+		i915_drm_client_add_object(client, ce->state->obj);
+
+	if (ce->ring != ce->engine->legacy.ring && ce->ring->vma)
+		i915_drm_client_add_object(client, ce->ring->vma->obj);
+}
 #endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 5f58fdf7dcb8..39616b10a51f 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -14,6 +14,7 @@
 
 #include "i915_file_private.h"
 #include "gem/i915_gem_object_types.h"
+#include "gt/intel_context_types.h"
 
 #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
 
@@ -70,6 +71,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
 void i915_drm_client_add_object(struct i915_drm_client *client,
 				struct drm_i915_gem_object *obj);
 bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+void i915_drm_client_add_context(struct i915_drm_client *client,
+				 struct intel_context *ce);
 #else
 static inline void i915_drm_client_add_object(struct i915_drm_client *client,
 					      struct drm_i915_gem_object *obj)
@@ -79,6 +82,11 @@ static inline void i915_drm_client_add_object(struct i915_drm_client *client,
 static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
 {
 }
+
+static inline void i915_drm_client_add_context(struct i915_drm_client *client,
+					       struct intel_context *ce)
+{
+}
 #endif
 
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.39.2


* [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-07 13:02   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-07 13:02 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Aravind Iddamsetty, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Use the newly added drm_print_memory_stats helper to show memory
utilisation of our objects in drm/driver specific fdinfo output.

To collect the stats we walk the client's object lists and accumulate each
object's size into the respective drm_memory_stats categories for its
memory regions.

Objects with multiple possible placements are reported in multiple
regions for total and shared sizes, while other categories are
counted only for the currently active region.
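
For example (illustrative numbers): a 16 MiB object which lists both system
and device local memory as possible placements, has its handle shared, and is
currently resident and busy in local memory, adds 16 MiB to the total and
shared counters of both regions, but to the resident and active counters of
the local region only.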

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index ffccb6239789..5c77d6987d90 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)
 }
 
 #ifdef CONFIG_PROC_FS
+static void
+obj_meminfo(struct drm_i915_gem_object *obj,
+	    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN])
+{
+	struct intel_memory_region *mr;
+	u64 sz = obj->base.size;
+	enum intel_region_id id;
+	unsigned int i;
+
+	/* Attribute size and shared to all possible memory regions. */
+	for (i = 0; i < obj->mm.n_placements; i++) {
+		mr = obj->mm.placements[i];
+		id = mr->id;
+
+		if (obj->base.handle_count > 1)
+			stats[id].shared += sz;
+		else
+			stats[id].private += sz;
+	}
+
+	/* Attribute other categories to only the current region. */
+	mr = obj->mm.region;
+	if (mr)
+		id = mr->id;
+	else
+		id = INTEL_REGION_SMEM;
+
+	if (!obj->mm.n_placements) {
+		if (obj->base.handle_count > 1)
+			stats[id].shared += sz;
+		else
+			stats[id].private += sz;
+	}
+
+	if (i915_gem_object_has_pages(obj)) {
+		stats[id].resident += sz;
+
+		if (!dma_resv_test_signaled(obj->base.resv,
+					    dma_resv_usage_rw(true)))
+			stats[id].active += sz;
+		else if (i915_gem_object_is_shrinkable(obj) &&
+			 obj->mm.madv == I915_MADV_DONTNEED)
+			stats[id].purgeable += sz;
+	}
+}
+
+static void show_meminfo(struct drm_printer *p, struct drm_file *file)
+{
+	struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
+	struct drm_i915_file_private *fpriv = file->driver_priv;
+	struct i915_drm_client *client = fpriv->client;
+	struct drm_i915_private *i915 = fpriv->i915;
+	struct drm_i915_gem_object *obj;
+	struct intel_memory_region *mr;
+	struct list_head *pos;
+	unsigned int id;
+
+	/* Public objects. */
+	spin_lock(&file->table_lock);
+	idr_for_each_entry (&file->object_idr, obj, id)
+		obj_meminfo(obj, stats);
+	spin_unlock(&file->table_lock);
+
+	/* Internal objects. */
+	rcu_read_lock();
+	list_for_each_rcu(pos, &client->objects_list) {
+		obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
+							 client_link));
+		if (!obj)
+			continue;
+		obj_meminfo(obj, stats);
+		i915_gem_object_put(obj);
+	}
+	rcu_read_unlock();
+
+	for_each_memory_region(mr, i915, id)
+		drm_print_memory_stats(p,
+				       &stats[id],
+				       DRM_GEM_OBJECT_RESIDENT |
+				       DRM_GEM_OBJECT_PURGEABLE,
+				       mr->name);
+}
+
 static const char * const uabi_class_names[] = {
 	[I915_ENGINE_CLASS_RENDER] = "render",
 	[I915_ENGINE_CLASS_COPY] = "copy",
@@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
 	 * ******************************************************************
 	 */
 
+	show_meminfo(p, file);
+
 	if (GRAPHICS_VER(i915) < 8)
 		return;
 
-- 
2.39.2


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for fdinfo memory stats (rev4)
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  (?)
@ 2023-07-07 16:00 ` Patchwork
  -1 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2023-07-07 16:00 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: fdinfo memory stats (rev4)
URL   : https://patchwork.freedesktop.org/series/119082/
State : warning

== Summary ==

Error: dim checkpatch failed
74f7de01301b drm/i915: Add ability for tracking buffer objects per client
6edf495b8d56 drm/i915: Record which client owns a VM
8b5016e3c255 drm/i915: Track page table backing store usage
287b4b723d0e drm/i915: Account ring buffer and context state storage
edb9d2dc6dcb drm/i915: Implement fdinfo memory stats printing
-:88: WARNING:SPACING: space prohibited between function name and open parenthesis '('
#88: FILE: drivers/gpu/drm/i915/i915_drm_client.c:107:
+	idr_for_each_entry (&file->object_idr, obj, id)

total: 0 errors, 1 warnings, 0 checks, 97 lines checked




* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for fdinfo memory stats (rev4)
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  (?)
@ 2023-07-07 16:00 ` Patchwork
  -1 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2023-07-07 16:00 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: fdinfo memory stats (rev4)
URL   : https://patchwork.freedesktop.org/series/119082/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.




* [Intel-gfx] ✓ Fi.CI.BAT: success for fdinfo memory stats (rev4)
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (7 preceding siblings ...)
  (?)
@ 2023-07-07 16:10 ` Patchwork
  -1 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2023-07-07 16:10 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: fdinfo memory stats (rev4)
URL   : https://patchwork.freedesktop.org/series/119082/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_13355 -> Patchwork_119082v4
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/index.html

Participating hosts (39 -> 40)
------------------------------

  Additional (2): fi-skl-guc bat-mtlp-8 
  Missing    (1): fi-snb-2520m 

Known issues
------------

  Here are the changes found in Patchwork_119082v4 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_auth@basic-auth:
    - bat-adlp-11:        NOTRUN -> [ABORT][1] ([i915#8011])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-adlp-11/igt@core_auth@basic-auth.html

  * igt@debugfs_test@basic-hwmon:
    - bat-mtlp-8:         NOTRUN -> [SKIP][2] ([i915#7456])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@debugfs_test@basic-hwmon.html

  * igt@gem_exec_suspend@basic-s0@smem:
    - bat-jsl-3:          [PASS][3] -> [ABORT][4] ([i915#5122])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-jsl-3/igt@gem_exec_suspend@basic-s0@smem.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-jsl-3/igt@gem_exec_suspend@basic-s0@smem.html

  * igt@gem_lmem_swapping@basic:
    - fi-skl-guc:         NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#4613]) +3 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/fi-skl-guc/igt@gem_lmem_swapping@basic.html

  * igt@gem_mmap@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][6] ([i915#4083])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@gem_mmap@basic.html

  * igt@gem_mmap_gtt@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][7] ([i915#4077]) +3 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@gem_mmap_gtt@basic.html

  * igt@gem_render_tiled_blits@basic:
    - bat-mtlp-8:         NOTRUN -> [SKIP][8] ([i915#4079]) +1 similar issue
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@gem_render_tiled_blits@basic.html

  * igt@i915_pm_rpm@basic-pci-d3-state:
    - bat-mtlp-8:         NOTRUN -> [ABORT][9] ([i915#7077] / [i915#7977] / [i915#8668])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@i915_pm_rpm@basic-pci-d3-state.html

  * igt@i915_selftest@live@gt_heartbeat:
    - fi-apl-guc:         [PASS][10] -> [DMESG-FAIL][11] ([i915#5334])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/fi-apl-guc/igt@i915_selftest@live@gt_heartbeat.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/fi-apl-guc/igt@i915_selftest@live@gt_heartbeat.html

  * igt@i915_selftest@live@mman:
    - bat-rpls-2:         [PASS][12] -> [TIMEOUT][13] ([i915#6794] / [i915#7392])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-rpls-2/igt@i915_selftest@live@mman.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-rpls-2/igt@i915_selftest@live@mman.html

  * igt@i915_selftest@live@slpc:
    - bat-rpls-1:         NOTRUN -> [DMESG-WARN][14] ([i915#6367])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-rpls-1/igt@i915_selftest@live@slpc.html

  * igt@i915_suspend@basic-s2idle-without-i915:
    - bat-rpls-2:         NOTRUN -> [ABORT][15] ([i915#6687] / [i915#8668])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-rpls-2/igt@i915_suspend@basic-s2idle-without-i915.html

  * igt@i915_suspend@basic-s3-without-i915:
    - bat-jsl-3:          [PASS][16] -> [FAIL][17] ([fdo#103375])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-jsl-3/igt@i915_suspend@basic-s3-without-i915.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-jsl-3/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][18] ([i915#5190])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_addfb_basic@basic-y-tiled-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][19] ([i915#4212]) +8 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_addfb_basic@basic-y-tiled-legacy.html

  * igt@kms_chamelium_frames@hdmi-crc-fast:
    - bat-mtlp-8:         NOTRUN -> [SKIP][20] ([i915#7828]) +7 similar issues
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_chamelium_frames@hdmi-crc-fast.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
    - bat-mtlp-8:         NOTRUN -> [SKIP][21] ([i915#4213]) +1 similar issue
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html

  * igt@kms_force_connector_basic@force-load-detect:
    - bat-mtlp-8:         NOTRUN -> [SKIP][22] ([fdo#109285])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - bat-mtlp-8:         NOTRUN -> [SKIP][23] ([i915#5274])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12@pipe-a-hdmi-a-2:
    - fi-skl-guc:         NOTRUN -> [SKIP][24] ([fdo#109271]) +20 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/fi-skl-guc/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12@pipe-a-hdmi-a-2.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - bat-mtlp-8:         NOTRUN -> [SKIP][25] ([i915#8809])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-mtlp-8/igt@kms_setmode@basic-clone-single-crtc.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@mman:
    - bat-rpls-1:         [TIMEOUT][26] ([i915#6794] / [i915#7392]) -> [PASS][27]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-rpls-1/igt@i915_selftest@live@mman.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-rpls-1/igt@i915_selftest@live@mman.html

  * igt@i915_suspend@basic-s2idle-without-i915:
    - bat-rpls-1:         [WARN][28] ([i915#8747]) -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-rpls-1/igt@i915_suspend@basic-s2idle-without-i915.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-rpls-1/igt@i915_suspend@basic-s2idle-without-i915.html

  
#### Warnings ####

  * igt@i915_module_load@load:
    - bat-adlp-11:        [ABORT][30] ([i915#4423]) -> [DMESG-WARN][31] ([i915#4423])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/bat-adlp-11/igt@i915_module_load@load.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/bat-adlp-11/igt@i915_module_load@load.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4423]: https://gitlab.freedesktop.org/drm/intel/issues/4423
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#5122]: https://gitlab.freedesktop.org/drm/intel/issues/5122
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5334]: https://gitlab.freedesktop.org/drm/intel/issues/5334
  [i915#6367]: https://gitlab.freedesktop.org/drm/intel/issues/6367
  [i915#6687]: https://gitlab.freedesktop.org/drm/intel/issues/6687
  [i915#6794]: https://gitlab.freedesktop.org/drm/intel/issues/6794
  [i915#7077]: https://gitlab.freedesktop.org/drm/intel/issues/7077
  [i915#7392]: https://gitlab.freedesktop.org/drm/intel/issues/7392
  [i915#7456]: https://gitlab.freedesktop.org/drm/intel/issues/7456
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7977]: https://gitlab.freedesktop.org/drm/intel/issues/7977
  [i915#8011]: https://gitlab.freedesktop.org/drm/intel/issues/8011
  [i915#8668]: https://gitlab.freedesktop.org/drm/intel/issues/8668
  [i915#8747]: https://gitlab.freedesktop.org/drm/intel/issues/8747
  [i915#8809]: https://gitlab.freedesktop.org/drm/intel/issues/8809


Build changes
-------------

  * Linux: CI_DRM_13355 -> Patchwork_119082v4

  CI-20190529: 20190529
  CI_DRM_13355: 8f40aae3b99ac28dd81d00933f5dc9124dbfc881 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7377: d1574543ba4bb322597345530053475c07be0eb9 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_119082v4: 8f40aae3b99ac28dd81d00933f5dc9124dbfc881 @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

8cf15e9320cd drm/i915: Implement fdinfo memory stats printing
e9268c7d65f0 drm/i915: Account ring buffer and context state storage
ff0e29b89b7e drm/i915: Track page table backing store usage
d4591f4001e8 drm/i915: Record which client owns a VM
f5e67eedfc78 drm/i915: Add ability for tracking buffer objects per client

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/index.html

[-- Attachment #2: Type: text/html, Size: 11581 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-gfx] ✗ Fi.CI.IGT: failure for fdinfo memory stats (rev4)
  2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (8 preceding siblings ...)
  (?)
@ 2023-07-07 20:57 ` Patchwork
  -1 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2023-07-07 20:57 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 30248 bytes --]

== Series Details ==

Series: fdinfo memory stats (rev4)
URL   : https://patchwork.freedesktop.org/series/119082/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_13355_full -> Patchwork_119082v4_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced with Patchwork_119082v4_full need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_119082v4_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (9 -> 10)
------------------------------

  Additional (1): shard-rkl0 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_119082v4_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_rotation_crc@primary-x-tiled-reflect-x-180:
    - shard-rkl:          [PASS][1] -> [ABORT][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-2/igt@kms_rotation_crc@primary-x-tiled-reflect-x-180.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-4/igt@kms_rotation_crc@primary-x-tiled-reflect-x-180.html

  
New tests
---------

  New tests have been introduced between CI_DRM_13355_full and Patchwork_119082v4_full:

### New IGT tests (7) ###

  * igt@kms_flip@2x-absolute-wf_vblank@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-dpms-vs-vblank-race-interruptible@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-flip-vs-dpms@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-flip-vs-rmfb@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-modeset-vs-vblank-race-interruptible@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  * igt@kms_flip@2x-plain-flip-ts-check@ab-vga1-hdmi-a1:
    - Statuses : 1 pass(s)
    - Exec time: [0.0] s

  

Known issues
------------

  Here are the changes found in Patchwork_119082v4_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@drm_fdinfo@virtual-busy-all:
    - shard-mtlp:         NOTRUN -> [SKIP][3] ([i915#8414])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@drm_fdinfo@virtual-busy-all.html

  * igt@drm_fdinfo@virtual-idle:
    - shard-rkl:          [PASS][4] -> [FAIL][5] ([i915#7742]) +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-7/igt@drm_fdinfo@virtual-idle.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-4/igt@drm_fdinfo@virtual-idle.html

  * igt@gem_exec_fair@basic-flow@rcs0:
    - shard-tglu:         [PASS][6] -> [FAIL][7] ([i915#2842]) +1 similar issue
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-tglu-5/igt@gem_exec_fair@basic-flow@rcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-2/igt@gem_exec_fair@basic-flow@rcs0.html

  * igt@gem_exec_fair@basic-none@bcs0:
    - shard-rkl:          [PASS][8] -> [FAIL][9] ([i915#2842]) +1 similar issue
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-4/igt@gem_exec_fair@basic-none@bcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-6/igt@gem_exec_fair@basic-none@bcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][10] -> [FAIL][11] ([i915#2842])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-glk2/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-glk7/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_reloc@basic-range:
    - shard-mtlp:         NOTRUN -> [SKIP][12] ([i915#3281])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@gem_exec_reloc@basic-range.html

  * igt@gem_mmap_gtt@fault-concurrent:
    - shard-mtlp:         NOTRUN -> [SKIP][13] ([i915#4077]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@gem_mmap_gtt@fault-concurrent.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-apl:          [PASS][14] -> [ABORT][15] ([i915#5566])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-apl7/igt@gen9_exec_parse@allowed-single.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-apl4/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-mtlp:         [PASS][16] -> [ABORT][17] ([i915#8489] / [i915#8668])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-1/igt@i915_module_load@reload-with-fault-injection.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_dc@dc9-dpms:
    - shard-tglu:         [PASS][18] -> [SKIP][19] ([i915#4281])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-tglu-2/igt@i915_pm_dc@dc9-dpms.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-7/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rpm@dpms-non-lpsp:
    - shard-rkl:          [PASS][20] -> [SKIP][21] ([i915#1397]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-6/igt@i915_pm_rpm@dpms-non-lpsp.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-7/igt@i915_pm_rpm@dpms-non-lpsp.html

  * igt@i915_pm_rpm@modeset-non-lpsp-stress:
    - shard-dg2:          [PASS][22] -> [SKIP][23] ([i915#1397]) +1 similar issue
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-5/igt@i915_pm_rpm@modeset-non-lpsp-stress.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-10/igt@i915_pm_rpm@modeset-non-lpsp-stress.html

  * igt@i915_selftest@live@gt_mocs:
    - shard-mtlp:         [PASS][24] -> [DMESG-FAIL][25] ([i915#7059])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-3/igt@i915_selftest@live@gt_mocs.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@i915_selftest@live@gt_mocs.html

  * igt@i915_selftest@perf@request:
    - shard-mtlp:         [PASS][26] -> [ABORT][27] ([i915#8704])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-5/igt@i915_selftest@perf@request.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-8/igt@i915_selftest@perf@request.html

  * igt@i915_suspend@basic-s3-without-i915:
    - shard-rkl:          [PASS][28] -> [FAIL][29] ([fdo#103375])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-4/igt@i915_suspend@basic-s3-without-i915.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-6/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-2-y-rc_ccs:
    - shard-rkl:          NOTRUN -> [SKIP][30] ([i915#8502]) +3 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-2/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-hdmi-a-2-y-rc_ccs.html

  * igt@kms_async_flips@crc@pipe-a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [FAIL][31] ([i915#8247]) +3 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-1/igt@kms_async_flips@crc@pipe-a-hdmi-a-3.html

  * igt@kms_atomic@plane-primary-overlay-mutable-zpos:
    - shard-tglu:         NOTRUN -> [SKIP][32] ([i915#404])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-4/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-180:
    - shard-mtlp:         [PASS][33] -> [FAIL][34] ([i915#5138])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-8/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-6/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-mtlp:         [PASS][35] -> [FAIL][36] ([i915#3743])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-8/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-6/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-0:
    - shard-mtlp:         NOTRUN -> [SKIP][37] ([fdo#111615])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_big_fb@y-tiled-64bpp-rotate-0.html

  * igt@kms_ccs@pipe-a-crc-sprite-planes-basic-yf_tiled_ccs:
    - shard-mtlp:         NOTRUN -> [SKIP][38] ([i915#6095]) +1 similar issue
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-yf_tiled_ccs.html

  * igt@kms_ccs@pipe-c-missing-ccs-buffer-yf_tiled_ccs:
    - shard-tglu:         NOTRUN -> [SKIP][39] ([fdo#111615] / [i915#3689] / [i915#5354] / [i915#6095])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-4/igt@kms_ccs@pipe-c-missing-ccs-buffer-yf_tiled_ccs.html

  * igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][40] ([i915#4087] / [i915#7213]) +3 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-8/igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3.html

  * igt@kms_chamelium_edid@vga-edid-read:
    - shard-tglu:         NOTRUN -> [SKIP][41] ([i915#7828])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-4/igt@kms_chamelium_edid@vga-edid-read.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-dg2:          NOTRUN -> [SKIP][42] ([i915#7118]) +1 similar issue
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-7/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@uevent@pipe-a-dp-2:
    - shard-dg2:          NOTRUN -> [FAIL][43] ([i915#1339])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-12/igt@kms_content_protection@uevent@pipe-a-dp-2.html

  * igt@kms_cursor_crc@cursor-onscreen-32x32:
    - shard-mtlp:         NOTRUN -> [SKIP][44] ([i915#8814])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_cursor_crc@cursor-onscreen-32x32.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-apl:          [PASS][45] -> [FAIL][46] ([i915#2346])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-apl1/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-apl7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
    - shard-glk:          [PASS][47] -> [FAIL][48] ([i915#2346])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-glk8/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-glk3/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc:
    - shard-dg2:          NOTRUN -> [SKIP][49] ([i915#3555]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-3/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html

  * igt@kms_flip@flip-vs-suspend@a-hdmi-a3:
    - shard-dg2:          [PASS][50] -> [FAIL][51] ([fdo#103375] / [i915#6121]) +5 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-6/igt@kms_flip@flip-vs-suspend@a-hdmi-a3.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-5/igt@kms_flip@flip-vs-suspend@a-hdmi-a3.html

  * igt@kms_flip@plain-flip-ts-check-interruptible@b-hdmi-a1:
    - shard-glk:          [PASS][52] -> [FAIL][53] ([i915#2122])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-glk2/igt@kms_flip@plain-flip-ts-check-interruptible@b-hdmi-a1.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-glk7/igt@kms_flip@plain-flip-ts-check-interruptible@b-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][54] ([i915#2672])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-default-mode.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render:
    - shard-mtlp:         NOTRUN -> [SKIP][55] ([i915#1825]) +2 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render.html

  * igt@kms_plane_scaling@intel-max-src-size@pipe-a-dp-2:
    - shard-dg2:          NOTRUN -> [FAIL][56] ([i915#8292])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-12/igt@kms_plane_scaling@intel-max-src-size@pipe-a-dp-2.html

  * igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [FAIL][57] ([i915#8292])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-4/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-2.html

  * igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][58] ([i915#5176]) +1 similar issue
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-2/igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-c-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][59] ([i915#5176]) +3 similar issues
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-9/igt@kms_plane_scaling@plane-downscale-with-rotation-factor-0-5@pipe-c-dp-4.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-a-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [SKIP][60] ([i915#5235]) +5 similar issues
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-7/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-a-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-d-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][61] ([i915#5235]) +23 similar issues
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-8/igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-d-hdmi-a-3.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-d-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][62] ([i915#5235]) +3 similar issues
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-d-edp-1.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-a-hdmi-a-1:
    - shard-snb:          NOTRUN -> [SKIP][63] ([fdo#109271]) +26 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-snb1/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-a-hdmi-a-1.html

  * igt@v3d/v3d_submit_csd@bad-perfmon:
    - shard-mtlp:         NOTRUN -> [SKIP][64] ([i915#2575])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@v3d/v3d_submit_csd@bad-perfmon.html

  
#### Possible fixes ####

  * igt@gem_barrier_race@remote-request@rcs0:
    - shard-tglu:         [ABORT][65] ([i915#8211] / [i915#8234]) -> [PASS][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-tglu-4/igt@gem_barrier_race@remote-request@rcs0.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-4/igt@gem_barrier_race@remote-request@rcs0.html

  * igt@gem_eio@kms:
    - shard-dg2:          [INCOMPLETE][67] ([i915#1982] / [i915#7892]) -> [PASS][68]
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-3/igt@gem_eio@kms.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-11/igt@gem_eio@kms.html

  * igt@gem_eio@unwedge-stress:
    - {shard-dg1}:        [FAIL][69] ([i915#5784]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg1-12/igt@gem_eio@unwedge-stress.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg1-14/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_balancer@full-pulse:
    - shard-dg2:          [FAIL][71] ([i915#6032]) -> [PASS][72]
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-12/igt@gem_exec_balancer@full-pulse.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-12/igt@gem_exec_balancer@full-pulse.html

  * igt@gem_exec_fair@basic-none-solo@rcs0:
    - shard-apl:          [FAIL][73] ([i915#2842]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-apl1/igt@gem_exec_fair@basic-none-solo@rcs0.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-apl7/igt@gem_exec_fair@basic-none-solo@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs0:
    - shard-rkl:          [FAIL][75] ([i915#2842]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-1/igt@gem_exec_fair@basic-pace@vcs0.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-6/igt@gem_exec_fair@basic-pace@vcs0.html

  * igt@i915_pm_rc6_residency@rc6-idle@bcs0:
    - {shard-dg1}:        [FAIL][77] ([i915#3591]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg1-15/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg1-16/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html

  * igt@i915_pm_rpm@dpms-mode-unset-lpsp:
    - {shard-dg1}:        [SKIP][79] ([i915#1397]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg1-13/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg1-19/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@i915_pm_rpm@system-suspend:
    - {shard-dg1}:        [DMESG-WARN][81] ([i915#4391]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg1-19/igt@i915_pm_rpm@system-suspend.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg1-13/igt@i915_pm_rpm@system-suspend.html

  * igt@i915_selftest@live@slpc:
    - shard-mtlp:         [DMESG-WARN][83] ([i915#6367]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-3/igt@i915_selftest@live@slpc.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@i915_selftest@live@slpc.html

  * igt@i915_suspend@debugfs-reader:
    - shard-dg2:          [FAIL][85] ([fdo#103375] / [i915#6121]) -> [PASS][86] +1 similar issue
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-5/igt@i915_suspend@debugfs-reader.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-2/igt@i915_suspend@debugfs-reader.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-mtlp:         [FAIL][87] ([i915#5138]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-1/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-7/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-rkl:          [FAIL][89] ([i915#3743]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-7/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-2/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_cursor_legacy@single-bo@all-pipes:
    - shard-mtlp:         [DMESG-WARN][91] ([i915#2017]) -> [PASS][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-4/igt@kms_cursor_legacy@single-bo@all-pipes.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-1/igt@kms_cursor_legacy@single-bo@all-pipes.html

  * igt@kms_vblank@pipe-d-wait-forked:
    - {shard-dg1}:        [DMESG-WARN][93] ([i915#4391] / [i915#4423]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg1-15/igt@kms_vblank@pipe-d-wait-forked.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg1-19/igt@kms_vblank@pipe-d-wait-forked.html

  
#### Warnings ####

  * igt@gem_exec_whisper@basic-contexts-forked-all:
    - shard-mtlp:         [ABORT][95] ([i915#8131]) -> [TIMEOUT][96] ([i915#8628])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-7/igt@gem_exec_whisper@basic-contexts-forked-all.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@gem_exec_whisper@basic-contexts-forked-all.html

  * igt@gem_exec_whisper@basic-contexts-priority-all:
    - shard-mtlp:         [FAIL][97] ([i915#6363]) -> [ABORT][98] ([i915#8131])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-8/igt@gem_exec_whisper@basic-contexts-priority-all.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-6/igt@gem_exec_whisper@basic-contexts-priority-all.html

  * igt@kms_async_flips@crc@pipe-c-edp-1:
    - shard-mtlp:         [DMESG-FAIL][99] ([i915#8561]) -> [FAIL][100] ([i915#8247])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-2/igt@kms_async_flips@crc@pipe-c-edp-1.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_async_flips@crc@pipe-c-edp-1.html

  * igt@kms_async_flips@crc@pipe-d-edp-1:
    - shard-mtlp:         [FAIL][101] ([i915#8247]) -> [DMESG-FAIL][102] ([i915#8561]) +1 similar issue
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-mtlp-2/igt@kms_async_flips@crc@pipe-d-edp-1.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-mtlp-4/igt@kms_async_flips@crc@pipe-d-edp-1.html

  * igt@kms_content_protection@content_type_change:
    - shard-dg2:          [SKIP][103] ([i915#7118]) -> [SKIP][104] ([i915#7118] / [i915#7162])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-8/igt@kms_content_protection@content_type_change.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-12/igt@kms_content_protection@content_type_change.html

  * igt@kms_dsc@dsc-with-formats:
    - shard-tglu:         [SKIP][105] ([i915#3555]) -> [SKIP][106] ([i915#3555] / [i915#3840]) +1 similar issue
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-tglu-4/igt@kms_dsc@dsc-with-formats.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-tglu-4/igt@kms_dsc@dsc-with-formats.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-dg2:          [SKIP][107] ([i915#3555]) -> [SKIP][108] ([i915#3555] / [i915#3840]) +1 similar issue
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-dg2-1/igt@kms_dsc@dsc-with-output-formats.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-dg2-9/igt@kms_dsc@dsc-with-output-formats.html
    - shard-rkl:          [SKIP][109] ([i915#3555]) -> [SKIP][110] ([i915#3555] / [i915#3840]) +1 similar issue
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-6/igt@kms_dsc@dsc-with-output-formats.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-7/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-rkl:          [SKIP][111] ([i915#4816]) -> [SKIP][112] ([i915#4070] / [i915#4816])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13355/shard-rkl-7/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/shard-rkl-2/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1339]: https://gitlab.freedesktop.org/drm/intel/issues/1339
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1937]: https://gitlab.freedesktop.org/drm/intel/issues/1937
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2017]: https://gitlab.freedesktop.org/drm/intel/issues/2017
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#404]: https://gitlab.freedesktop.org/drm/intel/issues/404
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4087]: https://gitlab.freedesktop.org/drm/intel/issues/4087
  [i915#4281]: https://gitlab.freedesktop.org/drm/intel/issues/4281
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4423]: https://gitlab.freedesktop.org/drm/intel/issues/4423
  [i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
  [i915#5138]: https://gitlab.freedesktop.org/drm/intel/issues/5138
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
  [i915#5566]: https://gitlab.freedesktop.org/drm/intel/issues/5566
  [i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
  [i915#6032]: https://gitlab.freedesktop.org/drm/intel/issues/6032
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6121]: https://gitlab.freedesktop.org/drm/intel/issues/6121
  [i915#6363]: https://gitlab.freedesktop.org/drm/intel/issues/6363
  [i915#6367]: https://gitlab.freedesktop.org/drm/intel/issues/6367
  [i915#7059]: https://gitlab.freedesktop.org/drm/intel/issues/7059
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7162]: https://gitlab.freedesktop.org/drm/intel/issues/7162
  [i915#7213]: https://gitlab.freedesktop.org/drm/intel/issues/7213
  [i915#7742]: https://gitlab.freedesktop.org/drm/intel/issues/7742
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7892]: https://gitlab.freedesktop.org/drm/intel/issues/7892
  [i915#7975]: https://gitlab.freedesktop.org/drm/intel/issues/7975
  [i915#8131]: https://gitlab.freedesktop.org/drm/intel/issues/8131
  [i915#8211]: https://gitlab.freedesktop.org/drm/intel/issues/8211
  [i915#8213]: https://gitlab.freedesktop.org/drm/intel/issues/8213
  [i915#8224]: https://gitlab.freedesktop.org/drm/intel/issues/8224
  [i915#8234]: https://gitlab.freedesktop.org/drm/intel/issues/8234
  [i915#8247]: https://gitlab.freedesktop.org/drm/intel/issues/8247
  [i915#8292]: https://gitlab.freedesktop.org/drm/intel/issues/8292
  [i915#8414]: https://gitlab.freedesktop.org/drm/intel/issues/8414
  [i915#8489]: https://gitlab.freedesktop.org/drm/intel/issues/8489
  [i915#8502]: https://gitlab.freedesktop.org/drm/intel/issues/8502
  [i915#8561]: https://gitlab.freedesktop.org/drm/intel/issues/8561
  [i915#8628]: https://gitlab.freedesktop.org/drm/intel/issues/8628
  [i915#8668]: https://gitlab.freedesktop.org/drm/intel/issues/8668
  [i915#8704]: https://gitlab.freedesktop.org/drm/intel/issues/8704
  [i915#8709]: https://gitlab.freedesktop.org/drm/intel/issues/8709
  [i915#8814]: https://gitlab.freedesktop.org/drm/intel/issues/8814


Build changes
-------------

  * Linux: CI_DRM_13355 -> Patchwork_119082v4

  CI-20190529: 20190529
  CI_DRM_13355: 8f40aae3b99ac28dd81d00933f5dc9124dbfc881 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7377: d1574543ba4bb322597345530053475c07be0eb9 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_119082v4: 8f40aae3b99ac28dd81d00933f5dc9124dbfc881 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_119082v4/index.html

[-- Attachment #2: Type: text/html, Size: 34555 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
  (?)
@ 2023-07-10 10:44   ` Iddamsetty, Aravind
  2023-07-10 13:20     ` Tvrtko Ursulin
  -1 siblings, 1 reply; 33+ messages in thread
From: Iddamsetty, Aravind @ 2023-07-10 10:44 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel



On 07-07-2023 18:32, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> In order to show per client memory usage lets add some infrastructure
> which enables tracking buffer objects owned by clients.
> 
> We add a per client list protected by a new per client lock and to support
> delayed destruction (post client exit) we make tracked objects hold
> references to the owning client.
> 
> Also, object memory region teardown is moved to the existing RCU free
> callback to allow safe dereference from the fdinfo RCU read section.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
>  .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
>  drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
>  drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
>  4 files changed, 90 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 97ac6fb37958..3dc4fbb67d2b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -105,6 +105,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
>  
>  	INIT_LIST_HEAD(&obj->mm.link);
>  
> +#ifdef CONFIG_PROC_FS
> +	INIT_LIST_HEAD(&obj->client_link);
> +#endif
> +
>  	INIT_LIST_HEAD(&obj->lut_list);
>  	spin_lock_init(&obj->lut_lock);
>  
> @@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct rcu_head *head)
>  		container_of(head, typeof(*obj), rcu);
>  	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>  
> +	/* We need to keep this alive for RCU read access from fdinfo. */
> +	if (obj->mm.n_placements > 1)
> +		kfree(obj->mm.placements);
> +
>  	i915_gem_object_free(obj);
>  
>  	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
> @@ -388,9 +396,6 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
>  	if (obj->ops->release)
>  		obj->ops->release(obj);
>  
> -	if (obj->mm.n_placements > 1)
> -		kfree(obj->mm.placements);
> -
>  	if (obj->shares_resv_from)
>  		i915_vm_resv_put(obj->shares_resv_from);
>  
> @@ -441,6 +446,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
>  
>  	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
>  
> +	i915_drm_client_remove_object(obj);
> +
>  	/*
>  	 * Before we free the object, make sure any pure RCU-only
>  	 * read-side critical sections are complete, e.g.
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> index e72c57716bee..8de2b91b3edf 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
> @@ -300,6 +300,18 @@ struct drm_i915_gem_object {
>  	 */
>  	struct i915_address_space *shares_resv_from;
>  
> +#ifdef CONFIG_PROC_FS
> +	/**
> +	 * @client: @i915_drm_client which created the object
> +	 */
> +	struct i915_drm_client *client;
> +
> +	/**
> +	 * @client_link: Link into @i915_drm_client.objects_list
> +	 */
> +	struct list_head client_link;
> +#endif
> +
>  	union {
>  		struct rcu_head rcu;
>  		struct llist_node freed;
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
> index 2a44b3876cb5..2e5e69edc0f9 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.c
> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> @@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
>  	kref_init(&client->kref);
>  	spin_lock_init(&client->ctx_lock);
>  	INIT_LIST_HEAD(&client->ctx_list);
> +#ifdef CONFIG_PROC_FS
> +	spin_lock_init(&client->objects_lock);
> +	INIT_LIST_HEAD(&client->objects_list);
> +#endif
>  
>  	return client;
>  }
> @@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
>  	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
>  		show_client_class(p, i915, file_priv->client, i);
>  }
> +
> +void i915_drm_client_add_object(struct i915_drm_client *client,
> +				struct drm_i915_gem_object *obj)
> +{
> +	unsigned long flags;
> +
> +	GEM_WARN_ON(obj->client);
> +	GEM_WARN_ON(!list_empty(&obj->client_link));
> +
> +	spin_lock_irqsave(&client->objects_lock, flags);
> +	obj->client = i915_drm_client_get(client);
> +	list_add_tail_rcu(&obj->client_link, &client->objects_list);
> +	spin_unlock_irqrestore(&client->objects_lock, flags);
> +}

Would it be nice to mention that we use this client infra only to track
internal objects, while the user-created objects are tracked through
file->object_idr, added at handle creation time?
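
Purely as an illustrative sketch and not part of this series, walking
those user objects needs nothing more than the standard struct drm_file
fields ("file" standing for the drm_file in question):

  struct drm_gem_object *obj;
  u64 user_total = 0;
  int id;

  /* object_idr holds every GEM handle created by this client. */
  spin_lock(&file->table_lock);
  idr_for_each_entry(&file->object_idr, obj, id)
          user_total += obj->size;
  spin_unlock(&file->table_lock);

So the list added in this patch is what covers objects which never get a
user handle.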

Thanks,
Aravind.
> +
> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
> +{
> +	struct i915_drm_client *client = fetch_and_zero(&obj->client);
> +	unsigned long flags;
> +
> +	/* Object may not be associated with a client. */
> +	if (!client)
> +		return false;
> +
> +	spin_lock_irqsave(&client->objects_lock, flags);
> +	list_del_rcu(&obj->client_link);
> +	spin_unlock_irqrestore(&client->objects_lock, flags);
> +
> +	i915_drm_client_put(client);
> +
> +	return true;
> +}
>  #endif
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
> index 67816c912bca..5f58fdf7dcb8 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.h
> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
> @@ -12,6 +12,9 @@
>  
>  #include <uapi/drm/i915_drm.h>
>  
> +#include "i915_file_private.h"
> +#include "gem/i915_gem_object_types.h"
> +
>  #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>  
>  struct drm_file;
> @@ -25,6 +28,20 @@ struct i915_drm_client {
>  	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
>  	struct list_head ctx_list; /* List of contexts belonging to client. */
>  
> +#ifdef CONFIG_PROC_FS
> +	/**
> +	 * @objects_lock: lock protecting @objects_list
> +	 */
> +	spinlock_t objects_lock;
> +
> +	/**
> +	 * @objects_list: list of objects created by this client
> +	 *
> +	 * Protected by @objects_lock.
> +	 */
> +	struct list_head objects_list;
> +#endif
> +
>  	/**
>  	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
>  	 */
> @@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
>  
>  void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
>  
> +#ifdef CONFIG_PROC_FS
> +void i915_drm_client_add_object(struct i915_drm_client *client,
> +				struct drm_i915_gem_object *obj);
> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
> +#else
> +static inline void i915_drm_client_add_object(struct i915_drm_client *client,
> +					      struct drm_i915_gem_object *obj)
> +{
> +}
> +
> +static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
> +{
> +	return false;
> +}
> +#endif
> +
>  #endif /* !__I915_DRM_CLIENT_H__ */

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-10 10:44   ` Iddamsetty, Aravind
@ 2023-07-10 13:20     ` Tvrtko Ursulin
  2023-07-11  7:48       ` Iddamsetty, Aravind
  0 siblings, 1 reply; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-10 13:20 UTC (permalink / raw)
  To: Iddamsetty, Aravind, Intel-gfx, dri-devel


On 10/07/2023 11:44, Iddamsetty, Aravind wrote:
> On 07-07-2023 18:32, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> In order to show per client memory usage lets add some infrastructure
>> which enables tracking buffer objects owned by clients.
>>
>> We add a per client list protected by a new per client lock and to support
>> delayed destruction (post client exit) we make tracked objects hold
>> references to the owning client.
>>
>> Also, object memory region teardown is moved to the existing RCU free
>> callback to allow safe dereference from the fdinfo RCU read section.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
>>   .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
>>   drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
>>   drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
>>   4 files changed, 90 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>> index 97ac6fb37958..3dc4fbb67d2b 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>> @@ -105,6 +105,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
>>   
>>   	INIT_LIST_HEAD(&obj->mm.link);
>>   
>> +#ifdef CONFIG_PROC_FS
>> +	INIT_LIST_HEAD(&obj->client_link);
>> +#endif
>> +
>>   	INIT_LIST_HEAD(&obj->lut_list);
>>   	spin_lock_init(&obj->lut_lock);
>>   
>> @@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct rcu_head *head)
>>   		container_of(head, typeof(*obj), rcu);
>>   	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>   
>> +	/* We need to keep this alive for RCU read access from fdinfo. */
>> +	if (obj->mm.n_placements > 1)
>> +		kfree(obj->mm.placements);
>> +
>>   	i915_gem_object_free(obj);
>>   
>>   	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
>> @@ -388,9 +396,6 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
>>   	if (obj->ops->release)
>>   		obj->ops->release(obj);
>>   
>> -	if (obj->mm.n_placements > 1)
>> -		kfree(obj->mm.placements);
>> -
>>   	if (obj->shares_resv_from)
>>   		i915_vm_resv_put(obj->shares_resv_from);
>>   
>> @@ -441,6 +446,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
>>   
>>   	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
>>   
>> +	i915_drm_client_remove_object(obj);
>> +
>>   	/*
>>   	 * Before we free the object, make sure any pure RCU-only
>>   	 * read-side critical sections are complete, e.g.
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> index e72c57716bee..8de2b91b3edf 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>> @@ -300,6 +300,18 @@ struct drm_i915_gem_object {
>>   	 */
>>   	struct i915_address_space *shares_resv_from;
>>   
>> +#ifdef CONFIG_PROC_FS
>> +	/**
>> +	 * @client: @i915_drm_client which created the object
>> +	 */
>> +	struct i915_drm_client *client;
>> +
>> +	/**
>> +	 * @client_link: Link into @i915_drm_client.objects_list
>> +	 */
>> +	struct list_head client_link;
>> +#endif
>> +
>>   	union {
>>   		struct rcu_head rcu;
>>   		struct llist_node freed;
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
>> index 2a44b3876cb5..2e5e69edc0f9 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>> @@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
>>   	kref_init(&client->kref);
>>   	spin_lock_init(&client->ctx_lock);
>>   	INIT_LIST_HEAD(&client->ctx_list);
>> +#ifdef CONFIG_PROC_FS
>> +	spin_lock_init(&client->objects_lock);
>> +	INIT_LIST_HEAD(&client->objects_list);
>> +#endif
>>   
>>   	return client;
>>   }
>> @@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
>>   	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
>>   		show_client_class(p, i915, file_priv->client, i);
>>   }
>> +
>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>> +				struct drm_i915_gem_object *obj)
>> +{
>> +	unsigned long flags;
>> +
>> +	GEM_WARN_ON(obj->client);
>> +	GEM_WARN_ON(!list_empty(&obj->client_link));
>> +
>> +	spin_lock_irqsave(&client->objects_lock, flags);
>> +	obj->client = i915_drm_client_get(client);
>> +	list_add_tail_rcu(&obj->client_link, &client->objects_list);
>> +	spin_unlock_irqrestore(&client->objects_lock, flags);
>> +}
> 
> would it be nice to mention that we use this client infra only to track
> internal objects. While the user created through file->object_idr added
> during handle creation time.

In this series it is indeed only used for that.

But it would be nicer to use it to track everything, so fdinfo readers 
would not be hitting the idr lock, which would avoid injecting latency 
to real DRM clients.
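
Something along these lines on the fdinfo read side (only a sketch built
on the fields this patch adds, with "client" being the i915_drm_client;
the real version would of course bucket by memory region and object
state):

  struct drm_i915_gem_object *obj;
  u64 total = 0;

  rcu_read_lock();
  list_for_each_entry_rcu(obj, &client->objects_list, client_link)
          total += obj->base.size;
  rcu_read_unlock();

touches neither the idr nor any per client lock on the read side.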

The only fly in the ointment IMO is that I needed that drm core helper 
to be able to track dmabuf imports. Possibly something for flink too, 
did not look into that yet.
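
To sketch the kind of hook I mean (the callback name and placement are
purely hypothetical, nothing like this exists in drm core today):

  /* drm_prime.c, once the handle for the imported buffer exists */
  if (dev->driver->gem_handle_imported)  /* hypothetical driver callback */
          dev->driver->gem_handle_imported(file_priv, obj);

which would let the driver associate the imported object with the
importing client.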

In the light of all that I can mention it in the cover letter next time 
round. It is a bit stale anyway (the cover letter).

Regards,

Tvrtko

>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>> +{
>> +	struct i915_drm_client *client = fetch_and_zero(&obj->client);
>> +	unsigned long flags;
>> +
>> +	/* Object may not be associated with a client. */
>> +	if (!client)
>> +		return false;
>> +
>> +	spin_lock_irqsave(&client->objects_lock, flags);
>> +	list_del_rcu(&obj->client_link);
>> +	spin_unlock_irqrestore(&client->objects_lock, flags);
>> +
>> +	i915_drm_client_put(client);
>> +
>> +	return true;
>> +}
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
>> index 67816c912bca..5f58fdf7dcb8 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.h
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>> @@ -12,6 +12,9 @@
>>   
>>   #include <uapi/drm/i915_drm.h>
>>   
>> +#include "i915_file_private.h"
>> +#include "gem/i915_gem_object_types.h"
>> +
>>   #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>>   
>>   struct drm_file;
>> @@ -25,6 +28,20 @@ struct i915_drm_client {
>>   	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
>>   	struct list_head ctx_list; /* List of contexts belonging to client. */
>>   
>> +#ifdef CONFIG_PROC_FS
>> +	/**
>> +	 * @objects_lock: lock protecting @objects_list
>> +	 */
>> +	spinlock_t objects_lock;
>> +
>> +	/**
>> +	 * @objects_list: list of objects created by this client
>> +	 *
>> +	 * Protected by @objects_lock.
>> +	 */
>> +	struct list_head objects_list;
>> +#endif
>> +
>>   	/**
>>   	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
>>   	 */
>> @@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
>>   
>>   void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
>>   
>> +#ifdef CONFIG_PROC_FS
>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>> +				struct drm_i915_gem_object *obj);
>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
>> +#else
>> +static inline void i915_drm_client_add_object(struct i915_drm_client *client,
>> +					      struct drm_i915_gem_object *obj)
>> +{
>> +}
>> +
>> +static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>> +{
>> +	return false;
>> +}
>> +#endif
>> +
>>   #endif /* !__I915_DRM_CLIENT_H__ */

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-10 13:20     ` Tvrtko Ursulin
@ 2023-07-11  7:48       ` Iddamsetty, Aravind
  2023-07-11  9:39         ` Tvrtko Ursulin
  0 siblings, 1 reply; 33+ messages in thread
From: Iddamsetty, Aravind @ 2023-07-11  7:48 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel



On 10-07-2023 18:50, Tvrtko Ursulin wrote:
> 
> On 10/07/2023 11:44, Iddamsetty, Aravind wrote:
>> On 07-07-2023 18:32, Tvrtko Ursulin wrote:
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> In order to show per client memory usage lets add some infrastructure
>>> which enables tracking buffer objects owned by clients.
>>>
>>> We add a per client list protected by a new per client lock and to
>>> support
>>> delayed destruction (post client exit) we make tracked objects hold
>>> references to the owning client.
>>>
>>> Also, object memory region teardown is moved to the existing RCU free
>>> callback to allow safe dereference from the fdinfo RCU read section.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
>>>   .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
>>>   drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
>>>   drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
>>>   4 files changed, 90 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>> b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>> index 97ac6fb37958..3dc4fbb67d2b 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>> @@ -105,6 +105,10 @@ void i915_gem_object_init(struct
>>> drm_i915_gem_object *obj,
>>>         INIT_LIST_HEAD(&obj->mm.link);
>>>   +#ifdef CONFIG_PROC_FS
>>> +    INIT_LIST_HEAD(&obj->client_link);
>>> +#endif
>>> +
>>>       INIT_LIST_HEAD(&obj->lut_list);
>>>       spin_lock_init(&obj->lut_lock);
>>>   @@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct
>>> rcu_head *head)
>>>           container_of(head, typeof(*obj), rcu);
>>>       struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>>   +    /* We need to keep this alive for RCU read access from fdinfo. */
>>> +    if (obj->mm.n_placements > 1)
>>> +        kfree(obj->mm.placements);
>>> +
>>>       i915_gem_object_free(obj);
>>>         GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
>>> @@ -388,9 +396,6 @@ void __i915_gem_free_object(struct
>>> drm_i915_gem_object *obj)
>>>       if (obj->ops->release)
>>>           obj->ops->release(obj);
>>>   -    if (obj->mm.n_placements > 1)
>>> -        kfree(obj->mm.placements);
>>> -
>>>       if (obj->shares_resv_from)
>>>           i915_vm_resv_put(obj->shares_resv_from);
>>>   @@ -441,6 +446,8 @@ static void i915_gem_free_object(struct
>>> drm_gem_object *gem_obj)
>>>         GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
>>>   +    i915_drm_client_remove_object(obj);
>>> +
>>>       /*
>>>        * Before we free the object, make sure any pure RCU-only
>>>        * read-side critical sections are complete, e.g.
>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> index e72c57716bee..8de2b91b3edf 100644
>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>> @@ -300,6 +300,18 @@ struct drm_i915_gem_object {
>>>        */
>>>       struct i915_address_space *shares_resv_from;
>>>   +#ifdef CONFIG_PROC_FS
>>> +    /**
>>> +     * @client: @i915_drm_client which created the object
>>> +     */
>>> +    struct i915_drm_client *client;
>>> +
>>> +    /**
>>> +     * @client_link: Link into @i915_drm_client.objects_list
>>> +     */
>>> +    struct list_head client_link;
>>> +#endif
>>> +
>>>       union {
>>>           struct rcu_head rcu;
>>>           struct llist_node freed;
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
>>> b/drivers/gpu/drm/i915/i915_drm_client.c
>>> index 2a44b3876cb5..2e5e69edc0f9 100644
>>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>>> @@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
>>>       kref_init(&client->kref);
>>>       spin_lock_init(&client->ctx_lock);
>>>       INIT_LIST_HEAD(&client->ctx_list);
>>> +#ifdef CONFIG_PROC_FS
>>> +    spin_lock_init(&client->objects_lock);
>>> +    INIT_LIST_HEAD(&client->objects_list);
>>> +#endif
>>>         return client;
>>>   }
>>> @@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer
>>> *p, struct drm_file *file)
>>>       for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
>>>           show_client_class(p, i915, file_priv->client, i);
>>>   }
>>> +
>>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>>> +                struct drm_i915_gem_object *obj)
>>> +{
>>> +    unsigned long flags;
>>> +
>>> +    GEM_WARN_ON(obj->client);
>>> +    GEM_WARN_ON(!list_empty(&obj->client_link));
>>> +
>>> +    spin_lock_irqsave(&client->objects_lock, flags);
>>> +    obj->client = i915_drm_client_get(client);
>>> +    list_add_tail_rcu(&obj->client_link, &client->objects_list);
>>> +    spin_unlock_irqrestore(&client->objects_lock, flags);
>>> +}
>>
>> would it be nice to mention that we use this client infra only to track
>> internal objects. While the user created through file->object_idr added
>> during handle creation time.
> 
> In this series it is indeed only used for that.
> 
> But it would be nicer to use it to track everything, so fdinfo readers
> would not be hitting the idr lock, which would avoid injecting latency
> to real DRM clients.
> 
> The only fly in the ointment IMO is that I needed that drm core helper
> to be able to track dmabuf imports. Possibly something for flink too,
> did not look into that yet.

Wouldn't dmabuf be tracked via object_idr, as a new handle is created for it?

Thanks,
Aravind.
> 
> In the light of all that I can mention in the cover letter next time
> round. It is a bit stale anyway (the cover letter).
> 
> Regards,
> 
> Tvrtko
> 
>>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>>> +{
>>> +    struct i915_drm_client *client = fetch_and_zero(&obj->client);
>>> +    unsigned long flags;
>>> +
>>> +    /* Object may not be associated with a client. */
>>> +    if (!client)
>>> +        return false;
>>> +
>>> +    spin_lock_irqsave(&client->objects_lock, flags);
>>> +    list_del_rcu(&obj->client_link);
>>> +    spin_unlock_irqrestore(&client->objects_lock, flags);
>>> +
>>> +    i915_drm_client_put(client);
>>> +
>>> +    return true;
>>> +}
>>>   #endif
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h
>>> b/drivers/gpu/drm/i915/i915_drm_client.h
>>> index 67816c912bca..5f58fdf7dcb8 100644
>>> --- a/drivers/gpu/drm/i915/i915_drm_client.h
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>>> @@ -12,6 +12,9 @@
>>>     #include <uapi/drm/i915_drm.h>
>>>   +#include "i915_file_private.h"
>>> +#include "gem/i915_gem_object_types.h"
>>> +
>>>   #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>>>     struct drm_file;
>>> @@ -25,6 +28,20 @@ struct i915_drm_client {
>>>       spinlock_t ctx_lock; /* For add/remove from ctx_list. */
>>>       struct list_head ctx_list; /* List of contexts belonging to
>>> client. */
>>>   +#ifdef CONFIG_PROC_FS
>>> +    /**
>>> +     * @objects_lock: lock protecting @objects_list
>>> +     */
>>> +    spinlock_t objects_lock;
>>> +
>>> +    /**
>>> +     * @objects_list: list of objects created by this client
>>> +     *
>>> +     * Protected by @objects_lock.
>>> +     */
>>> +    struct list_head objects_list;
>>> +#endif
>>> +
>>>       /**
>>>        * @past_runtime: Accumulation of pphwsp runtimes from closed
>>> contexts.
>>>        */
>>> @@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
>>>     void i915_drm_client_fdinfo(struct drm_printer *p, struct
>>> drm_file *file);
>>>   +#ifdef CONFIG_PROC_FS
>>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>>> +                struct drm_i915_gem_object *obj);
>>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
>>> +#else
>>> +static inline void i915_drm_client_add_object(struct i915_drm_client
>>> *client,
>>> +                          struct drm_i915_gem_object *obj)
>>> +{
>>> +}
>>> +
>>> +static inline bool i915_drm_client_remove_object(struct
>>> drm_i915_gem_object *obj)
>>> +{
>>> +}
>>> +#endif
>>> +
>>>   #endif /* !__I915_DRM_CLIENT_H__ */

^ permalink raw reply	[flat|nested] 33+ messages in thread
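
For illustration (this sketch is not from the posted patches): the list above
is manipulated with list_add_tail_rcu()/list_del_rcu() under the spinlock so
that a reader can walk client->objects_list under rcu_read_lock() alone,
taking a reference on each object before touching it, along the lines of the
fdinfo code later in this series. Names follow the patches; the actual stats
accumulation is elided.

        struct drm_i915_gem_object *obj;
        struct list_head *pos;

        rcu_read_lock();
        list_for_each_rcu(pos, &client->objects_list) {
                obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
                                                         client_link));
                if (!obj)       /* raced with the final object put, skip it */
                        continue;

                /* ... accumulate obj->base.size into the memory stats ... */

                i915_gem_object_put(obj);
        }
        rcu_read_unlock();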

* Re: [PATCH 2/5] drm/i915: Record which client owns a VM
  2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-07-11  9:08     ` Iddamsetty, Aravind
  -1 siblings, 0 replies; 33+ messages in thread
From: Iddamsetty, Aravind @ 2023-07-11  9:08 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin



On 07-07-2023 18:32, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> To enable accounting of indirect client memory usage (such as page tables)
> in the following patch, lets start recording the creator of each PPGTT.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c       | 11 ++++++++---
>  drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  3 +++
>  drivers/gpu/drm/i915/gem/selftests/mock_context.c |  4 ++--
>  drivers/gpu/drm/i915/gt/intel_gtt.h               |  1 +
>  4 files changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 9a9ff84c90d7..35cf6608180e 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -279,7 +279,8 @@ static int proto_context_set_protected(struct drm_i915_private *i915,
>  }
>  
>  static struct i915_gem_proto_context *
> -proto_context_create(struct drm_i915_private *i915, unsigned int flags)
> +proto_context_create(struct drm_i915_file_private *fpriv,
> +		     struct drm_i915_private *i915, unsigned int flags)
>  {
>  	struct i915_gem_proto_context *pc, *err;
>  
> @@ -287,6 +288,7 @@ proto_context_create(struct drm_i915_private *i915, unsigned int flags)
>  	if (!pc)
>  		return ERR_PTR(-ENOMEM);
>  
> +	pc->fpriv = fpriv;
>  	pc->num_user_engines = -1;
>  	pc->user_engines = NULL;
>  	pc->user_flags = BIT(UCONTEXT_BANNABLE) |
> @@ -1621,6 +1623,7 @@ i915_gem_create_context(struct drm_i915_private *i915,
>  			err = PTR_ERR(ppgtt);
>  			goto err_ctx;
>  		}
> +		ppgtt->vm.fpriv = pc->fpriv;
>  		vm = &ppgtt->vm;
>  	}
>  	if (vm)
> @@ -1740,7 +1743,7 @@ int i915_gem_context_open(struct drm_i915_private *i915,
>  	/* 0 reserved for invalid/unassigned ppgtt */
>  	xa_init_flags(&file_priv->vm_xa, XA_FLAGS_ALLOC1);
>  
> -	pc = proto_context_create(i915, 0);
> +	pc = proto_context_create(file_priv, i915, 0);
>  	if (IS_ERR(pc)) {
>  		err = PTR_ERR(pc);
>  		goto err;
> @@ -1822,6 +1825,7 @@ int i915_gem_vm_create_ioctl(struct drm_device *dev, void *data,
>  
>  	GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
>  	args->vm_id = id;
> +	ppgtt->vm.fpriv = file_priv;
>  	return 0;
>  
>  err_put:
> @@ -2284,7 +2288,8 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
>  		return -EIO;
>  	}
>  
> -	ext_data.pc = proto_context_create(i915, args->flags);
> +	ext_data.pc = proto_context_create(file->driver_priv, i915,
> +					   args->flags);
>  	if (IS_ERR(ext_data.pc))
>  		return PTR_ERR(ext_data.pc);
>  
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index cb78214a7dcd..c573c067779f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -188,6 +188,9 @@ struct i915_gem_proto_engine {
>   * CONTEXT_CREATE_SET_PARAM during GEM_CONTEXT_CREATE.
>   */
>  struct i915_gem_proto_context {
> +	/** @fpriv: Client which creates the context */
> +	struct drm_i915_file_private *fpriv;
> +
>  	/** @vm: See &i915_gem_context.vm */
>  	struct i915_address_space *vm;
>  
> diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> index 8ac6726ec16b..125584ada282 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> @@ -83,7 +83,7 @@ live_context(struct drm_i915_private *i915, struct file *file)
>  	int err;
>  	u32 id;
>  
> -	pc = proto_context_create(i915, 0);
> +	pc = proto_context_create(fpriv, i915, 0);
>  	if (IS_ERR(pc))
>  		return ERR_CAST(pc);
>  
> @@ -152,7 +152,7 @@ kernel_context(struct drm_i915_private *i915,
>  	struct i915_gem_context *ctx;
>  	struct i915_gem_proto_context *pc;
>  
> -	pc = proto_context_create(i915, 0);
> +	pc = proto_context_create(NULL, i915, 0);
>  	if (IS_ERR(pc))
>  		return ERR_CAST(pc);
>  
> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
> index 4d6296cdbcfd..7192a534a654 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gtt.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
> @@ -248,6 +248,7 @@ struct i915_address_space {
>  	struct drm_mm mm;
>  	struct intel_gt *gt;
>  	struct drm_i915_private *i915;
> +	struct drm_i915_file_private *fpriv;
>  	struct device *dma;
>  	u64 total;		/* size addr space maps (ex. 2GB for ggtt) */
>  	u64 reserved;		/* size addr space reserved */

Looks good to me.
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>

Thanks,
Aravind.

^ permalink raw reply	[flat|nested] 33+ messages in thread
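
As a reading aid, nothing in the hunks above consumes the recorded owner yet;
the point only becomes visible in the following patch of the series, which can
then charge VM-internal allocations (page table backing store) to that owner.
Roughly, with the names used throughout the series:

        /* e.g. in alloc_pt_dma()/alloc_pt_lmem(), once the object exists: */
        if (!IS_ERR(obj) && vm->fpriv)
                i915_drm_client_add_object(vm->fpriv->client, obj);

Also note that kernel-internal contexts pass a NULL fpriv (see the
kernel_context() selftest hunk), so their VMs remain unattributed.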

* Re: [Intel-gfx] [PATCH 3/5] drm/i915: Track page table backing store usage
  2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
  (?)
@ 2023-07-11  9:08   ` Iddamsetty, Aravind
  -1 siblings, 0 replies; 33+ messages in thread
From: Iddamsetty, Aravind @ 2023-07-11  9:08 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel



On 07-07-2023 18:32, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Account page table backing store against the owning client memory usage
> stats.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_gtt.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
> index 2f6a9be0ffe6..126269a0d728 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gtt.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
> @@ -58,6 +58,9 @@ struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz)
>  	if (!IS_ERR(obj)) {
>  		obj->base.resv = i915_vm_resv_get(vm);
>  		obj->shares_resv_from = vm;
> +
> +		if (vm->fpriv)
> +			i915_drm_client_add_object(vm->fpriv->client, obj);
>  	}
>  
>  	return obj;
> @@ -79,6 +82,9 @@ struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz)
>  	if (!IS_ERR(obj)) {
>  		obj->base.resv = i915_vm_resv_get(vm);
>  		obj->shares_resv_from = vm;
> +
> +		if (vm->fpriv)
> +			i915_drm_client_add_object(vm->fpriv->client, obj);
>  	}
>  
>  	return obj;

Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>

Thanks,
Aravind.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 4/5] drm/i915: Account ring buffer and context state storage
  2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
  (?)
@ 2023-07-11  9:29   ` Iddamsetty, Aravind
  2023-07-11  9:44     ` Tvrtko Ursulin
  -1 siblings, 1 reply; 33+ messages in thread
From: Iddamsetty, Aravind @ 2023-07-11  9:29 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel



On 07-07-2023 18:32, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Account ring buffers and logical context space against the owning client
> memory usage stats.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_context.c | 13 +++++++++++++
>  drivers/gpu/drm/i915/i915_drm_client.c  | 10 ++++++++++
>  drivers/gpu/drm/i915/i915_drm_client.h  |  8 ++++++++
>  3 files changed, 31 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index a53b26178f0a..8a395b9201e9 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -6,6 +6,7 @@
>  #include "gem/i915_gem_context.h"
>  #include "gem/i915_gem_pm.h"
>  
> +#include "i915_drm_client.h"
>  #include "i915_drv.h"
>  #include "i915_trace.h"
>  
> @@ -50,6 +51,7 @@ intel_context_create(struct intel_engine_cs *engine)
>  
>  int intel_context_alloc_state(struct intel_context *ce)
>  {
> +	struct i915_gem_context *ctx;
>  	int err = 0;
>  
>  	if (mutex_lock_interruptible(&ce->pin_mutex))
> @@ -66,6 +68,17 @@ int intel_context_alloc_state(struct intel_context *ce)
>  			goto unlock;
>  
>  		set_bit(CONTEXT_ALLOC_BIT, &ce->flags);
> +
> +		rcu_read_lock();
> +		ctx = rcu_dereference(ce->gem_context);
> +		if (ctx && !kref_get_unless_zero(&ctx->ref))
> +			ctx = NULL;
> +		rcu_read_unlock();
> +		if (ctx) {
> +			if (ctx->client)
> +				i915_drm_client_add_context(ctx->client, ce);
> +			i915_gem_context_put(ctx);
> +		}
>  	}
>  
>  unlock:
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
> index 2e5e69edc0f9..ffccb6239789 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.c
> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> @@ -144,4 +144,14 @@ bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>  
>  	return true;
>  }
> +
> +void i915_drm_client_add_context(struct i915_drm_client *client,
> +				 struct intel_context *ce)

Do you think we can rename it to i915_drm_client_add_context_objects?

> +{
> +	if (ce->state)
> +		i915_drm_client_add_object(client, ce->state->obj);
> +
> +	if (ce->ring != ce->engine->legacy.ring && ce->ring->vma)
> +		i915_drm_client_add_object(client, ce->ring->vma->obj);
> +}
>  #endif
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
> index 5f58fdf7dcb8..39616b10a51f 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.h
> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
> @@ -14,6 +14,7 @@
>  
>  #include "i915_file_private.h"
>  #include "gem/i915_gem_object_types.h"
> +#include "gt/intel_context_types.h"
>  
>  #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>  
> @@ -70,6 +71,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
>  void i915_drm_client_add_object(struct i915_drm_client *client,
>  				struct drm_i915_gem_object *obj);
>  bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
> +void i915_drm_client_add_context(struct i915_drm_client *client,
> +				 struct intel_context *ce);
>  #else
>  static inline void i915_drm_client_add_object(struct i915_drm_client *client,
>  					      struct drm_i915_gem_object *obj)
> @@ -79,6 +82,11 @@ static inline void i915_drm_client_add_object(struct i915_drm_client *client,
>  static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>  {
>  }
> +
> +static inline void i915_drm_client_add_context(struct i915_drm_client *client,
> +					       struct intel_context *ce)
> +{
> +}
>  #endif
>  
>  #endif /* !__I915_DRM_CLIENT_H__ */

Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>

Thanks,
Aravind.

^ permalink raw reply	[flat|nested] 33+ messages in thread
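
A side note on the rcu_read_lock()/kref_get_unless_zero() sequence in
intel_context_alloc_state() above: it is the usual idiom for turning an
RCU-protected pointer into a strong reference when the pointed-to object may
be dying concurrently, and it only works if the final put defers the actual
free by an RCU grace period. A generic sketch of both halves of the idiom
(struct foo and the names below are illustrative, not i915 code):

        struct foo {
                struct kref ref;
                struct rcu_head rcu;
        };

        /* reader side: try to obtain a strong reference */
        rcu_read_lock();
        obj = rcu_dereference(slot);
        if (obj && !kref_get_unless_zero(&obj->ref))
                obj = NULL;     /* lost the race against the last put */
        rcu_read_unlock();

        /* release side: the last put must not free the memory immediately */
        static void foo_release(struct kref *ref)
        {
                struct foo *obj = container_of(ref, struct foo, ref);

                kfree_rcu(obj, rcu);    /* freed only after a grace period */
        }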

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-11  7:48       ` Iddamsetty, Aravind
@ 2023-07-11  9:39         ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-11  9:39 UTC (permalink / raw)
  To: Iddamsetty, Aravind, Intel-gfx, dri-devel


On 11/07/2023 08:48, Iddamsetty, Aravind wrote:
> On 10-07-2023 18:50, Tvrtko Ursulin wrote:
>>
>> On 10/07/2023 11:44, Iddamsetty, Aravind wrote:
>>> On 07-07-2023 18:32, Tvrtko Ursulin wrote:
>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>
>>>> In order to show per client memory usage lets add some infrastructure
>>>> which enables tracking buffer objects owned by clients.
>>>>
>>>> We add a per client list protected by a new per client lock and to
>>>> support
>>>> delayed destruction (post client exit) we make tracked objects hold
>>>> references to the owning client.
>>>>
>>>> Also, object memory region teardown is moved to the existing RCU free
>>>> callback to allow safe dereference from the fdinfo RCU read section.
>>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>> ---
>>>>    drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
>>>>    .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
>>>>    drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
>>>>    drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
>>>>    4 files changed, 90 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>>> b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>>> index 97ac6fb37958..3dc4fbb67d2b 100644
>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
>>>> @@ -105,6 +105,10 @@ void i915_gem_object_init(struct
>>>> drm_i915_gem_object *obj,
>>>>          INIT_LIST_HEAD(&obj->mm.link);
>>>>    +#ifdef CONFIG_PROC_FS
>>>> +    INIT_LIST_HEAD(&obj->client_link);
>>>> +#endif
>>>> +
>>>>        INIT_LIST_HEAD(&obj->lut_list);
>>>>        spin_lock_init(&obj->lut_lock);
>>>>    @@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct
>>>> rcu_head *head)
>>>>            container_of(head, typeof(*obj), rcu);
>>>>        struct drm_i915_private *i915 = to_i915(obj->base.dev);
>>>>    +    /* We need to keep this alive for RCU read access from fdinfo. */
>>>> +    if (obj->mm.n_placements > 1)
>>>> +        kfree(obj->mm.placements);
>>>> +
>>>>        i915_gem_object_free(obj);
>>>>          GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
>>>> @@ -388,9 +396,6 @@ void __i915_gem_free_object(struct
>>>> drm_i915_gem_object *obj)
>>>>        if (obj->ops->release)
>>>>            obj->ops->release(obj);
>>>>    -    if (obj->mm.n_placements > 1)
>>>> -        kfree(obj->mm.placements);
>>>> -
>>>>        if (obj->shares_resv_from)
>>>>            i915_vm_resv_put(obj->shares_resv_from);
>>>>    @@ -441,6 +446,8 @@ static void i915_gem_free_object(struct
>>>> drm_gem_object *gem_obj)
>>>>          GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
>>>>    +    i915_drm_client_remove_object(obj);
>>>> +
>>>>        /*
>>>>         * Before we free the object, make sure any pure RCU-only
>>>>         * read-side critical sections are complete, e.g.
>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>>> b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>>> index e72c57716bee..8de2b91b3edf 100644
>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>>>> @@ -300,6 +300,18 @@ struct drm_i915_gem_object {
>>>>         */
>>>>        struct i915_address_space *shares_resv_from;
>>>>    +#ifdef CONFIG_PROC_FS
>>>> +    /**
>>>> +     * @client: @i915_drm_client which created the object
>>>> +     */
>>>> +    struct i915_drm_client *client;
>>>> +
>>>> +    /**
>>>> +     * @client_link: Link into @i915_drm_client.objects_list
>>>> +     */
>>>> +    struct list_head client_link;
>>>> +#endif
>>>> +
>>>>        union {
>>>>            struct rcu_head rcu;
>>>>            struct llist_node freed;
>>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
>>>> b/drivers/gpu/drm/i915/i915_drm_client.c
>>>> index 2a44b3876cb5..2e5e69edc0f9 100644
>>>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>>>> @@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
>>>>        kref_init(&client->kref);
>>>>        spin_lock_init(&client->ctx_lock);
>>>>        INIT_LIST_HEAD(&client->ctx_list);
>>>> +#ifdef CONFIG_PROC_FS
>>>> +    spin_lock_init(&client->objects_lock);
>>>> +    INIT_LIST_HEAD(&client->objects_list);
>>>> +#endif
>>>>          return client;
>>>>    }
>>>> @@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer
>>>> *p, struct drm_file *file)
>>>>        for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
>>>>            show_client_class(p, i915, file_priv->client, i);
>>>>    }
>>>> +
>>>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>>>> +                struct drm_i915_gem_object *obj)
>>>> +{
>>>> +    unsigned long flags;
>>>> +
>>>> +    GEM_WARN_ON(obj->client);
>>>> +    GEM_WARN_ON(!list_empty(&obj->client_link));
>>>> +
>>>> +    spin_lock_irqsave(&client->objects_lock, flags);
>>>> +    obj->client = i915_drm_client_get(client);
>>>> +    list_add_tail_rcu(&obj->client_link, &client->objects_list);
>>>> +    spin_unlock_irqrestore(&client->objects_lock, flags);
>>>> +}
>>>
>>> Would it be nice to mention that we use this client infra only to track
>>> internal objects, while the user-created ones are tracked through
>>> file->object_idr, added at handle creation time?
>>
>> In this series it is indeed only used for that.
>>
>> But it would be nicer to use it to track everything, so fdinfo readers
>> would not be hitting the idr lock, which would avoid injecting latency
>> to real DRM clients.
>>
>> The only fly in the ointment IMO is that I needed that drm core helper
>> to be able to track dmabuf imports. Possibly something for flink too,
>> did not look into that yet.
> 
> wouldn't dmabuf be tracked via object_idr as a new handle is created for it.

Yes, it is/would be. I was talking about hypothetically not using
object_idr and instead tracking everything via the mechanism introduced
in this patch, which would allow lockless fdinfo reads for everything.
If you remember, I had that approach in an earlier version, but it
needed a patch to drm code to split the prime helpers (or so) and also
did not cover the question of how to handle flink.

Regards,

Tvrtko

> 
> Thanks,
> Aravind.
>>
>> In the light of all that I can mention in the cover letter next time
>> round. It is a bit stale anyway (the cover letter).
>>
>> Regards,
>>
>> Tvrtko
>>
>>>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>>>> +{
>>>> +    struct i915_drm_client *client = fetch_and_zero(&obj->client);
>>>> +    unsigned long flags;
>>>> +
>>>> +    /* Object may not be associated with a client. */
>>>> +    if (!client)
>>>> +        return false;
>>>> +
>>>> +    spin_lock_irqsave(&client->objects_lock, flags);
>>>> +    list_del_rcu(&obj->client_link);
>>>> +    spin_unlock_irqrestore(&client->objects_lock, flags);
>>>> +
>>>> +    i915_drm_client_put(client);
>>>> +
>>>> +    return true;
>>>> +}
>>>>    #endif
>>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h
>>>> b/drivers/gpu/drm/i915/i915_drm_client.h
>>>> index 67816c912bca..5f58fdf7dcb8 100644
>>>> --- a/drivers/gpu/drm/i915/i915_drm_client.h
>>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>>>> @@ -12,6 +12,9 @@
>>>>      #include <uapi/drm/i915_drm.h>
>>>>    +#include "i915_file_private.h"
>>>> +#include "gem/i915_gem_object_types.h"
>>>> +
>>>>    #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>>>>      struct drm_file;
>>>> @@ -25,6 +28,20 @@ struct i915_drm_client {
>>>>        spinlock_t ctx_lock; /* For add/remove from ctx_list. */
>>>>        struct list_head ctx_list; /* List of contexts belonging to
>>>> client. */
>>>>    +#ifdef CONFIG_PROC_FS
>>>> +    /**
>>>> +     * @objects_lock: lock protecting @objects_list
>>>> +     */
>>>> +    spinlock_t objects_lock;
>>>> +
>>>> +    /**
>>>> +     * @objects_list: list of objects created by this client
>>>> +     *
>>>> +     * Protected by @objects_lock.
>>>> +     */
>>>> +    struct list_head objects_list;
>>>> +#endif
>>>> +
>>>>        /**
>>>>         * @past_runtime: Accumulation of pphwsp runtimes from closed
>>>> contexts.
>>>>         */
>>>> @@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
>>>>      void i915_drm_client_fdinfo(struct drm_printer *p, struct
>>>> drm_file *file);
>>>>    +#ifdef CONFIG_PROC_FS
>>>> +void i915_drm_client_add_object(struct i915_drm_client *client,
>>>> +                struct drm_i915_gem_object *obj);
>>>> +bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
>>>> +#else
>>>> +static inline void i915_drm_client_add_object(struct i915_drm_client
>>>> *client,
>>>> +                          struct drm_i915_gem_object *obj)
>>>> +{
>>>> +}
>>>> +
>>>> +static inline bool i915_drm_client_remove_object(struct
>>>> drm_i915_gem_object *obj)
>>>> +{
>>>> +}
>>>> +#endif
>>>> +
>>>>    #endif /* !__I915_DRM_CLIENT_H__ */

^ permalink raw reply	[flat|nested] 33+ messages in thread
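
To make the "track everything" idea above concrete: it would amount to also
calling the new helper wherever a user-visible handle is created (GEM create,
dma-buf import and, if it were covered, flink), after which fdinfo would not
need to touch file->object_idr at all. This is purely a hypothetical sketch,
not something this series does; only the helper and fpriv->client come from
the patches, the function below is made up:

        /* hypothetical: attribute a user object when its handle is created */
        static void track_user_object(struct drm_file *file,
                                      struct drm_i915_gem_object *obj)
        {
                struct drm_i915_file_private *fpriv = file->driver_priv;

                /* obj has just been inserted into file->object_idr */
                i915_drm_client_add_object(fpriv->client, obj);
        }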

* Re: [Intel-gfx] [PATCH 4/5] drm/i915: Account ring buffer and context state storage
  2023-07-11  9:29   ` Iddamsetty, Aravind
@ 2023-07-11  9:44     ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-11  9:44 UTC (permalink / raw)
  To: Iddamsetty, Aravind, Intel-gfx, dri-devel


On 11/07/2023 10:29, Iddamsetty, Aravind wrote:
> 
> 
> On 07-07-2023 18:32, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Account ring buffers and logical context space against the owning client
>> memory usage stats.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/intel_context.c | 13 +++++++++++++
>>   drivers/gpu/drm/i915/i915_drm_client.c  | 10 ++++++++++
>>   drivers/gpu/drm/i915/i915_drm_client.h  |  8 ++++++++
>>   3 files changed, 31 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
>> index a53b26178f0a..8a395b9201e9 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_context.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
>> @@ -6,6 +6,7 @@
>>   #include "gem/i915_gem_context.h"
>>   #include "gem/i915_gem_pm.h"
>>   
>> +#include "i915_drm_client.h"
>>   #include "i915_drv.h"
>>   #include "i915_trace.h"
>>   
>> @@ -50,6 +51,7 @@ intel_context_create(struct intel_engine_cs *engine)
>>   
>>   int intel_context_alloc_state(struct intel_context *ce)
>>   {
>> +	struct i915_gem_context *ctx;
>>   	int err = 0;
>>   
>>   	if (mutex_lock_interruptible(&ce->pin_mutex))
>> @@ -66,6 +68,17 @@ int intel_context_alloc_state(struct intel_context *ce)
>>   			goto unlock;
>>   
>>   		set_bit(CONTEXT_ALLOC_BIT, &ce->flags);
>> +
>> +		rcu_read_lock();
>> +		ctx = rcu_dereference(ce->gem_context);
>> +		if (ctx && !kref_get_unless_zero(&ctx->ref))
>> +			ctx = NULL;
>> +		rcu_read_unlock();
>> +		if (ctx) {
>> +			if (ctx->client)
>> +				i915_drm_client_add_context(ctx->client, ce);
>> +			i915_gem_context_put(ctx);
>> +		}
>>   	}
>>   
>>   unlock:
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
>> index 2e5e69edc0f9..ffccb6239789 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>> @@ -144,4 +144,14 @@ bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>>   
>>   	return true;
>>   }
>> +
>> +void i915_drm_client_add_context(struct i915_drm_client *client,
>> +				 struct intel_context *ce)
> 
> do you think we can rename to i915_drm_client_add_context_objects?

I like it, will do, thanks!

Regards,

Tvrtko

> 
>> +{
>> +	if (ce->state)
>> +		i915_drm_client_add_object(client, ce->state->obj);
>> +
>> +	if (ce->ring != ce->engine->legacy.ring && ce->ring->vma)
>> +		i915_drm_client_add_object(client, ce->ring->vma->obj);
>> +}
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
>> index 5f58fdf7dcb8..39616b10a51f 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.h
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>> @@ -14,6 +14,7 @@
>>   
>>   #include "i915_file_private.h"
>>   #include "gem/i915_gem_object_types.h"
>> +#include "gt/intel_context_types.h"
>>   
>>   #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
>>   
>> @@ -70,6 +71,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
>>   void i915_drm_client_add_object(struct i915_drm_client *client,
>>   				struct drm_i915_gem_object *obj);
>>   bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
>> +void i915_drm_client_add_context(struct i915_drm_client *client,
>> +				 struct intel_context *ce);
>>   #else
>>   static inline void i915_drm_client_add_object(struct i915_drm_client *client,
>>   					      struct drm_i915_gem_object *obj)
>> @@ -79,6 +82,11 @@ static inline void i915_drm_client_add_object(struct i915_drm_client *client,
>>   static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
>>   {
>>   }
>> +
>> +static inline void i915_drm_client_add_context(struct i915_drm_client *client,
>> +					       struct intel_context *ce)
>> +{
>> +}
>>   #endif
>>   
>>   #endif /* !__I915_DRM_CLIENT_H__ */
> 
> Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
> 
> Thanks,
> Aravind.

^ permalink raw reply	[flat|nested] 33+ messages in thread
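
For reference, the rename agreed above would keep the body from this patch and
only change the helper's name (plus its declaration and the !CONFIG_PROC_FS
stub), roughly:

        void i915_drm_client_add_context_objects(struct i915_drm_client *client,
                                                 struct intel_context *ce)
        {
                if (ce->state)
                        i915_drm_client_add_object(client, ce->state->obj);

                if (ce->ring != ce->engine->legacy.ring && ce->ring->vma)
                        i915_drm_client_add_object(client, ce->ring->vma->obj);
        }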

* Re: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
  2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-08-24 11:35     ` Upadhyay, Tejas
  -1 siblings, 0 replies; 33+ messages in thread
From: Upadhyay, Tejas @ 2023-08-24 11:35 UTC (permalink / raw)
  To: Tvrtko Ursulin, Intel-gfx, dri-devel



> -----Original Message-----
> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Tvrtko
> Ursulin
> Sent: Friday, July 7, 2023 6:32 PM
> To: Intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> Subject: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats
> printing
> 
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Use the newly added drm_print_memory_stats helper to show memory
> utilisation of our objects in drm/driver specific fdinfo output.
> 
> To collect the stats we walk the per memory regions object lists and
> accumulate object size into the respective drm_memory_stats categories.
> 
> Objects with multiple possible placements are reported in multiple regions for
> total and shared sizes, while other categories are counted only for the
> currently active region.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
> Cc: Rob Clark <robdclark@gmail.com>
> ---
>  drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
>  1 file changed, 85 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
> b/drivers/gpu/drm/i915/i915_drm_client.c
> index ffccb6239789..5c77d6987d90 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.c
> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> @@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)  }
> 
>  #ifdef CONFIG_PROC_FS
> +static void
> +obj_meminfo(struct drm_i915_gem_object *obj,
> +	    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN]) {
> +	struct intel_memory_region *mr;
> +	u64 sz = obj->base.size;
> +	enum intel_region_id id;
> +	unsigned int i;
> +
> +	/* Attribute size and shared to all possible memory regions. */
> +	for (i = 0; i < obj->mm.n_placements; i++) {
> +		mr = obj->mm.placements[i];
> +		id = mr->id;
> +
> +		if (obj->base.handle_count > 1)
> +			stats[id].shared += sz;
> +		else
> +			stats[id].private += sz;
> +	}
> +
> +	/* Attribute other categories to only the current region. */
> +	mr = obj->mm.region;
> +	if (mr)
> +		id = mr->id;
> +	else
> +		id = INTEL_REGION_SMEM;
> +
> +	if (!obj->mm.n_placements) {
> +		if (obj->base.handle_count > 1)
> +			stats[id].shared += sz;
> +		else
> +			stats[id].private += sz;
> +	}
> +
> +	if (i915_gem_object_has_pages(obj)) {
> +		stats[id].resident += sz;
> +
> +		if (!dma_resv_test_signaled(obj->base.resv,
> +					    dma_resv_usage_rw(true)))

Should DMA_RESV_USAGE_BOOKKEEP not also be considered active (why only "rw")? Some app is syncing with syncjobs and has added a dma_fence with DMA_RESV_USAGE_BOOKKEEP during execbuf while that BO is still busy waiting on work!

Thanks,
Tejas
> +			stats[id].active += sz;
> +		else if (i915_gem_object_is_shrinkable(obj) &&
> +			 obj->mm.madv == I915_MADV_DONTNEED)
> +			stats[id].purgeable += sz;
> +	}
> +}
> +
> +static void show_meminfo(struct drm_printer *p, struct drm_file *file)
> +{
> +	struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
> +	struct drm_i915_file_private *fpriv = file->driver_priv;
> +	struct i915_drm_client *client = fpriv->client;
> +	struct drm_i915_private *i915 = fpriv->i915;
> +	struct drm_i915_gem_object *obj;
> +	struct intel_memory_region *mr;
> +	struct list_head *pos;
> +	unsigned int id;
> +
> +	/* Public objects. */
> +	spin_lock(&file->table_lock);
> +	idr_for_each_entry (&file->object_idr, obj, id)
> +		obj_meminfo(obj, stats);
> +	spin_unlock(&file->table_lock);
> +
> +	/* Internal objects. */
> +	rcu_read_lock();
> +	list_for_each_rcu(pos, &client->objects_list) {
> +		obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
> +							 client_link));
> +		if (!obj)
> +			continue;
> +		obj_meminfo(obj, stats);
> +		i915_gem_object_put(obj);
> +	}
> +	rcu_read_unlock();
> +
> +	for_each_memory_region(mr, i915, id)
> +		drm_print_memory_stats(p,
> +				       &stats[id],
> +				       DRM_GEM_OBJECT_RESIDENT |
> +				       DRM_GEM_OBJECT_PURGEABLE,
> +				       mr->name);
> +}
> +
>  static const char * const uabi_class_names[] = {
>  	[I915_ENGINE_CLASS_RENDER] = "render",
>  	[I915_ENGINE_CLASS_COPY] = "copy",
> @@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p,
> struct drm_file *file)
>  	 *
> ****************************************************************
> **
>  	 */
> 
> +	show_meminfo(p, file);
> +
>  	if (GRAPHICS_VER(i915) < 8)
>  		return;
> 
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
  2023-08-24 11:35     ` Upadhyay, Tejas
  (?)
@ 2023-09-20 14:22     ` Tvrtko Ursulin
  2023-09-20 14:39         ` Rob Clark
  -1 siblings, 1 reply; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-09-20 14:22 UTC (permalink / raw)
  To: Upadhyay, Tejas, Intel-gfx, dri-devel, Rob Clark


On 24/08/2023 12:35, Upadhyay, Tejas wrote:
>> -----Original Message-----
>> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Tvrtko
>> Ursulin
>> Sent: Friday, July 7, 2023 6:32 PM
>> To: Intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
>> Subject: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats
>> printing
>>
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Use the newly added drm_print_memory_stats helper to show memory
>> utilisation of our objects in drm/driver specific fdinfo output.
>>
>> To collect the stats we walk the per memory regions object lists and
>> accumulate object size into the respective drm_memory_stats categories.
>>
>> Objects with multiple possible placements are reported in multiple regions for
>> total and shared sizes, while other categories are counted only for the
>> currently active region.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
>> Cc: Rob Clark <robdclark@gmail.com>
>> ---
>>   drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
>>   1 file changed, 85 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
>> b/drivers/gpu/drm/i915/i915_drm_client.c
>> index ffccb6239789..5c77d6987d90 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>> @@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)  }
>>
>>   #ifdef CONFIG_PROC_FS
>> +static void
>> +obj_meminfo(struct drm_i915_gem_object *obj,
>> +	    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN]) {
>> +	struct intel_memory_region *mr;
>> +	u64 sz = obj->base.size;
>> +	enum intel_region_id id;
>> +	unsigned int i;
>> +
>> +	/* Attribute size and shared to all possible memory regions. */
>> +	for (i = 0; i < obj->mm.n_placements; i++) {
>> +		mr = obj->mm.placements[i];
>> +		id = mr->id;
>> +
>> +		if (obj->base.handle_count > 1)
>> +			stats[id].shared += sz;
>> +		else
>> +			stats[id].private += sz;
>> +	}
>> +
>> +	/* Attribute other categories to only the current region. */
>> +	mr = obj->mm.region;
>> +	if (mr)
>> +		id = mr->id;
>> +	else
>> +		id = INTEL_REGION_SMEM;
>> +
>> +	if (!obj->mm.n_placements) {
>> +		if (obj->base.handle_count > 1)
>> +			stats[id].shared += sz;
>> +		else
>> +			stats[id].private += sz;
>> +	}
>> +
>> +	if (i915_gem_object_has_pages(obj)) {
>> +		stats[id].resident += sz;
>> +
>> +		if (!dma_resv_test_signaled(obj->base.resv,
>> +					    dma_resv_usage_rw(true)))
> 
> Should DMA_RESV_USAGE_BOOKKEEP not also be considered active (why only "rw")? Some app is syncing with syncjobs and has added a dma_fence with DMA_RESV_USAGE_BOOKKEEP during execbuf while that BO is still busy waiting on work!

Hmm, do we have a path which adds DMA_RESV_USAGE_BOOKKEEP usage in execbuf?

Rob, any comments here? Given how I basically lifted the logic from 
686b21b5f6ca ("drm: Add fdinfo memory stats"), does it sound plausible 
to upgrade the test against all fences?

Regards,

Tvrtko

>> +			stats[id].active += sz;
>> +		else if (i915_gem_object_is_shrinkable(obj) &&
>> +			 obj->mm.madv == I915_MADV_DONTNEED)
>> +			stats[id].purgeable += sz;
>> +	}
>> +}
>> +
>> +static void show_meminfo(struct drm_printer *p, struct drm_file *file)
>> +{
>> +	struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
>> +	struct drm_i915_file_private *fpriv = file->driver_priv;
>> +	struct i915_drm_client *client = fpriv->client;
>> +	struct drm_i915_private *i915 = fpriv->i915;
>> +	struct drm_i915_gem_object *obj;
>> +	struct intel_memory_region *mr;
>> +	struct list_head *pos;
>> +	unsigned int id;
>> +
>> +	/* Public objects. */
>> +	spin_lock(&file->table_lock);
>> +	idr_for_each_entry (&file->object_idr, obj, id)
>> +		obj_meminfo(obj, stats);
>> +	spin_unlock(&file->table_lock);
>> +
>> +	/* Internal objects. */
>> +	rcu_read_lock();
>> +	list_for_each_rcu(pos, &client->objects_list) {
>> +		obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
>> +							 client_link));
>> +		if (!obj)
>> +			continue;
>> +		obj_meminfo(obj, stats);
>> +		i915_gem_object_put(obj);
>> +	}
>> +	rcu_read_unlock();
>> +
>> +	for_each_memory_region(mr, i915, id)
>> +		drm_print_memory_stats(p,
>> +				       &stats[id],
>> +				       DRM_GEM_OBJECT_RESIDENT |
>> +				       DRM_GEM_OBJECT_PURGEABLE,
>> +				       mr->name);
>> +}
>> +
>>   static const char * const uabi_class_names[] = {
>>   	[I915_ENGINE_CLASS_RENDER] = "render",
>>   	[I915_ENGINE_CLASS_COPY] = "copy",
>> @@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p,
>> struct drm_file *file)
>>   	 *
>> ****************************************************************
>> **
>>   	 */
>>
>> +	show_meminfo(p, file);
>> +
>>   	if (GRAPHICS_VER(i915) < 8)
>>   		return;
>>
>> --
>> 2.39.2
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
  2023-09-20 14:22     ` Tvrtko Ursulin
@ 2023-09-20 14:39         ` Rob Clark
  0 siblings, 0 replies; 33+ messages in thread
From: Rob Clark @ 2023-09-20 14:39 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel-gfx, Upadhyay, Tejas, dri-devel

On Wed, Sep 20, 2023 at 7:35 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 24/08/2023 12:35, Upadhyay, Tejas wrote:
> >> -----Original Message-----
> >> From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Tvrtko
> >> Ursulin
> >> Sent: Friday, July 7, 2023 6:32 PM
> >> To: Intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
> >> Subject: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats
> >> printing
> >>
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> Use the newly added drm_print_memory_stats helper to show memory
> >> utilisation of our objects in drm/driver specific fdinfo output.
> >>
> >> To collect the stats we walk the per memory regions object lists and
> >> accumulate object size into the respective drm_memory_stats categories.
> >>
> >> Objects with multiple possible placements are reported in multiple regions for
> >> total and shared sizes, while other categories are counted only for the
> >> currently active region.
> >>
> >> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >> Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
> >> Cc: Rob Clark <robdclark@gmail.com>
> >> ---
> >>   drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
> >>   1 file changed, 85 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
> >> b/drivers/gpu/drm/i915/i915_drm_client.c
> >> index ffccb6239789..5c77d6987d90 100644
> >> --- a/drivers/gpu/drm/i915/i915_drm_client.c
> >> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> >> @@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)  }
> >>
> >>   #ifdef CONFIG_PROC_FS
> >> +static void
> >> +obj_meminfo(struct drm_i915_gem_object *obj,
> >> +        struct drm_memory_stats stats[INTEL_REGION_UNKNOWN]) {
> >> +    struct intel_memory_region *mr;
> >> +    u64 sz = obj->base.size;
> >> +    enum intel_region_id id;
> >> +    unsigned int i;
> >> +
> >> +    /* Attribute size and shared to all possible memory regions. */
> >> +    for (i = 0; i < obj->mm.n_placements; i++) {
> >> +            mr = obj->mm.placements[i];
> >> +            id = mr->id;
> >> +
> >> +            if (obj->base.handle_count > 1)
> >> +                    stats[id].shared += sz;
> >> +            else
> >> +                    stats[id].private += sz;
> >> +    }
> >> +
> >> +    /* Attribute other categories to only the current region. */
> >> +    mr = obj->mm.region;
> >> +    if (mr)
> >> +            id = mr->id;
> >> +    else
> >> +            id = INTEL_REGION_SMEM;
> >> +
> >> +    if (!obj->mm.n_placements) {
> >> +            if (obj->base.handle_count > 1)
> >> +                    stats[id].shared += sz;
> >> +            else
> >> +                    stats[id].private += sz;
> >> +    }
> >> +
> >> +    if (i915_gem_object_has_pages(obj)) {
> >> +            stats[id].resident += sz;
> >> +
> >> +            if (!dma_resv_test_signaled(obj->base.resv,
> >> +                                        dma_resv_usage_rw(true)))
> >
> > Should DMA_RESV_USAGE_BOOKKEEP not also be considered active (why only "rw")? Some app is syncing with syncjobs and has added a dma_fence with DMA_RESV_USAGE_BOOKKEEP during execbuf while that BO is still busy waiting on work!
>
> Hmm do we have a path which adds DMA_RESV_USAGE_BOOKKEEP usage in execbuf?
>
> Rob, any comments here? Given how I basically lifted the logic from
> 686b21b5f6ca ("drm: Add fdinfo memory stats"), does it sound plausible
> to upgrade the test against all fences?

Yes, I think so. I don't have any use for BOOKKEEP, so I hadn't considered it.

BR,
-R


>
> Regards,
>
> Tvrtko
>
> >> +                    stats[id].active += sz;
> >> +            else if (i915_gem_object_is_shrinkable(obj) &&
> >> +                     obj->mm.madv == I915_MADV_DONTNEED)
> >> +                    stats[id].purgeable += sz;
> >> +    }
> >> +}
> >> +
> >> +static void show_meminfo(struct drm_printer *p, struct drm_file *file)
> >> +{
> >> +    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
> >> +    struct drm_i915_file_private *fpriv = file->driver_priv;
> >> +    struct i915_drm_client *client = fpriv->client;
> >> +    struct drm_i915_private *i915 = fpriv->i915;
> >> +    struct drm_i915_gem_object *obj;
> >> +    struct intel_memory_region *mr;
> >> +    struct list_head *pos;
> >> +    unsigned int id;
> >> +
> >> +    /* Public objects. */
> >> +    spin_lock(&file->table_lock);
> >> +    idr_for_each_entry (&file->object_idr, obj, id)
> >> +            obj_meminfo(obj, stats);
> >> +    spin_unlock(&file->table_lock);
> >> +
> >> +    /* Internal objects. */
> >> +    rcu_read_lock();
> >> +    list_for_each_rcu(pos, &client->objects_list) {
> >> +            obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
> >> +                                                     client_link));
> >> +            if (!obj)
> >> +                    continue;
> >> +            obj_meminfo(obj, stats);
> >> +            i915_gem_object_put(obj);
> >> +    }
> >> +    rcu_read_unlock();
> >> +
> >> +    for_each_memory_region(mr, i915, id)
> >> +            drm_print_memory_stats(p,
> >> +                                   &stats[id],
> >> +                                   DRM_GEM_OBJECT_RESIDENT |
> >> +                                   DRM_GEM_OBJECT_PURGEABLE,
> >> +                                   mr->name);
> >> +}
> >> +
> >>   static const char * const uabi_class_names[] = {
> >>      [I915_ENGINE_CLASS_RENDER] = "render",
> >>      [I915_ENGINE_CLASS_COPY] = "copy",
> >> @@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p,
> >> struct drm_file *file)
> >>       *
> >> ****************************************************************
> >> **
> >>       */
> >>
> >> +    show_meminfo(p, file);
> >> +
> >>      if (GRAPHICS_VER(i915) < 8)
> >>              return;
> >>
> >> --
> >> 2.39.2
> >

^ permalink raw reply	[flat|nested] 33+ messages in thread
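
Concretely, "upgrading the test against all fences" as discussed above would
mean testing obj_meminfo() against the widest usage class instead of
dma_resv_usage_rw(true). Assuming that is the direction taken (the thread only
agrees on it in principle), the hunk would then read roughly:

        if (i915_gem_object_has_pages(obj)) {
                stats[id].resident += sz;

                /* any unsignalled fence, bookkeeping included, counts as active */
                if (!dma_resv_test_signaled(obj->base.resv,
                                            DMA_RESV_USAGE_BOOKKEEP))
                        stats[id].active += sz;
                else if (i915_gem_object_is_shrinkable(obj) &&
                         obj->mm.madv == I915_MADV_DONTNEED)
                        stats[id].purgeable += sz;
        }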

* [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-09-21 11:48 [PATCH v7 0/5] fdinfo memory stats Tvrtko Ursulin
@ 2023-09-21 11:48 ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-09-21 11:48 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Aravind Iddamsetty, Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In order to show per-client memory usage, let's add some infrastructure
which enables tracking buffer objects owned by clients.

We add a per-client list protected by a new per-client lock and, to support
delayed destruction (post client exit), we make tracked objects hold
references to the owning client.

Also, object memory region teardown is moved to the existing RCU free
callback to allow safe dereference from the fdinfo RCU read section.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
 drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
 4 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index c26d87555825..25eeeb863209 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -106,6 +106,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 
 	INIT_LIST_HEAD(&obj->mm.link);
 
+#ifdef CONFIG_PROC_FS
+	INIT_LIST_HEAD(&obj->client_link);
+#endif
+
 	INIT_LIST_HEAD(&obj->lut_list);
 	spin_lock_init(&obj->lut_lock);
 
@@ -293,6 +297,10 @@ void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
+	/* We need to keep this alive for RCU read access from fdinfo. */
+	if (obj->mm.n_placements > 1)
+		kfree(obj->mm.placements);
+
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
@@ -389,9 +397,6 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
 	if (obj->ops->release)
 		obj->ops->release(obj);
 
-	if (obj->mm.n_placements > 1)
-		kfree(obj->mm.placements);
-
 	if (obj->shares_resv_from)
 		i915_vm_resv_put(obj->shares_resv_from);
 
@@ -442,6 +447,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
 
 	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
 
+	i915_drm_client_remove_object(obj);
+
 	/*
 	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 2292404007c8..0c5cdab278b6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -302,6 +302,18 @@ struct drm_i915_gem_object {
 	 */
 	struct i915_address_space *shares_resv_from;
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @client: @i915_drm_client which created the object
+	 */
+	struct i915_drm_client *client;
+
+	/**
+	 * @client_link: Link into @i915_drm_client.objects_list
+	 */
+	struct list_head client_link;
+#endif
+
 	union {
 		struct rcu_head rcu;
 		struct llist_node freed;
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2a44b3876cb5..2e5e69edc0f9 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
 	kref_init(&client->kref);
 	spin_lock_init(&client->ctx_lock);
 	INIT_LIST_HEAD(&client->ctx_list);
+#ifdef CONFIG_PROC_FS
+	spin_lock_init(&client->objects_lock);
+	INIT_LIST_HEAD(&client->objects_list);
+#endif
 
 	return client;
 }
@@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
 	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
 		show_client_class(p, i915, file_priv->client, i);
 }
+
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj)
+{
+	unsigned long flags;
+
+	GEM_WARN_ON(obj->client);
+	GEM_WARN_ON(!list_empty(&obj->client_link));
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	obj->client = i915_drm_client_get(client);
+	list_add_tail_rcu(&obj->client_link, &client->objects_list);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+}
+
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	struct i915_drm_client *client = fetch_and_zero(&obj->client);
+	unsigned long flags;
+
+	/* Object may not be associated with a client. */
+	if (!client)
+		return false;
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	list_del_rcu(&obj->client_link);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+
+	i915_drm_client_put(client);
+
+	return true;
+}
 #endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 67816c912bca..5f58fdf7dcb8 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -12,6 +12,9 @@
 
 #include <uapi/drm/i915_drm.h>
 
+#include "i915_file_private.h"
+#include "gem/i915_gem_object_types.h"
+
 #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
 
 struct drm_file;
@@ -25,6 +28,20 @@ struct i915_drm_client {
 	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
 	struct list_head ctx_list; /* List of contexts belonging to client. */
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @objects_lock: lock protecting @objects_list
+	 */
+	spinlock_t objects_lock;
+
+	/**
+	 * @objects_list: list of objects created by this client
+	 *
+	 * Protected by @objects_lock.
+	 */
+	struct list_head objects_list;
+#endif
+
 	/**
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
@@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
 
 void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
 
+#ifdef CONFIG_PROC_FS
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj);
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+#else
+static inline void i915_drm_client_add_object(struct i915_drm_client *client,
+					      struct drm_i915_gem_object *obj)
+{
+}
+
+static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	return false;
+}
+#endif
+
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.39.2
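
The actual call sites for i915_drm_client_add_object() and
i915_drm_client_remove_object() arrive with the later patches in the series
(the free-path hook is already visible above). Purely as an illustration of
the intended usage, a hypothetical caller on the object creation path could
look like the sketch below; the function name and exact hook point are
assumptions, only fpriv->client and the two helpers come from the series:

/*
 * Hypothetical sketch: associate a freshly created GEM object with the
 * client of the DRM file it was created through, so fdinfo can account it.
 */
static void example_track_new_object(struct drm_i915_file_private *fpriv,
				     struct drm_i915_gem_object *obj)
{
	i915_drm_client_add_object(fpriv->client, obj);
}

The matching i915_drm_client_remove_object(obj) call is already wired into
i915_gem_free_object() by the hunk above, so tracked objects drop their
client reference when they are freed.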


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-07-27 10:13 [PATCH v6 0/5] fdinfo memory stats Tvrtko Ursulin
@ 2023-07-27 10:13 ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-07-27 10:13 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In order to show per-client memory usage, let's add some infrastructure
which enables tracking buffer objects owned by clients.

We add a per-client list protected by a new per-client lock and, to support
delayed destruction (post client exit), we make tracked objects hold
references to the owning client.

Also, object memory region teardown is moved to the existing RCU free
callback to allow safe dereference from the fdinfo RCU read section.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 13 +++++--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
 drivers/gpu/drm/i915/i915_drm_client.c        | 36 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 32 +++++++++++++++++
 4 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 97ac6fb37958..3dc4fbb67d2b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -105,6 +105,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 
 	INIT_LIST_HEAD(&obj->mm.link);
 
+#ifdef CONFIG_PROC_FS
+	INIT_LIST_HEAD(&obj->client_link);
+#endif
+
 	INIT_LIST_HEAD(&obj->lut_list);
 	spin_lock_init(&obj->lut_lock);
 
@@ -292,6 +296,10 @@ void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
+	/* We need to keep this alive for RCU read access from fdinfo. */
+	if (obj->mm.n_placements > 1)
+		kfree(obj->mm.placements);
+
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
@@ -388,9 +396,6 @@ void __i915_gem_free_object(struct drm_i915_gem_object *obj)
 	if (obj->ops->release)
 		obj->ops->release(obj);
 
-	if (obj->mm.n_placements > 1)
-		kfree(obj->mm.placements);
-
 	if (obj->shares_resv_from)
 		i915_vm_resv_put(obj->shares_resv_from);
 
@@ -441,6 +446,8 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
 
 	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
 
+	i915_drm_client_remove_object(obj);
+
 	/*
 	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index e72c57716bee..8de2b91b3edf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -300,6 +300,18 @@ struct drm_i915_gem_object {
 	 */
 	struct i915_address_space *shares_resv_from;
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @client: @i915_drm_client which created the object
+	 */
+	struct i915_drm_client *client;
+
+	/**
+	 * @client_link: Link into @i915_drm_client.objects_list
+	 */
+	struct list_head client_link;
+#endif
+
 	union {
 		struct rcu_head rcu;
 		struct llist_node freed;
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2a44b3876cb5..2e5e69edc0f9 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -28,6 +28,10 @@ struct i915_drm_client *i915_drm_client_alloc(void)
 	kref_init(&client->kref);
 	spin_lock_init(&client->ctx_lock);
 	INIT_LIST_HEAD(&client->ctx_list);
+#ifdef CONFIG_PROC_FS
+	spin_lock_init(&client->objects_lock);
+	INIT_LIST_HEAD(&client->objects_list);
+#endif
 
 	return client;
 }
@@ -108,4 +112,36 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
 	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
 		show_client_class(p, i915, file_priv->client, i);
 }
+
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj)
+{
+	unsigned long flags;
+
+	GEM_WARN_ON(obj->client);
+	GEM_WARN_ON(!list_empty(&obj->client_link));
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	obj->client = i915_drm_client_get(client);
+	list_add_tail_rcu(&obj->client_link, &client->objects_list);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+}
+
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	struct i915_drm_client *client = fetch_and_zero(&obj->client);
+	unsigned long flags;
+
+	/* Object may not be associated with a client. */
+	if (!client)
+		return false;
+
+	spin_lock_irqsave(&client->objects_lock, flags);
+	list_del_rcu(&obj->client_link);
+	spin_unlock_irqrestore(&client->objects_lock, flags);
+
+	i915_drm_client_put(client);
+
+	return true;
+}
 #endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 67816c912bca..5f58fdf7dcb8 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -12,6 +12,9 @@
 
 #include <uapi/drm/i915_drm.h>
 
+#include "i915_file_private.h"
+#include "gem/i915_gem_object_types.h"
+
 #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
 
 struct drm_file;
@@ -25,6 +28,20 @@ struct i915_drm_client {
 	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
 	struct list_head ctx_list; /* List of contexts belonging to client. */
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @objects_lock: lock protecting @objects_list
+	 */
+	spinlock_t objects_lock;
+
+	/**
+	 * @objects_list: list of objects created by this client
+	 *
+	 * Protected by @objects_lock.
+	 */
+	struct list_head objects_list;
+#endif
+
 	/**
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
@@ -49,4 +66,19 @@ struct i915_drm_client *i915_drm_client_alloc(void);
 
 void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
 
+#ifdef CONFIG_PROC_FS
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj);
+bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+#else
+static inline void i915_drm_client_add_object(struct i915_drm_client *client,
+					      struct drm_i915_gem_object *obj)
+{
+}
+
+static inline bool i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	return false;
+}
+#endif
+
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client
  2023-06-12 10:46 [PATCH v4 0/5] fdinfo memory stats Tvrtko Ursulin
@ 2023-06-12 10:46 ` Tvrtko Ursulin
  0 siblings, 0 replies; 33+ messages in thread
From: Tvrtko Ursulin @ 2023-06-12 10:46 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In order to show per-client memory usage, let's add some infrastructure
which enables tracking buffer objects owned by clients.

We add a per-client list protected by a new per-client lock and, to support
delayed destruction (post client exit), we make tracked objects hold
references to the owning client.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  5 +++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 12 +++++++
 drivers/gpu/drm/i915/i915_drm_client.c        | 36 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h        | 34 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem.c               |  2 +-
 5 files changed, 86 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 97ac6fb37958..d6961f6818f1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -105,6 +105,10 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 
 	INIT_LIST_HEAD(&obj->mm.link);
 
+#ifdef CONFIG_PROC_FS
+	INIT_LIST_HEAD(&obj->client_link);
+#endif
+
 	INIT_LIST_HEAD(&obj->lut_list);
 	spin_lock_init(&obj->lut_lock);
 
@@ -410,6 +414,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		}
 
 		__i915_gem_object_pages_fini(obj);
+		i915_drm_client_remove_object(obj);
 		__i915_gem_free_object(obj);
 
 		/* But keep the pointer alive for RCU-protected lookups */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index e72c57716bee..8de2b91b3edf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -300,6 +300,18 @@ struct drm_i915_gem_object {
 	 */
 	struct i915_address_space *shares_resv_from;
 
+#ifdef CONFIG_PROC_FS
+	/**
+	 * @client: @i915_drm_client which created the object
+	 */
+	struct i915_drm_client *client;
+
+	/**
+	 * @client_link: Link into @i915_drm_client.objects_list
+	 */
+	struct list_head client_link;
+#endif
+
 	union {
 		struct rcu_head rcu;
 		struct llist_node freed;
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2a44b3876cb5..3c8d6a46a801 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -17,7 +17,8 @@
 #include "i915_gem.h"
 #include "i915_utils.h"
 
-struct i915_drm_client *i915_drm_client_alloc(void)
+struct i915_drm_client *
+i915_drm_client_alloc(struct drm_i915_file_private *fpriv)
 {
 	struct i915_drm_client *client;
 
@@ -28,6 +29,12 @@ struct i915_drm_client *i915_drm_client_alloc(void)
 	kref_init(&client->kref);
 	spin_lock_init(&client->ctx_lock);
 	INIT_LIST_HEAD(&client->ctx_list);
+#ifdef CONFIG_PROC_FS
+	spin_lock_init(&client->objects_lock);
+	INIT_LIST_HEAD(&client->objects_list);
+
+	client->fpriv = fpriv;
+#endif
 
 	return client;
 }
@@ -108,4 +115,31 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
 	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
 		show_client_class(p, i915, file_priv->client, i);
 }
+
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj)
+{
+	GEM_WARN_ON(obj->client);
+	GEM_WARN_ON(!list_empty(&obj->client_link));
+
+	spin_lock(&client->objects_lock);
+	obj->client = i915_drm_client_get(client);
+	list_add_tail(&obj->client_link, &client->objects_list);
+	spin_unlock(&client->objects_lock);
+}
+
+void i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+	struct i915_drm_client *client = fetch_and_zero(&obj->client);
+
+	/* Object may not be associated with a client. */
+	if (!client || list_empty(&obj->client_link))
+		return;
+
+	spin_lock(&client->objects_lock);
+	list_del(&obj->client_link);
+	spin_unlock(&client->objects_lock);
+
+	i915_drm_client_put(client);
+}
 #endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 4c18b99e10a4..5fc897ab1a6b 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -12,6 +12,9 @@
 
 #include <uapi/drm/i915_drm.h>
 
+#include "i915_file_private.h"
+#include "gem/i915_gem_object_types.h"
+
 #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
 
 struct drm_file;
@@ -25,6 +28,22 @@ struct i915_drm_client {
 	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
 	struct list_head ctx_list; /* List of contexts belonging to client. */
 
+#ifdef CONFIG_PROC_FS
+	struct drm_i915_file_private *fpriv;
+
+	/**
+	 * @objects_lock: lock protecting @objects_list
+	 */
+	spinlock_t objects_lock;
+
+	/**
+	 * @objects_list: list of objects created by this client
+	 *
+	 * Protected by @objects_lock.
+	 */
+	struct list_head objects_list;
+#endif
+
 	/**
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
@@ -45,10 +64,23 @@ static inline void i915_drm_client_put(struct i915_drm_client *client)
 	kref_put(&client->kref, __i915_drm_client_free);
 }
 
-struct i915_drm_client *i915_drm_client_alloc(void);
+struct i915_drm_client *i915_drm_client_alloc(struct drm_i915_file_private *fpriv);
 
 #ifdef CONFIG_PROC_FS
 void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
+
+void i915_drm_client_add_object(struct i915_drm_client *client,
+				struct drm_i915_gem_object *obj);
+void i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+#else
+static inline void i915_drm_client_add_object(struct i915_drm_client *client,
+					      struct drm_i915_gem_object *obj)
+{
+}
+
+static inline void i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
+{
+}
 #endif
 
 #endif /* !__I915_DRM_CLIENT_H__ */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1f65bb33dd21..7ae42f746cc2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1325,7 +1325,7 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 	if (!file_priv)
 		goto err_alloc;
 
-	client = i915_drm_client_alloc();
+	client = i915_drm_client_alloc(file_priv);
 	if (!client)
 		goto err_client;
 
-- 
2.39.2
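
One difference worth calling out when comparing this v4 with the later
revisions above: here the list is manipulated with plain list_add_tail() and
list_del(), whereas v6/v7 switch to list_add_tail_rcu()/list_del_rcu()
because the fdinfo reader walks objects_list under rcu_read_lock() only. A
condensed sketch of that reader pattern, based on the show_meminfo() hunk
quoted earlier in the thread (the per-object accounting is elided):

	struct drm_i915_gem_object *obj;
	struct list_head *pos;

	rcu_read_lock();
	list_for_each_rcu(pos, &client->objects_list) {
		obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
							 client_link));
		if (!obj)	/* object is already on its way out, skip it */
			continue;

		/* ... fold obj into the per-region memory stats here ... */

		i915_gem_object_put(obj);
	}
	rcu_read_unlock();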


^ permalink raw reply related	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2023-09-21 11:49 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-07 13:02 [PATCH v5 0/5] fdinfo memory stats Tvrtko Ursulin
2023-07-07 13:02 ` [Intel-gfx] " Tvrtko Ursulin
2023-07-07 13:02 ` [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
2023-07-10 10:44   ` Iddamsetty, Aravind
2023-07-10 13:20     ` Tvrtko Ursulin
2023-07-11  7:48       ` Iddamsetty, Aravind
2023-07-11  9:39         ` Tvrtko Ursulin
2023-07-07 13:02 ` [PATCH 2/5] drm/i915: Record which client owns a VM Tvrtko Ursulin
2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
2023-07-11  9:08   ` Iddamsetty, Aravind
2023-07-11  9:08     ` [Intel-gfx] " Iddamsetty, Aravind
2023-07-07 13:02 ` [PATCH 3/5] drm/i915: Track page table backing store usage Tvrtko Ursulin
2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
2023-07-11  9:08   ` Iddamsetty, Aravind
2023-07-07 13:02 ` [PATCH 4/5] drm/i915: Account ring buffer and context state storage Tvrtko Ursulin
2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
2023-07-11  9:29   ` Iddamsetty, Aravind
2023-07-11  9:44     ` Tvrtko Ursulin
2023-07-07 13:02 ` [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-07-07 13:02   ` [Intel-gfx] " Tvrtko Ursulin
2023-08-24 11:35   ` Upadhyay, Tejas
2023-08-24 11:35     ` Upadhyay, Tejas
2023-09-20 14:22     ` Tvrtko Ursulin
2023-09-20 14:39       ` Rob Clark
2023-09-20 14:39         ` Rob Clark
2023-07-07 16:00 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for fdinfo memory stats (rev4) Patchwork
2023-07-07 16:00 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2023-07-07 16:10 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-07-07 20:57 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2023-09-21 11:48 [PATCH v7 0/5] fdinfo memory stats Tvrtko Ursulin
2023-09-21 11:48 ` [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
2023-07-27 10:13 [PATCH v6 0/5] fdinfo memory stats Tvrtko Ursulin
2023-07-27 10:13 ` [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
2023-06-12 10:46 [PATCH v4 0/5] fdinfo memory stats Tvrtko Ursulin
2023-06-12 10:46 ` [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
