* [Intel-gfx] [PATCH 0/9] Per client engine busyness
@ 2020-04-15 10:11 Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
                   ` (13 more replies)
  0 siblings, 14 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * One patch got merged in the meantime, otherwise just a rebase.

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the want list: smarter
scheduler decisions, getrusage(2)-like per-GEM-context functionality wanted by
some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
For example, a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are named after the engine class ABI values
and contain the accumulated nanoseconds each client has spent on engines of
the respective class.
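
As an illustration only (not part of the series), below is a minimal userspace
sketch of how a monitoring tool could turn these counters into a utilisation
percentage by sampling a busy file twice. The client id and engine class in
the path, and the read_u64() helper, are made up for the example:

#include <inttypes.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Read a single u64 value from a sysfs file, returning 0 on any error. */
static uint64_t read_u64(const char *path)
{
	uint64_t val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		fscanf(f, "%" SCNu64, &val);
		fclose(f);
	}

	return val;
}

int main(void)
{
	/* Example path: client 7, engine class 0 (render). */
	const char *path = "/sys/class/drm/card0/clients/7/busy/0";
	struct timespec ts0, ts1;
	uint64_t busy0, busy1;
	double wall_ns;

	clock_gettime(CLOCK_MONOTONIC, &ts0);
	busy0 = read_u64(path);

	sleep(1);

	clock_gettime(CLOCK_MONOTONIC, &ts1);
	busy1 = read_u64(path);

	/* busy/<class> is monotonic, so busyness = delta(busy) / delta(wall). */
	wall_ns = (ts1.tv_sec - ts0.tv_sec) * 1e9 +
		  (ts1.tv_nsec - ts0.tv_nsec);
	printf("Render/3D: %.2f%% busy\n", 100.0 * (busy1 - busy0) / wall_ns);

	return 0;
}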

It is still an RFC since it lacks dedicated test cases to ensure things really
work as advertised.

Tvrtko Ursulin (9):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  60 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  18 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  24 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  58 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  29 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 431 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  93 ++++
 drivers/gpu/drm/i915/i915_drv.c               |   3 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  25 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  25 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 740 insertions(+), 79 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-08-26  1:11   ` Lucas De Marchi
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 2/9] drm/i915: Update client name on context create Tvrtko Ursulin
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Chris Wilson

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose a list of clients with open file handles in sysfs.

This will be a basis for a top-like utility showing per-client and per-
engine GPU load.

Currently we only expose each client's pid and name under opaque numbered
directories in /sys/class/drm/card0/clients/.

For instance:

/sys/class/drm/card0/clients/3/name: Xorg
/sys/class/drm/card0/clients/3/pid: 5664
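
Purely as an illustration (not part of this patch), a userspace consumer could
enumerate the clients directory and print each client along these lines:

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	const char *base = "/sys/class/drm/card0/clients";
	DIR *dir = opendir(base);
	struct dirent *de;

	if (!dir)
		return 1;

	while ((de = readdir(dir))) {
		char path[512], name[64] = "", pid[16] = "";
		FILE *f;

		/* Skip '.' and '..'; real entries are the opaque client ids. */
		if (de->d_name[0] == '.')
			continue;

		snprintf(path, sizeof(path), "%s/%s/name", base, de->d_name);
		if ((f = fopen(path, "r"))) {
			fscanf(f, "%63s", name);
			fclose(f);
		}

		snprintf(path, sizeof(path), "%s/%s/pid", base, de->d_name);
		if ((f = fopen(path, "r"))) {
			fscanf(f, "%15s", pid);
			fclose(f);
		}

		printf("client %s: %s (pid %s)\n", de->d_name, name, pid);
	}

	closedir(dir);
	return 0;
}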

v2:
 Chris Wilson:
 * Enclose new members into dedicated structs.
 * Protect against failed sysfs registration.

v3:
 * sysfs_attr_init.

v4:
 * Fix for internal clients.

v5:
 * Use cyclic ida for client id. (Chris)
 * Do not leak pid reference. (Chris)
 * Tidy code with some locals.

v6:
 * Use xa_alloc_cyclic to simplify locking. (Chris)
 * No need to unregister individual sysfs files. (Chris)
 * Rebase on top of fpriv kref.
 * Track client closed status and reflect in sysfs.

v7:
 * Make drm_client more standalone concept.

v8:
 * Simplify sysfs show. (Chris)
 * Always track name and pid.

v9:
 * Fix cyclic id assignment.

v10:
 * No need for a mutex around xa_alloc_cyclic.
 * Refactor sysfs into own function.
 * Unregister sysfs before freeing pid and name.
 * Move clients setup into own function.

v11:
 * Call clients init directly from driver init. (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Makefile          |   3 +-
 drivers/gpu/drm/i915/i915_drm_client.c | 179 +++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h |  64 +++++++++
 drivers/gpu/drm/i915/i915_drv.c        |   3 +
 drivers/gpu/drm/i915/i915_drv.h        |   5 +
 drivers/gpu/drm/i915/i915_gem.c        |  25 +++-
 drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
 7 files changed, 283 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 44c506b7e117..b30f3d51c66a 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -33,7 +33,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
 # Please keep these build lists sorted!
 
 # core driver code
-i915-y += i915_drv.o \
+i915-y += i915_drm_client.o \
+	  i915_drv.o \
 	  i915_irq.o \
 	  i915_getparam.o \
 	  i915_params.o \
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
new file mode 100644
index 000000000000..2067fbcdb795
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "i915_drm_client.h"
+#include "i915_gem.h"
+#include "i915_utils.h"
+
+void i915_drm_clients_init(struct i915_drm_clients *clients)
+{
+	clients->next_id = 0;
+	xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC);
+}
+
+static ssize_t
+show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_drm_client *client =
+		container_of(attr, typeof(*client), attr.name);
+
+	return snprintf(buf, PAGE_SIZE,
+			READ_ONCE(client->closed) ? "<%s>" : "%s",
+			client->name);
+}
+
+static ssize_t
+show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_drm_client *client =
+		container_of(attr, typeof(*client), attr.pid);
+
+	return snprintf(buf, PAGE_SIZE,
+			READ_ONCE(client->closed) ? "<%u>" : "%u",
+			pid_nr(client->pid));
+}
+
+static int
+__client_register_sysfs(struct i915_drm_client *client)
+{
+	const struct {
+		const char *name;
+		struct device_attribute *attr;
+		ssize_t (*show)(struct device *dev,
+				struct device_attribute *attr,
+				char *buf);
+	} files[] = {
+		{ "name", &client->attr.name, show_client_name },
+		{ "pid", &client->attr.pid, show_client_pid },
+	};
+	unsigned int i;
+	char buf[16];
+	int ret;
+
+	ret = scnprintf(buf, sizeof(buf), "%u", client->id);
+	if (ret == sizeof(buf))
+		return -EINVAL;
+
+	client->root = kobject_create_and_add(buf, client->clients->root);
+	if (!client->root)
+		return -ENOMEM;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		struct device_attribute *attr = files[i].attr;
+
+		sysfs_attr_init(&attr->attr);
+
+		attr->attr.name = files[i].name;
+		attr->attr.mode = 0444;
+		attr->show = files[i].show;
+
+		ret = sysfs_create_file(client->root, (struct attribute *)attr);
+		if (ret)
+			break;
+	}
+
+	if (ret)
+		kobject_put(client->root);
+
+	return ret;
+}
+
+static void __client_unregister_sysfs(struct i915_drm_client *client)
+{
+	kobject_put(fetch_and_zero(&client->root));
+}
+
+static int
+__i915_drm_client_register(struct i915_drm_client *client,
+			   struct task_struct *task)
+{
+	struct i915_drm_clients *clients = client->clients;
+	char *name;
+	int ret;
+
+	name = kstrdup(task->comm, GFP_KERNEL);
+	if (!name)
+		return -ENOMEM;
+
+	client->pid = get_task_pid(task, PIDTYPE_PID);
+	client->name = name;
+
+	if (!clients->root)
+		return 0; /* intel_fbdev_init registers a client before sysfs */
+
+	ret = __client_register_sysfs(client);
+	if (ret)
+		goto err_sysfs;
+
+	return 0;
+
+err_sysfs:
+	put_pid(client->pid);
+	kfree(client->name);
+
+	return ret;
+}
+
+static void
+__i915_drm_client_unregister(struct i915_drm_client *client)
+{
+	__client_unregister_sysfs(client);
+
+	put_pid(fetch_and_zero(&client->pid));
+	kfree(fetch_and_zero(&client->name));
+}
+
+struct i915_drm_client *
+i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
+{
+	struct i915_drm_client *client;
+	int ret;
+
+	client = kzalloc(sizeof(*client), GFP_KERNEL);
+	if (!client)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&client->kref);
+	client->clients = clients;
+
+	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
+			      xa_limit_32b, &clients->next_id, GFP_KERNEL);
+	if (ret)
+		goto err_id;
+
+	ret = __i915_drm_client_register(client, task);
+	if (ret)
+		goto err_register;
+
+	return client;
+
+err_register:
+	xa_erase(&clients->xarray, client->id);
+err_id:
+	kfree(client);
+
+	return ERR_PTR(ret);
+}
+
+void __i915_drm_client_free(struct kref *kref)
+{
+	struct i915_drm_client *client =
+		container_of(kref, typeof(*client), kref);
+
+	__i915_drm_client_unregister(client);
+	xa_erase(&client->clients->xarray, client->id);
+	kfree_rcu(client, rcu);
+}
+
+void i915_drm_client_close(struct i915_drm_client *client)
+{
+	GEM_BUG_ON(READ_ONCE(client->closed));
+	WRITE_ONCE(client->closed, true);
+	i915_drm_client_put(client);
+}
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
new file mode 100644
index 000000000000..af6998c74d4c
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef __I915_DRM_CLIENT_H__
+#define __I915_DRM_CLIENT_H__
+
+#include <linux/device.h>
+#include <linux/kobject.h>
+#include <linux/kref.h>
+#include <linux/pid.h>
+#include <linux/rcupdate.h>
+#include <linux/sched.h>
+#include <linux/xarray.h>
+
+struct i915_drm_clients {
+	struct xarray xarray;
+	u32 next_id;
+
+	struct kobject *root;
+};
+
+struct i915_drm_client {
+	struct kref kref;
+
+	struct rcu_head rcu;
+
+	unsigned int id;
+	struct pid *pid;
+	char *name;
+	bool closed;
+
+	struct i915_drm_clients *clients;
+
+	struct kobject *root;
+	struct {
+		struct device_attribute pid;
+		struct device_attribute name;
+	} attr;
+};
+
+void i915_drm_clients_init(struct i915_drm_clients *clients);
+
+static inline struct i915_drm_client *
+i915_drm_client_get(struct i915_drm_client *client)
+{
+	kref_get(&client->kref);
+	return client;
+}
+
+void __i915_drm_client_free(struct kref *kref);
+
+static inline void i915_drm_client_put(struct i915_drm_client *client)
+{
+	kref_put(&client->kref, __i915_drm_client_free);
+}
+
+void i915_drm_client_close(struct i915_drm_client *client);
+
+struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
+					    struct task_struct *task);
+
+#endif /* !__I915_DRM_CLIENT_H__ */
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 641f5e03b661..dac84b17d23d 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -70,6 +70,7 @@
 #include "gt/intel_rc6.h"
 
 #include "i915_debugfs.h"
+#include "i915_drm_client.h"
 #include "i915_drv.h"
 #include "i915_ioc32.h"
 #include "i915_irq.h"
@@ -456,6 +457,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
 
 	i915_gem_init_early(dev_priv);
 
+	i915_drm_clients_init(&dev_priv->clients);
+
 	/* This must be called before any calls to HAS_PCH_* */
 	intel_detect_pch(dev_priv);
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e9ee4daa9320..f9f0c3ba6e4a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -91,6 +91,7 @@
 #include "intel_wakeref.h"
 #include "intel_wopcm.h"
 
+#include "i915_drm_client.h"
 #include "i915_gem.h"
 #include "i915_gem_gtt.h"
 #include "i915_gpu_error.h"
@@ -226,6 +227,8 @@ struct drm_i915_file_private {
 	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
 	atomic_t ban_score;
 	unsigned long hang_timestamp;
+
+	struct i915_drm_client *client;
 };
 
 /* Interface history:
@@ -1201,6 +1204,8 @@ struct drm_i915_private {
 
 	struct i915_pmu pmu;
 
+	struct i915_drm_clients clients;
+
 	struct i915_hdcp_comp_master *hdcp_master;
 	bool hdcp_comp_added;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0cbcb9f54e7d..5a0b5fae8b92 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1234,6 +1234,8 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
 	GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
 	GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
 	drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
+	drm_WARN_ON(&dev_priv->drm, !xa_empty(&dev_priv->clients.xarray));
+	xa_destroy(&dev_priv->clients.xarray);
 }
 
 int i915_gem_freeze(struct drm_i915_private *dev_priv)
@@ -1288,6 +1290,8 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct i915_request *request;
 
+	i915_drm_client_close(file_priv->client);
+
 	/* Clean up our request list when the client is going away, so that
 	 * later retire_requests won't dereference our soon-to-be-gone
 	 * file_priv.
@@ -1301,17 +1305,25 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv;
-	int ret;
+	struct i915_drm_client *client;
+	int ret = -ENOMEM;
 
 	DRM_DEBUG("\n");
 
 	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
 	if (!file_priv)
-		return -ENOMEM;
+		goto err_alloc;
+
+	client = i915_drm_client_add(&i915->clients, current);
+	if (IS_ERR(client)) {
+		ret = PTR_ERR(client);
+		goto err_client;
+	}
 
 	file->driver_priv = file_priv;
 	file_priv->dev_priv = i915;
 	file_priv->file = file;
+	file_priv->client = client;
 
 	spin_lock_init(&file_priv->mm.lock);
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
@@ -1321,8 +1333,15 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 
 	ret = i915_gem_context_open(i915, file);
 	if (ret)
-		kfree(file_priv);
+		goto err_context;
+
+	return 0;
 
+err_context:
+	i915_drm_client_close(client);
+err_client:
+	kfree(file_priv);
+err_alloc:
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 45d32ef42787..b7d4a6d2dd5c 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	struct device *kdev = dev_priv->drm.primary->kdev;
 	int ret;
 
+	dev_priv->clients.root =
+		kobject_create_and_add("clients", &kdev->kobj);
+	if (!dev_priv->clients.root)
+		DRM_ERROR("Per-client sysfs setup failed\n");
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
+
+	if (dev_priv->clients.root)
+		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 2/9] drm/i915: Update client name on context create
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 3/9] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Chris Wilson

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some clients have the DRM fd passed to them over a socket by the X server.

Grab the real client and pid when they create their first context and
update the exposed data for more useful enumeration.

To enable lockless access to client name and pid data from the following
patches, we also make these fields RCU protected. This covers the asynchronous
code paths: contexts which remain after the client has exited, and queries of
client name and pid which run in parallel with updates triggered by context
creation.
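
For reference, here is a simplified, standalone sketch of that pattern
(illustrative only; the function names are made up, and the actual code below
uses call_rcu() with a small wrapper struct so the update path never has to
block in synchronize_rcu()):

static ssize_t sketch_show_name(struct i915_drm_client *client, char *buf)
{
	ssize_t len;

	/* Readers only need an RCU read-side critical section. */
	rcu_read_lock();
	len = snprintf(buf, PAGE_SIZE, "%s", rcu_dereference(client->name));
	rcu_read_unlock();

	return len;
}

static void sketch_update_name(struct i915_drm_client *client, char *new_name)
{
	char *old_name;

	/* Writers serialize against each other with the client update_lock. */
	mutex_lock(&client->update_lock);
	old_name = rcu_replace_pointer(client->name, new_name,
				       lockdep_is_held(&client->update_lock));
	mutex_unlock(&client->update_lock);

	/*
	 * The old string may only be freed once every reader which could
	 * still be dereferencing it has left its critical section.
	 */
	synchronize_rcu();
	kfree(old_name);
}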

v2:
 * Do not leak the pid reference and borrow context idr_lock. (Chris)

v3:
 * More avoiding leaks. (Chris)

v4:
 * Move update completely to drm client. (Chris)
 * Do not lose previous client data on failure to re-register and simplify
   update to only touch what it needs.

v5:
 * Reuse ext_data local. (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c |   5 +
 drivers/gpu/drm/i915/i915_drm_client.c      | 103 ++++++++++++++++++--
 drivers/gpu/drm/i915/i915_drm_client.h      |  10 +-
 3 files changed, 106 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 11d9135cf21a..0a1d1db98d64 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -74,6 +74,7 @@
 #include "gt/intel_engine_user.h"
 #include "gt/intel_ring.h"
 
+#include "i915_drm_client.h"
 #include "i915_gem_context.h"
 #include "i915_globals.h"
 #include "i915_trace.h"
@@ -2356,6 +2357,10 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 		return -EIO;
 	}
 
+	ret = i915_drm_client_update(ext_data.fpriv->client, current);
+	if (ret)
+		return ret;
+
 	ext_data.ctx = i915_gem_create_context(i915, args->flags);
 	if (IS_ERR(ext_data.ctx))
 		return PTR_ERR(ext_data.ctx);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2067fbcdb795..342a11554573 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -7,6 +7,9 @@
 #include <linux/slab.h>
 #include <linux/types.h>
 
+#include <drm/drm_print.h>
+
+#include "i915_drv.h"
 #include "i915_drm_client.h"
 #include "i915_gem.h"
 #include "i915_utils.h"
@@ -22,10 +25,15 @@ show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct i915_drm_client *client =
 		container_of(attr, typeof(*client), attr.name);
+	int ret;
+
+	rcu_read_lock();
+	ret = snprintf(buf, PAGE_SIZE,
+		       READ_ONCE(client->closed) ? "<%s>" : "%s",
+		       rcu_dereference(client->name));
+	rcu_read_unlock();
 
-	return snprintf(buf, PAGE_SIZE,
-			READ_ONCE(client->closed) ? "<%s>" : "%s",
-			client->name);
+	return ret;
 }
 
 static ssize_t
@@ -33,10 +41,15 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct i915_drm_client *client =
 		container_of(attr, typeof(*client), attr.pid);
+	int ret;
 
-	return snprintf(buf, PAGE_SIZE,
-			READ_ONCE(client->closed) ? "<%u>" : "%u",
-			pid_nr(client->pid));
+	rcu_read_lock();
+	ret = snprintf(buf, PAGE_SIZE,
+		       READ_ONCE(client->closed) ? "<%u>" : "%u",
+		       pid_nr(rcu_dereference(client->pid)));
+	rcu_read_unlock();
+
+	return ret;
 }
 
 static int
@@ -101,8 +114,8 @@ __i915_drm_client_register(struct i915_drm_client *client,
 	if (!name)
 		return -ENOMEM;
 
-	client->pid = get_task_pid(task, PIDTYPE_PID);
-	client->name = name;
+	rcu_assign_pointer(client->pid, get_task_pid(task, PIDTYPE_PID));
+	rcu_assign_pointer(client->name, name);
 
 	if (!clients->root)
 		return 0; /* intel_fbdev_init registers a client before sysfs */
@@ -125,8 +138,8 @@ __i915_drm_client_unregister(struct i915_drm_client *client)
 {
 	__client_unregister_sysfs(client);
 
-	put_pid(fetch_and_zero(&client->pid));
-	kfree(fetch_and_zero(&client->name));
+	put_pid(rcu_replace_pointer(client->pid, NULL, true));
+	kfree(rcu_replace_pointer(client->name, NULL, true));
 }
 
 struct i915_drm_client *
@@ -140,6 +153,7 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
 		return ERR_PTR(-ENOMEM);
 
 	kref_init(&client->kref);
+	mutex_init(&client->update_lock);
 	client->clients = clients;
 
 	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
@@ -177,3 +191,72 @@ void i915_drm_client_close(struct i915_drm_client *client)
 	WRITE_ONCE(client->closed, true);
 	i915_drm_client_put(client);
 }
+
+struct client_update_free {
+	struct rcu_head rcu;
+	struct pid *pid;
+	char *name;
+};
+
+static void __client_update_free(struct rcu_head *rcu)
+{
+	struct client_update_free *old = container_of(rcu, typeof(*old), rcu);
+
+	put_pid(old->pid);
+	kfree(old->name);
+	kfree(old);
+}
+
+int
+i915_drm_client_update(struct i915_drm_client *client,
+		       struct task_struct *task)
+{
+	struct drm_i915_private *i915 =
+		container_of(client->clients, typeof(*i915), clients);
+	struct client_update_free *old;
+	struct pid *pid;
+	char *name;
+	int ret;
+
+	old = kmalloc(sizeof(*old), GFP_KERNEL);
+	if (!old)
+		return -ENOMEM;
+
+	ret = mutex_lock_interruptible(&client->update_lock);
+	if (ret)
+		goto out_free;
+
+	pid = get_task_pid(task, PIDTYPE_PID);
+	if (!pid)
+		goto out_pid;
+	if (pid == client->pid)
+		goto out_name;
+
+	name = kstrdup(task->comm, GFP_KERNEL);
+	if (!name) {
+		drm_notice(&i915->drm,
+			   "Failed to update client id=%u,name=%s,pid=%u! (%d)\n",
+			   client->id, client->name, pid_nr(client->pid), ret);
+		goto out_name;
+	}
+
+	init_rcu_head(&old->rcu);
+
+	old->pid = rcu_replace_pointer(client->pid, pid, true);
+	old->name = rcu_replace_pointer(client->name, name, true);
+
+	mutex_unlock(&client->update_lock);
+
+	call_rcu(&old->rcu, __client_update_free);
+
+	return 0;
+
+out_name:
+	put_pid(pid);
+out_pid:
+	mutex_unlock(&client->update_lock);
+out_free:
+	kfree(old);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index af6998c74d4c..11b48383881d 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -9,6 +9,7 @@
 #include <linux/device.h>
 #include <linux/kobject.h>
 #include <linux/kref.h>
+#include <linux/mutex.h>
 #include <linux/pid.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
@@ -26,9 +27,11 @@ struct i915_drm_client {
 
 	struct rcu_head rcu;
 
+	struct mutex update_lock; /* Serializes name and pid updates. */
+
 	unsigned int id;
-	struct pid *pid;
-	char *name;
+	struct pid __rcu *pid;
+	char __rcu *name;
 	bool closed;
 
 	struct i915_drm_clients *clients;
@@ -61,4 +64,7 @@ void i915_drm_client_close(struct i915_drm_client *client);
 struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
 					    struct task_struct *task);
 
+int i915_drm_client_update(struct i915_drm_client *client,
+			   struct task_struct *task);
+
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 3/9] drm/i915: Make GEM contexts track DRM clients
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 2/9] drm/i915: Update client name on context create Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 4/9] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx; +Cc: Chris Wilson

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

If we make GEM contexts keep a reference to i915_drm_client for the whole of
their lifetime, we can consolidate the current task pid and name usage by
getting them from the client.

v2:
 * Don't bother supporting selftests contexts from debugfs. (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 23 ++++++++++++---
 .../gpu/drm/i915/gem/i915_gem_context_types.h | 13 ++-------
 drivers/gpu/drm/i915/i915_debugfs.c           | 29 +++++++------------
 drivers/gpu/drm/i915/i915_gpu_error.c         | 21 ++++++++------
 4 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 0a1d1db98d64..984abd8cc76a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -340,8 +340,13 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
 
 static void i915_gem_context_free(struct i915_gem_context *ctx)
 {
+	struct i915_drm_client *client = ctx->client;
+
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 
+	if (client)
+		i915_drm_client_put(client);
+
 	spin_lock(&ctx->i915->gem.contexts.lock);
 	list_del(&ctx->link);
 	spin_unlock(&ctx->i915->gem.contexts.lock);
@@ -351,7 +356,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	if (ctx->timeline)
 		intel_timeline_put(ctx->timeline);
 
-	put_pid(ctx->pid);
 	mutex_destroy(&ctx->mutex);
 
 	kfree_rcu(ctx, rcu);
@@ -934,6 +938,7 @@ static int gem_context_register(struct i915_gem_context *ctx,
 				struct drm_i915_file_private *fpriv,
 				u32 *id)
 {
+	struct i915_drm_client *client;
 	struct i915_address_space *vm;
 	int ret;
 
@@ -945,15 +950,25 @@ static int gem_context_register(struct i915_gem_context *ctx,
 		WRITE_ONCE(vm->file, fpriv); /* XXX */
 	mutex_unlock(&ctx->mutex);
 
-	ctx->pid = get_task_pid(current, PIDTYPE_PID);
+	client = i915_drm_client_get(fpriv->client);
+
+	rcu_read_lock();
 	snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
-		 current->comm, pid_nr(ctx->pid));
+		 rcu_dereference(client->name),
+		 pid_nr(rcu_dereference(client->pid)));
+	rcu_read_unlock();
 
 	/* And finally expose ourselves to userspace via the idr */
 	ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL);
 	if (ret)
-		put_pid(fetch_and_zero(&ctx->pid));
+		goto err;
+
+	ctx->client = client;
 
+	return 0;
+
+err:
+	i915_drm_client_put(client);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 28760bd03265..b0e03380c690 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -96,20 +96,13 @@ struct i915_gem_context {
 	 */
 	struct i915_address_space __rcu *vm;
 
-	/**
-	 * @pid: process id of creator
-	 *
-	 * Note that who created the context may not be the principle user,
-	 * as the context may be shared across a local socket. However,
-	 * that should only affect the default context, all contexts created
-	 * explicitly by the client are expected to be isolated.
-	 */
-	struct pid *pid;
-
 	/** link: place with &drm_i915_private.context_list */
 	struct list_head link;
 	struct llist_node free_link;
 
+	/** client: struct i915_drm_client */
+	struct i915_drm_client *client;
+
 	/**
 	 * @ref: reference count
 	 *
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index aa35a59f1c7d..3a262122a4c1 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -326,17 +326,15 @@ static void print_context_stats(struct seq_file *m,
 				.vm = rcu_access_pointer(ctx->vm),
 			};
 			struct drm_file *file = ctx->file_priv->file;
-			struct task_struct *task;
 			char name[80];
 
 			rcu_read_lock();
+
 			idr_for_each(&file->object_idr, per_file_stats, &stats);
-			rcu_read_unlock();
 
-			rcu_read_lock();
-			task = pid_task(ctx->pid ?: file->pid, PIDTYPE_PID);
 			snprintf(name, sizeof(name), "%s",
-				 task ? task->comm : "<unknown>");
+				 rcu_dereference(ctx->client->name));
+
 			rcu_read_unlock();
 
 			print_file_stats(m, name, stats);
@@ -1055,20 +1053,13 @@ static int i915_context_status(struct seq_file *m, void *unused)
 		spin_unlock(&i915->gem.contexts.lock);
 
 		seq_puts(m, "HW context ");
-		if (ctx->pid) {
-			struct task_struct *task;
-
-			task = get_pid_task(ctx->pid, PIDTYPE_PID);
-			if (task) {
-				seq_printf(m, "(%s [%d]) ",
-					   task->comm, task->pid);
-				put_task_struct(task);
-			}
-		} else if (IS_ERR(ctx->file_priv)) {
-			seq_puts(m, "(deleted) ");
-		} else {
-			seq_puts(m, "(kernel) ");
-		}
+
+		rcu_read_lock();
+		seq_printf(m, "(%s [%d]) %s",
+			   rcu_dereference(ctx->client->name),
+			   pid_nr(rcu_dereference(ctx->client->pid)),
+			   ctx->client->closed ? "(closed) " : "");
+		rcu_read_unlock();
 
 		seq_putc(m, ctx->remap_slice ? 'R' : 'r');
 		seq_putc(m, '\n');
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 424ad975a360..07c1f98680f7 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1221,7 +1221,8 @@ static void record_request(const struct i915_request *request,
 	rcu_read_lock();
 	ctx = rcu_dereference(request->context->gem_context);
 	if (ctx)
-		erq->pid = pid_nr(ctx->pid);
+		erq->pid = I915_SELFTEST_ONLY(!ctx->client) ?
+			   0 : pid_nr(rcu_dereference(ctx->client->pid));
 	rcu_read_unlock();
 }
 
@@ -1241,23 +1242,25 @@ static bool record_context(struct i915_gem_context_coredump *e,
 			   const struct i915_request *rq)
 {
 	struct i915_gem_context *ctx;
-	struct task_struct *task;
 	bool simulated;
 
 	rcu_read_lock();
+
 	ctx = rcu_dereference(rq->context->gem_context);
 	if (ctx && !kref_get_unless_zero(&ctx->ref))
 		ctx = NULL;
-	rcu_read_unlock();
-	if (!ctx)
+	if (!ctx) {
+		rcu_read_unlock();
 		return true;
+	}
 
-	rcu_read_lock();
-	task = pid_task(ctx->pid, PIDTYPE_PID);
-	if (task) {
-		strcpy(e->comm, task->comm);
-		e->pid = task->pid;
+	if (I915_SELFTEST_ONLY(!ctx->client)) {
+		strcpy(e->comm, "[kernel]");
+	} else {
+		strcpy(e->comm, rcu_dereference(ctx->client->name));
+		e->pid = pid_nr(rcu_dereference(ctx->client->pid));
 	}
+
 	rcu_read_unlock();
 
 	e->sched_attr = ctx->sched;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 4/9] drm/i915: Track runtime spent in unreachable intel_contexts
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (2 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 3/9] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 5/9] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

As contexts are abandoned we want to remember how much GPU time they used
(per class) so later we can use it for smarter purposes.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 ++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  5 +++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 984abd8cc76a..d4229155853b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -257,7 +257,19 @@ static void free_engines_rcu(struct rcu_head *rcu)
 {
 	struct i915_gem_engines *engines =
 		container_of(rcu, struct i915_gem_engines, rcu);
+	struct i915_gem_context *ctx = engines->ctx;
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+
+	/* Transfer accumulated runtime to the parent GEM context. */
+	for_each_gem_engine(ce, engines, it) {
+		unsigned int class = ce->engine->uabi_class;
 
+		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
+		atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
+	}
+
+	i915_gem_context_put(ctx);
 	i915_sw_fence_fini(&engines->fence);
 	free_engines(engines);
 }
@@ -278,7 +290,6 @@ engines_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
 			list_del(&engines->link);
 			spin_unlock_irqrestore(&ctx->stale.lock, flags);
 		}
-		i915_gem_context_put(engines->ctx);
 		break;
 
 	case FENCE_FREE:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index b0e03380c690..f0d7441aafc8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -177,6 +177,11 @@ struct i915_gem_context {
 		spinlock_t lock;
 		struct list_head engines;
 	} stale;
+
+	/**
+	 * @past_runtime: Accumulation of freed intel_context pphwsp runtimes.
+	 */
+	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 #endif /* __I915_GEM_CONTEXT_TYPES_H__ */
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 5/9] drm/i915: Track runtime spent in closed GEM contexts
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (3 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 4/9] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 6/9] drm/i915: Track all user contexts per client Tvrtko Ursulin
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

As GEM contexts are closed we want to have the DRM client remember how much
GPU time they used (per class) so later we can use it for smarter purposes.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 12 +++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h      |  7 +++++++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index d4229155853b..4f623eee4f70 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -355,8 +355,18 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 
-	if (client)
+	if (client) {
+		unsigned int i;
+
+		/* Transfer accumulated runtime to the parent drm client. */
+		BUILD_BUG_ON(ARRAY_SIZE(client->past_runtime) !=
+			     ARRAY_SIZE(ctx->past_runtime));
+		for (i = 0; i < ARRAY_SIZE(client->past_runtime); i++)
+			atomic64_add(atomic64_read(&ctx->past_runtime[i]),
+				     &client->past_runtime[i]);
+
 		i915_drm_client_put(client);
+	}
 
 	spin_lock(&ctx->i915->gem.contexts.lock);
 	list_del(&ctx->link);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 11b48383881d..29b116606596 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -15,6 +15,8 @@
 #include <linux/sched.h>
 #include <linux/xarray.h>
 
+#include "gt/intel_engine_types.h"
+
 struct i915_drm_clients {
 	struct xarray xarray;
 	u32 next_id;
@@ -41,6 +43,11 @@ struct i915_drm_client {
 		struct device_attribute pid;
 		struct device_attribute name;
 	} attr;
+
+	/**
+	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
+	 */
+	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 void i915_drm_clients_init(struct i915_drm_clients *clients);
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 6/9] drm/i915: Track all user contexts per client
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (4 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 5/9] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 7/9] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

We soon want to start answering questions like how much GPU time contexts
belonging to a client which has exited are still using.

To enable this we start tracking all contexts belonging to a client on a
separate list.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 8 ++++++++
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h | 3 +++
 drivers/gpu/drm/i915/i915_drm_client.c            | 3 +++
 drivers/gpu/drm/i915/i915_drm_client.h            | 5 +++++
 4 files changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 4f623eee4f70..96f70c84cb29 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -358,6 +358,10 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	if (client) {
 		unsigned int i;
 
+		spin_lock(&client->ctx_lock);
+		list_del_rcu(&ctx->client_link);
+		spin_unlock(&client->ctx_lock);
+
 		/* Transfer accumulated runtime to the parent drm client. */
 		BUILD_BUG_ON(ARRAY_SIZE(client->past_runtime) !=
 			     ARRAY_SIZE(ctx->past_runtime));
@@ -986,6 +990,10 @@ static int gem_context_register(struct i915_gem_context *ctx,
 
 	ctx->client = client;
 
+	spin_lock(&client->ctx_lock);
+	list_add_tail_rcu(&ctx->client_link, &client->ctx_list);
+	spin_unlock(&client->ctx_lock);
+
 	return 0;
 
 err:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index f0d7441aafc8..255fcc469d9b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -103,6 +103,9 @@ struct i915_gem_context {
 	/** client: struct i915_drm_client */
 	struct i915_drm_client *client;
 
+	/** link: &drm_client.context_list */
+	struct list_head client_link;
+
 	/**
 	 * @ref: reference count
 	 *
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 342a11554573..c88d9ff448e0 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -154,6 +154,9 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
 
 	kref_init(&client->kref);
 	mutex_init(&client->update_lock);
+	spin_lock_init(&client->ctx_lock);
+	INIT_LIST_HEAD(&client->ctx_list);
+
 	client->clients = clients;
 
 	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 29b116606596..0be27aa9bbda 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -9,10 +9,12 @@
 #include <linux/device.h>
 #include <linux/kobject.h>
 #include <linux/kref.h>
+#include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/pid.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
+#include <linux/spinlock.h>
 #include <linux/xarray.h>
 
 #include "gt/intel_engine_types.h"
@@ -36,6 +38,9 @@ struct i915_drm_client {
 	char __rcu *name;
 	bool closed;
 
+	spinlock_t ctx_lock; /* For add/remove from ctx_list. */
+	struct list_head ctx_list; /* List of contexts belonging to client. */
+
 	struct i915_drm_clients *clients;
 
 	struct kobject *root;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 7/9] drm/i915: Expose per-engine client busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 6/9] drm/i915: Track all user contexts per client Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 8/9] drm/i915: Track context current active time Tvrtko Ursulin
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose per-client and per-engine busyness under the previously added sysfs
client root.

The new files are one per engine class and are located under the 'busy'
directory. Each contains a monotonically increasing nanosecond-resolution
total of the time the client's jobs have spent executing on the GPU.

This enables userspace to create a top-like tool for GPU utilization:

==========================================================================
intel-gpu-top -  935/ 935 MHz;    0% RC6; 14.73 Watts;     1097 irqs/s

      IMC reads:     1401 MiB/s
     IMC writes:        4 MiB/s

          ENGINE      BUSY                                 MI_SEMA MI_WAIT
     Render/3D/0   63.73% |███████████████████           |      3%      0%
       Blitter/0    9.53% |██▊                           |      6%      0%
         Video/0   39.32% |███████████▊                  |     16%      0%
         Video/1   15.62% |████▋                         |      0%      0%
  VideoEnhance/0    0.00% |                              |      0%      0%

  PID            NAME     RCS          BCS          VCS         VECS
 4084        gem_wsim |█████▌     ||█          ||           ||           |
 4086        gem_wsim |█▌         ||           ||███        ||           |
==========================================================================

v2: Use intel_context_engine_get_busy_time.
v3: New directory structure.
v4: Rebase.
v5: sysfs_attr_init.
v6: Small tidy in i915_gem_add_client.
v7: Rebase to be engine class based.
v8:
 * Always enable stats.
 * Walk all client contexts.
v9:
 * Skip unsupported engine classes. (Chris)
 * Use scheduler caps. (Chris)
v10:
 * Use pphwsp runtime only.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drm_client.c | 110 ++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h |  11 +++
 2 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index c88d9ff448e0..0a2d933fe83c 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -9,8 +9,13 @@
 
 #include <drm/drm_print.h>
 
+#include <uapi/drm/i915_drm.h>
+
 #include "i915_drv.h"
 #include "i915_drm_client.h"
+#include "gem/i915_gem_context.h"
+#include "gt/intel_engine_user.h"
+#include "i915_drv.h"
 #include "i915_gem.h"
 #include "i915_utils.h"
 
@@ -52,6 +57,104 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return ret;
 }
 
+static u64
+pphwsp_busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines *engines = rcu_dereference(ctx->engines);
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	u64 total = 0;
+
+	for_each_gem_engine(ce, engines, it) {
+		if (ce->engine->uabi_class == class)
+			total += ce->runtime.total;
+	}
+
+	return total;
+}
+
+static ssize_t
+show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	unsigned int class = i915_attr->engine_class;
+	struct i915_drm_client *client = i915_attr->client;
+	u64 total = atomic64_read(&client->past_runtime[class]);
+	struct list_head *list = &client->ctx_list;
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link) {
+		total += atomic64_read(&ctx->past_runtime[class]);
+		total += pphwsp_busy_add(ctx, class);
+	}
+	rcu_read_unlock();
+
+	total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
+}
+
+static const char * const uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "0",
+	[I915_ENGINE_CLASS_COPY] = "1",
+	[I915_ENGINE_CLASS_VIDEO] = "2",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "3",
+};
+
+static int
+__client_register_sysfs_busy(struct i915_drm_client *client)
+{
+	struct i915_drm_clients *clients = client->clients;
+	struct drm_i915_private *i915 =
+		container_of(clients, typeof(*i915), clients);
+	unsigned int i;
+	int ret = 0;
+
+	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
+		return 0;
+
+	client->busy_root = kobject_create_and_add("busy", client->root);
+	if (!client->busy_root)
+		return -ENOMEM;
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+		struct i915_engine_busy_attribute *i915_attr =
+			&client->attr.busy[i];
+		struct device_attribute *attr = &i915_attr->attr;
+
+		if (!intel_engine_lookup_user(i915, i, 0))
+			continue;
+
+		i915_attr->client = client;
+		i915_attr->i915 = i915;
+		i915_attr->engine_class = i;
+
+		sysfs_attr_init(&attr->attr);
+
+		attr->attr.name = uabi_class_names[i];
+		attr->attr.mode = 0444;
+		attr->show = show_client_busy;
+
+		ret = sysfs_create_file(client->busy_root,
+					(struct attribute *)attr);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+
+err:
+	kobject_put(client->busy_root);
+	return ret;
+}
+
+static void __client_unregister_sysfs_busy(struct i915_drm_client *client)
+{
+	kobject_put(fetch_and_zero(&client->busy_root));
+}
+
 static int
 __client_register_sysfs(struct i915_drm_client *client)
 {
@@ -88,9 +191,12 @@ __client_register_sysfs(struct i915_drm_client *client)
 
 		ret = sysfs_create_file(client->root, (struct attribute *)attr);
 		if (ret)
-			break;
+			goto out;
 	}
 
+	ret = __client_register_sysfs_busy(client);
+
+out:
 	if (ret)
 		kobject_put(client->root);
 
@@ -99,6 +205,8 @@ __client_register_sysfs(struct i915_drm_client *client)
 
 static void __client_unregister_sysfs(struct i915_drm_client *client)
 {
+	__client_unregister_sysfs_busy(client);
+
 	kobject_put(fetch_and_zero(&client->root));
 }
 
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 0be27aa9bbda..98c5fc24fcb4 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -26,6 +26,15 @@ struct i915_drm_clients {
 	struct kobject *root;
 };
 
+struct i915_drm_client;
+
+struct i915_engine_busy_attribute {
+	struct device_attribute attr;
+	struct drm_i915_private *i915;
+	struct i915_drm_client *client;
+	unsigned int engine_class;
+};
+
 struct i915_drm_client {
 	struct kref kref;
 
@@ -44,9 +53,11 @@ struct i915_drm_client {
 	struct i915_drm_clients *clients;
 
 	struct kobject *root;
+	struct kobject *busy_root;
 	struct {
 		struct device_attribute pid;
 		struct device_attribute name;
+		struct i915_engine_busy_attribute busy[MAX_ENGINE_CLASS + 1];
 	} attr;
 
 	/**
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 8/9] drm/i915: Track context current active time
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 7/9] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 9/9] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Track context active (on hardware) status together with the start
timestamp.

This will be used to provide better granularity of context runtime reporting
in conjunction with the already tracked pphwsp accumulated runtime.

The latter is only updated on context save so does not give us visibility into
any currently executing work.

As part of the patch the existing runtime tracking data is moved under the
new ce->stats member and updated under a new seqlock. This provides the
ability to atomically read out accumulated plus active runtime.
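
As an illustration of the read side (a sketch only, with a made-up function
name; later patches in the series add the in-tree consumers), combining the
accumulated and the in-flight time consistently could look roughly like:

static u64 sketch_total_runtime_ns(struct intel_context *ce)
{
	const u32 period =
		RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
	struct intel_context_stats *stats = &ce->stats;
	unsigned int seq;
	u64 total;

	do {
		seq = read_seqbegin(&stats->lock);

		/* Saved runtime, in CS timestamp ticks, scaled to ns. */
		total = stats->runtime.total * period;

		/* Plus wall time of any currently executing (unsaved) work. */
		if (stats->active)
			total += ktime_to_ns(ktime_sub(ktime_get(),
						       stats->start));
	} while (read_seqretry(&stats->lock, seq));

	return total;
}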

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  3 +-
 drivers/gpu/drm/i915/gt/intel_context.c       | 18 +++++-
 drivers/gpu/drm/i915/gt/intel_context.h       |  6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h | 24 +++++---
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 58 ++++++++++++++-----
 drivers/gpu/drm/i915/gt/selftest_lrc.c        | 10 ++--
 drivers/gpu/drm/i915/i915_drm_client.c        |  2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  4 +-
 8 files changed, 91 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 96f70c84cb29..e9c33662b90c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -266,7 +266,8 @@ static void free_engines_rcu(struct rcu_head *rcu)
 		unsigned int class = ce->engine->uabi_class;
 
 		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
-		atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
+		atomic64_add(ce->stats.runtime.total,
+			     &ctx->past_runtime[class]);
 	}
 
 	i915_gem_context_put(ctx);
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index e4aece20bc80..473238c2a85b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -293,7 +293,7 @@ intel_context_init(struct intel_context *ce,
 	ce->sseu = engine->sseu;
 	ce->ring = __intel_context_ring_size(SZ_4K);
 
-	ewma_runtime_init(&ce->runtime.avg);
+	ewma_runtime_init(&ce->stats.runtime.avg);
 
 	ce->vm = i915_vm_get(engine->gt->vm);
 
@@ -301,6 +301,7 @@ intel_context_init(struct intel_context *ce,
 	INIT_LIST_HEAD(&ce->signals);
 
 	mutex_init(&ce->pin_mutex);
+	seqlock_init(&ce->stats.lock);
 
 	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire);
@@ -395,6 +396,21 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
 	return rq;
 }
 
+ktime_t intel_context_get_active_time(struct intel_context *ce)
+{
+	struct intel_context_stats *stats = &ce->stats;
+	unsigned int seq;
+	ktime_t total = 0;
+
+	do {
+		seq = read_seqbegin(&stats->lock);
+		if (stats->active)
+			total = ktime_sub(ktime_get(), stats->start);
+	} while (read_seqretry(&stats->lock, seq));
+
+	return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 07be021882cc..fdd5f4366db2 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -238,7 +238,7 @@ static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
 	const u32 period =
 		RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
 
-	return READ_ONCE(ce->runtime.total) * period;
+	return READ_ONCE(ce->stats.runtime.total) * period;
 }
 
 static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce)
@@ -246,7 +246,9 @@ static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce)
 	const u32 period =
 		RUNTIME_INFO(ce->engine->i915)->cs_timestamp_period_ns;
 
-	return mul_u32_u32(ewma_runtime_read(&ce->runtime.avg), period);
+	return mul_u32_u32(ewma_runtime_read(&ce->stats.runtime.avg), period);
 }
 
+ktime_t intel_context_get_active_time(struct intel_context *ce);
+
 #endif /* __INTEL_CONTEXT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 07cb83a0d017..5078ad822da9 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -12,6 +12,7 @@
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
+#include <linux/seqlock.h>
 
 #include "i915_active_types.h"
 #include "i915_utils.h"
@@ -72,14 +73,21 @@ struct intel_context {
 	u64 lrc_desc;
 	u32 tag; /* cookie passed to HW to track this context on submission */
 
-	/* Time on GPU as tracked by the hw. */
-	struct {
-		struct ewma_runtime avg;
-		u64 total;
-		u32 last;
-		I915_SELFTEST_DECLARE(u32 num_underflow);
-		I915_SELFTEST_DECLARE(u32 max_underflow);
-	} runtime;
+	/** stats: Context GPU engine busyness tracking. */
+	struct intel_context_stats {
+		seqlock_t lock;
+		bool active;
+		ktime_t start;
+
+		/* Time on GPU as tracked by the hw. */
+		struct {
+			struct ewma_runtime avg;
+			u64 total;
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);
+			I915_SELFTEST_DECLARE(u32 max_underflow);
+		} runtime;
+	} stats;
 
 	unsigned int active_count; /* protected by timeline->mutex */
 
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 6fbad5e2343f..0c498ce4610a 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1165,7 +1165,7 @@ static void restore_default_state(struct intel_context *ce,
 		       engine->context_size - PAGE_SIZE);
 
 	execlists_init_reg_state(regs, ce, engine, ce->ring, false);
-	ce->runtime.last = intel_context_get_runtime(ce);
+	ce->stats.runtime.last = intel_context_get_runtime(ce);
 }
 
 static void reset_active(struct i915_request *rq,
@@ -1207,35 +1207,61 @@ static void reset_active(struct i915_request *rq,
 	ce->lrc_desc |= CTX_DESC_FORCE_RESTORE;
 }
 
-static void st_update_runtime_underflow(struct intel_context *ce, s32 dt)
+static void
+st_update_runtime_underflow(struct intel_context_stats *stats, s32 dt)
 {
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
-	ce->runtime.num_underflow += dt < 0;
-	ce->runtime.max_underflow = max_t(u32, ce->runtime.max_underflow, -dt);
+	stats->runtime.num_underflow += dt < 0;
+	stats->runtime.max_underflow =
+		max_t(u32, stats->runtime.max_underflow, -dt);
 #endif
 }
 
 static void intel_context_update_runtime(struct intel_context *ce)
 {
+	struct intel_context_stats *stats = &ce->stats;
 	u32 old;
 	s32 dt;
 
 	if (intel_context_is_barrier(ce))
 		return;
 
-	old = ce->runtime.last;
-	ce->runtime.last = intel_context_get_runtime(ce);
-	dt = ce->runtime.last - old;
+	old = stats->runtime.last;
+	stats->runtime.last = intel_context_get_runtime(ce);
+	dt = stats->runtime.last - old;
 
 	if (unlikely(dt <= 0)) {
 		CE_TRACE(ce, "runtime underflow: last=%u, new=%u, delta=%d\n",
-			 old, ce->runtime.last, dt);
-		st_update_runtime_underflow(ce, dt);
+			 old, stats->runtime.last, dt);
+		st_update_runtime_underflow(stats, dt);
 		return;
 	}
 
-	ewma_runtime_add(&ce->runtime.avg, dt);
-	ce->runtime.total += dt;
+	ewma_runtime_add(&stats->runtime.avg, dt);
+	stats->runtime.total += dt;
+}
+
+static void intel_context_stats_start(struct intel_context *ce)
+{
+	struct intel_context_stats *stats = &ce->stats;
+	unsigned long flags;
+
+	write_seqlock_irqsave(&stats->lock, flags);
+	stats->start = ktime_get();
+	stats->active = true;
+	write_sequnlock_irqrestore(&stats->lock, flags);
+}
+
+static void intel_context_stats_stop(struct intel_context *ce)
+{
+	struct intel_context_stats *stats = &ce->stats;
+	unsigned long flags;
+
+	write_seqlock_irqsave(&stats->lock, flags);
+	stats->active = false;
+	stats->start = 0;
+	intel_context_update_runtime(ce);
+	write_sequnlock_irqrestore(&stats->lock, flags);
 }
 
 static inline struct intel_engine_cs *
@@ -1305,7 +1331,7 @@ static inline void
 __execlists_schedule_out(struct i915_request *rq,
 			 struct intel_engine_cs * const engine)
 {
-	struct intel_context * const ce = rq->context;
+	struct intel_context *ce = rq->context;
 
 	/*
 	 * NB process_csb() is not under the engine->active.lock and hence
@@ -1321,8 +1347,8 @@ __execlists_schedule_out(struct i915_request *rq,
 	    i915_request_completed(rq))
 		intel_engine_add_retire(engine, ce->timeline);
 
-	intel_context_update_runtime(ce);
 	intel_engine_context_out(engine);
+	intel_context_stats_stop(ce);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 	intel_gt_pm_put_async(engine->gt);
 
@@ -1840,15 +1866,19 @@ static unsigned long active_timeslice(const struct intel_engine_cs *engine)
 
 static void set_timeslice(struct intel_engine_cs *engine)
 {
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	unsigned long duration;
 
+	if (*execlists->active)
+		intel_context_stats_start((*execlists->active)->context);
+
 	if (!intel_engine_has_timeslices(engine))
 		return;
 
 	duration = active_timeslice(engine);
 	ENGINE_TRACE(engine, "bump timeslicing, interval:%lu", duration);
 
-	set_timer_ms(&engine->execlists.timer, duration);
+	set_timer_ms(&execlists->timer, duration);
 }
 
 static void start_timeslice(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 6f5e35afe1b2..c8179ab04747 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -5508,8 +5508,8 @@ static int __live_pphwsp_runtime(struct intel_engine_cs *engine)
 	if (IS_ERR(ce))
 		return PTR_ERR(ce);
 
-	ce->runtime.num_underflow = 0;
-	ce->runtime.max_underflow = 0;
+	ce->stats.runtime.num_underflow = 0;
+	ce->stats.runtime.max_underflow = 0;
 
 	do {
 		unsigned int loop = 1024;
@@ -5547,11 +5547,11 @@ static int __live_pphwsp_runtime(struct intel_engine_cs *engine)
 		intel_context_get_avg_runtime_ns(ce));
 
 	err = 0;
-	if (ce->runtime.num_underflow) {
+	if (ce->stats.runtime.num_underflow) {
 		pr_err("%s: pphwsp underflow %u time(s), max %u cycles!\n",
 		       engine->name,
-		       ce->runtime.num_underflow,
-		       ce->runtime.max_underflow);
+		       ce->stats.runtime.num_underflow,
+		       ce->stats.runtime.max_underflow);
 		GEM_TRACE_DUMP();
 		err = -EOVERFLOW;
 	}
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 0a2d933fe83c..485a2b75d3e1 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -67,7 +67,7 @@ pphwsp_busy_add(struct i915_gem_context *ctx, unsigned int class)
 
 	for_each_gem_engine(ce, engines, it) {
 		if (ce->engine->uabi_class == class)
-			total += ce->runtime.total;
+			total += ce->stats.runtime.total;
 	}
 
 	return total;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 07c1f98680f7..b344272ddfb5 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1267,8 +1267,8 @@ static bool record_context(struct i915_gem_context_coredump *e,
 	e->guilty = atomic_read(&ctx->guilty_count);
 	e->active = atomic_read(&ctx->active_count);
 
-	e->total_runtime = rq->context->runtime.total;
-	e->avg_runtime = ewma_runtime_read(&rq->context->runtime.avg);
+	e->total_runtime = rq->context->stats.runtime.total;
+	e->avg_runtime = ewma_runtime_read(&rq->context->stats.runtime.avg);
 
 	simulated = i915_gem_context_no_error_capture(ctx);
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] [PATCH 9/9] drm/i915: Prefer software tracked context busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (7 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 8/9] drm/i915: Track context current active time Tvrtko Ursulin
@ 2020-04-15 10:11 ` Tvrtko Ursulin
  2020-04-15 11:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness Patchwork
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-04-15 10:11 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

When available, prefer software tracked context busyness because it also
provides visibility into currently executing contexts.
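
As a rough illustration of how this could be consumed, the sketch below
samples one of the per-client busy files twice and turns the delta into a
utilisation percentage. It is not part of the patch; the sysfs path, the
client id and the assumption that the file reports accumulated busy
nanoseconds are illustrative only, following the sysfs hunks in this
series rather than a settled ABI:

	/*
	 * Illustrative userspace sampler, not part of the patch.
	 * Assumes the file at 'path' holds accumulated busy nanoseconds.
	 */
	#include <inttypes.h>
	#include <stdio.h>
	#include <unistd.h>

	static uint64_t read_busy_ns(const char *path)
	{
		uint64_t val = 0;
		FILE *f = fopen(path, "r");

		if (f) {
			if (fscanf(f, "%" SCNu64, &val) != 1)
				val = 0;
			fclose(f);
		}

		return val;
	}

	int main(void)
	{
		/* Hypothetical client 7, engine class 0 (render). */
		const char *path = "/sys/class/drm/card0/clients/7/busy/0";
		uint64_t t0 = read_busy_ns(path);

		sleep(1);

		printf("Render/3D: %.2f%% busy\n",
		       (read_busy_ns(path) - t0) / 1e9 * 100.0);

		return 0;
	}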

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drm_client.c | 68 ++++++++++++++++++++++++--
 1 file changed, 63 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 485a2b75d3e1..31f0d373caae 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -96,6 +96,61 @@ show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
 }
 
+static u64
+sw_busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines *engines = rcu_dereference(ctx->engines);
+	u32 period_ns = RUNTIME_INFO(ctx->i915)->cs_timestamp_period_ns;
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	u64 total = 0;
+
+	for_each_gem_engine(ce, engines, it) {
+		struct intel_context_stats *stats;
+		unsigned int seq;
+		u64 t;
+
+		if (ce->engine->uabi_class != class)
+			continue;
+
+		stats = &ce->stats;
+
+		do {
+			seq = read_seqbegin(&stats->lock);
+			t = ce->stats.runtime.total * period_ns;
+			t += intel_context_get_active_time(ce);
+		} while (read_seqretry(&stats->lock, seq));
+
+		total += t;
+	}
+
+	return total;
+}
+
+static ssize_t
+show_client_sw_busy(struct device *kdev,
+		    struct device_attribute *attr,
+		    char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	unsigned int class = i915_attr->engine_class;
+	struct i915_drm_client *client = i915_attr->client;
+	u32 period_ns = RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
+	u64 total = atomic64_read(&client->past_runtime[class]) * period_ns;
+	struct list_head *list = &client->ctx_list;
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link) {
+		total += atomic64_read(&ctx->past_runtime[class]) * period_ns +
+			 sw_busy_add(ctx, class);
+	}
+	rcu_read_unlock();
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
+}
+
 static const char * const uabi_class_names[] = {
 	[I915_ENGINE_CLASS_RENDER] = "0",
 	[I915_ENGINE_CLASS_COPY] = "1",
@@ -109,6 +164,8 @@ __client_register_sysfs_busy(struct i915_drm_client *client)
 	struct i915_drm_clients *clients = client->clients;
 	struct drm_i915_private *i915 =
 		container_of(clients, typeof(*i915), clients);
+	bool sw_stats = i915->caps.scheduler &
+			I915_SCHEDULER_CAP_ENGINE_BUSY_STATS;
 	unsigned int i;
 	int ret = 0;
 
@@ -135,18 +192,19 @@ __client_register_sysfs_busy(struct i915_drm_client *client)
 
 		attr->attr.name = uabi_class_names[i];
 		attr->attr.mode = 0444;
-		attr->show = show_client_busy;
+		attr->show = sw_stats ?
+			     show_client_sw_busy : show_client_busy;
 
 		ret = sysfs_create_file(client->busy_root,
 					(struct attribute *)attr);
 		if (ret)
-			goto err;
+			goto out;
 	}
 
-	return 0;
+out:
+	if (ret)
+		kobject_put(client->busy_root);
 
-err:
-	kobject_put(client->busy_root);
 	return ret;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (8 preceding siblings ...)
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 9/9] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
@ 2020-04-15 11:05 ` Patchwork
  2020-04-15 11:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-04-15 11:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
027b0636d391 drm/i915: Expose list of clients in sysfs
-:78: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#78: 
new file mode 100644

-:268: WARNING:SPDX_LICENSE_TAG: Improper SPDX comment style for 'drivers/gpu/drm/i915/i915_drm_client.h', please use '/*' instead
#268: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

-:268: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#268: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

total: 0 errors, 3 warnings, 0 checks, 367 lines checked
a763b56f5eed drm/i915: Update client name on context create
-:186: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#186: FILE: drivers/gpu/drm/i915/i915_drm_client.c:237:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 188 lines checked
7aaaba098c18 drm/i915: Make GEM contexts track DRM clients
0cd24e79ea05 drm/i915: Track runtime spent in unreachable intel_contexts
7d61e2172ad1 drm/i915: Track runtime spent in closed GEM contexts
80f8fe2f733c drm/i915: Track all user contexts per client
96b05255f5c0 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
a89a065b63bf drm/i915: Track context current active time
-:138: WARNING:LINE_SPACING: Missing a blank line after declarations
#138: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:87:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 257 lines checked
e9b16b5095f8 drm/i915: Prefer software tracked context busyness


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (9 preceding siblings ...)
  2020-04-15 11:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness Patchwork
@ 2020-04-15 11:11 ` Patchwork
  2020-04-15 11:25 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-04-15 11:11 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.6.0
Commit: drm/i915: Expose list of clients in sysfs
Okay!

Commit: drm/i915: Update client name on context create
+drivers/gpu/drm/i915/i915_drm_client.c:130:23:    expected struct pid *pid
+drivers/gpu/drm/i915/i915_drm_client.c:130:23:    got struct pid [noderef] <asn:4> *pid
+drivers/gpu/drm/i915/i915_drm_client.c:130:23: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/i915/i915_drm_client.c:131:21:    expected void const *
+drivers/gpu/drm/i915/i915_drm_client.c:131:21:    got char [noderef] <asn:4> *name
+drivers/gpu/drm/i915/i915_drm_client.c:131:21: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/i915/i915_drm_client.c:232:17: error: incompatible types in comparison expression (different address spaces)

Commit: drm/i915: Make GEM contexts track DRM clients
Okay!

Commit: drm/i915: Track runtime spent in unreachable intel_contexts
Okay!

Commit: drm/i915: Track runtime spent in closed GEM contexts
Okay!

Commit: drm/i915: Track all user contexts per client
Okay!

Commit: drm/i915: Expose per-engine client busyness
Okay!

Commit: drm/i915: Track context current active time
Okay!

Commit: drm/i915: Prefer software tracked context busyness
Okay!


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.DOCS: warning for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (10 preceding siblings ...)
  2020-04-15 11:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2020-04-15 11:25 ` Patchwork
  2020-04-15 11:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2020-04-16  8:01 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  13 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-04-15 11:25 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
/home/cidrm/kernel/Documentation/gpu/i915.rst:610: WARNING: duplicate label gpu/i915:layout, other instance in /home/cidrm/kernel/Documentation/gpu/i915.rst


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (11 preceding siblings ...)
  2020-04-15 11:25 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2020-04-15 11:34 ` Patchwork
  2020-04-16  8:01 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  13 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-04-15 11:34 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8300 -> Patchwork_17306
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/index.html

Known issues
------------

  Here are the changes found in Patchwork_17306 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@hangcheck:
    - fi-icl-y:           [PASS][1] -> [INCOMPLETE][2] ([i915#1580])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/fi-icl-y/igt@i915_selftest@live@hangcheck.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/fi-icl-y/igt@i915_selftest@live@hangcheck.html

  * igt@kms_chamelium@dp-edid-read:
    - fi-icl-u2:          [PASS][3] -> [FAIL][4] ([i915#976])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/fi-icl-u2/igt@kms_chamelium@dp-edid-read.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/fi-icl-u2/igt@kms_chamelium@dp-edid-read.html

  
  [i915#1580]: https://gitlab.freedesktop.org/drm/intel/issues/1580
  [i915#976]: https://gitlab.freedesktop.org/drm/intel/issues/976


Participating hosts (48 -> 44)
------------------------------

  Additional (1): fi-kbl-7560u 
  Missing    (5): fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8300 -> Patchwork_17306

  CI-20190529: 20190529
  CI_DRM_8300: 02f5d84db84f885cba1f8d258b23e9ea0f2d922e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5590: c7b4a43942be32245b1c00b5b4a38401d8ca6e0d @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17306: e9b16b5095f854deeaee59534fa4047771ace61e @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

e9b16b5095f8 drm/i915: Prefer software tracked context busyness
a89a065b63bf drm/i915: Track context current active time
96b05255f5c0 drm/i915: Expose per-engine client busyness
80f8fe2f733c drm/i915: Track all user contexts per client
7d61e2172ad1 drm/i915: Track runtime spent in closed GEM contexts
0cd24e79ea05 drm/i915: Track runtime spent in unreachable intel_contexts
7aaaba098c18 drm/i915: Make GEM contexts track DRM clients
a763b56f5eed drm/i915: Update client name on context create
027b0636d391 drm/i915: Expose list of clients in sysfs

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/index.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
                   ` (12 preceding siblings ...)
  2020-04-15 11:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-04-16  8:01 ` Patchwork
  13 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-04-16  8:01 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8300_full -> Patchwork_17306_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_17306_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_params@invalid-bsd-ring:
    - shard-iclb:         [PASS][1] -> [SKIP][2] ([fdo#109276])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb4/igt@gem_exec_params@invalid-bsd-ring.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb3/igt@gem_exec_params@invalid-bsd-ring.html

  * igt@gem_workarounds@suspend-resume-fd:
    - shard-kbl:          [PASS][3] -> [DMESG-WARN][4] ([i915#180]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-kbl2/igt@gem_workarounds@suspend-resume-fd.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-kbl4/igt@gem_workarounds@suspend-resume-fd.html

  * igt@i915_selftest@live@requests:
    - shard-tglb:         [PASS][5] -> [INCOMPLETE][6] ([i915#1531])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-tglb2/igt@i915_selftest@live@requests.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-tglb7/igt@i915_selftest@live@requests.html

  * igt@kms_busy@basic-flip-pipe-a:
    - shard-hsw:          [PASS][7] -> [INCOMPLETE][8] ([i915#61])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-hsw1/igt@kms_busy@basic-flip-pipe-a.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-hsw8/igt@kms_busy@basic-flip-pipe-a.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding:
    - shard-apl:          [PASS][9] -> [FAIL][10] ([i915#54] / [i915#95])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-apl2/igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-apl3/igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding.html

  * igt@kms_cursor_legacy@all-pipes-torture-bo:
    - shard-kbl:          [PASS][11] -> [DMESG-WARN][12] ([i915#128])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-kbl7/igt@kms_cursor_legacy@all-pipes-torture-bo.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-kbl2/igt@kms_cursor_legacy@all-pipes-torture-bo.html

  * igt@kms_flip@plain-flip-fb-recreate:
    - shard-skl:          [PASS][13] -> [FAIL][14] ([i915#34])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-skl2/igt@kms_flip@plain-flip-fb-recreate.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-skl3/igt@kms_flip@plain-flip-fb-recreate.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [PASS][15] -> [FAIL][16] ([fdo#108145] / [i915#265])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-skl8/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_psr@no_drrs:
    - shard-iclb:         [PASS][17] -> [FAIL][18] ([i915#173])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb8/igt@kms_psr@no_drrs.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb1/igt@kms_psr@no_drrs.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][19] -> [SKIP][20] ([fdo#109441]) +2 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb7/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_vblank@pipe-c-accuracy-idle:
    - shard-skl:          [PASS][21] -> [FAIL][22] ([i915#43])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-skl5/igt@kms_vblank@pipe-c-accuracy-idle.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-skl7/igt@kms_vblank@pipe-c-accuracy-idle.html

  
#### Possible fixes ####

  * {igt@gem_ctx_isolation@preservation-s3@rcs0}:
    - shard-apl:          [DMESG-WARN][23] ([i915#180]) -> [PASS][24] +1 similar issue
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-apl6/igt@gem_ctx_isolation@preservation-s3@rcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-apl3/igt@gem_ctx_isolation@preservation-s3@rcs0.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [FAIL][25] ([i915#454]) -> [PASS][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb6/igt@i915_pm_dc@dc6-psr.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb8/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-hsw:          [WARN][27] ([i915#1519]) -> [PASS][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-hsw6/igt@i915_pm_rc6_residency@rc6-idle.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-hsw7/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@kms_cursor_crc@pipe-a-cursor-128x128-sliding:
    - shard-apl:          [FAIL][29] ([i915#54] / [i915#95]) -> [PASS][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-apl8/igt@kms_cursor_crc@pipe-a-cursor-128x128-sliding.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-apl2/igt@kms_cursor_crc@pipe-a-cursor-128x128-sliding.html

  * igt@kms_draw_crc@draw-method-rgb565-render-untiled:
    - shard-glk:          [FAIL][31] ([i915#52] / [i915#54]) -> [PASS][32] +1 similar issue
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-glk4/igt@kms_draw_crc@draw-method-rgb565-render-untiled.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-glk8/igt@kms_draw_crc@draw-method-rgb565-render-untiled.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-render:
    - shard-snb:          [SKIP][33] ([fdo#109271]) -> [PASS][34] +1 similar issue
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-snb2/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-render.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-snb2/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-render.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [FAIL][35] ([i915#1188]) -> [PASS][36] +1 similar issue
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-skl4/igt@kms_hdr@bpc-switch-suspend.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-skl6/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_psr@psr2_cursor_blt:
    - shard-iclb:         [SKIP][37] ([fdo#109441]) -> [PASS][38] +1 similar issue
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb1/igt@kms_psr@psr2_cursor_blt.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb2/igt@kms_psr@psr2_cursor_blt.html

  * igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend:
    - shard-skl:          [INCOMPLETE][39] ([i915#69]) -> [PASS][40]
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-skl9/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-skl2/igt@kms_vblank@pipe-a-ts-continuation-dpms-suspend.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-kbl:          [DMESG-WARN][41] ([i915#180]) -> [PASS][42] +4 similar issues
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-kbl7/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-kbl6/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  * {igt@prime_vgem@sync@bcs0}:
    - shard-tglb:         [INCOMPLETE][43] ([i915#409]) -> [PASS][44]
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-tglb1/igt@prime_vgem@sync@bcs0.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-tglb5/igt@prime_vgem@sync@bcs0.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc3co-vpb-simulation:
    - shard-iclb:         [SKIP][45] ([i915#588]) -> [SKIP][46] ([i915#658])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-iclb2/igt@i915_pm_dc@dc3co-vpb-simulation.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-iclb8/igt@i915_pm_dc@dc3co-vpb-simulation.html

  * igt@i915_pm_rpm@system-suspend:
    - shard-snb:          [SKIP][47] ([fdo#109271]) -> [INCOMPLETE][48] ([i915#82])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-snb4/igt@i915_pm_rpm@system-suspend.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-snb6/igt@i915_pm_rpm@system-suspend.html

  * igt@kms_plane_alpha_blend@pipe-a-alpha-7efc:
    - shard-apl:          [FAIL][49] ([fdo#108145] / [i915#265] / [i915#95]) -> [FAIL][50] ([fdo#108145] / [i915#265])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8300/shard-apl2/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#128]: https://gitlab.freedesktop.org/drm/intel/issues/128
  [i915#1519]: https://gitlab.freedesktop.org/drm/intel/issues/1519
  [i915#1531]: https://gitlab.freedesktop.org/drm/intel/issues/1531
  [i915#1532]: https://gitlab.freedesktop.org/drm/intel/issues/1532
  [i915#1542]: https://gitlab.freedesktop.org/drm/intel/issues/1542
  [i915#173]: https://gitlab.freedesktop.org/drm/intel/issues/173
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#34]: https://gitlab.freedesktop.org/drm/intel/issues/34
  [i915#409]: https://gitlab.freedesktop.org/drm/intel/issues/409
  [i915#43]: https://gitlab.freedesktop.org/drm/intel/issues/43
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#52]: https://gitlab.freedesktop.org/drm/intel/issues/52
  [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
  [i915#588]: https://gitlab.freedesktop.org/drm/intel/issues/588
  [i915#61]: https://gitlab.freedesktop.org/drm/intel/issues/61
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#69]: https://gitlab.freedesktop.org/drm/intel/issues/69
  [i915#82]: https://gitlab.freedesktop.org/drm/intel/issues/82
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95


Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8300 -> Patchwork_17306

  CI-20190529: 20190529
  CI_DRM_8300: 02f5d84db84f885cba1f8d258b23e9ea0f2d922e @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5590: c7b4a43942be32245b1c00b5b4a38401d8ca6e0d @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17306: e9b16b5095f854deeaee59534fa4047771ace61e @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17306/index.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-04-15 10:11 ` [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2020-08-26  1:11   ` Lucas De Marchi
  2020-09-01 15:09     ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: Lucas De Marchi @ 2020-08-26  1:11 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel Graphics, Chris Wilson

Hi,

Any update on this? It now conflicts in a few places so it needs a rebase.

Lucas De Marchi

On Wed, Apr 15, 2020 at 3:11 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> Expose a list of clients with open file handles in sysfs.
>
> This will be a basis for a top-like utility showing per-client and per-
> engine GPU load.
>
> Currently we only expose each client's pid and name under opaque numbered
> directories in /sys/class/drm/card0/clients/.
>
> For instance:
>
> /sys/class/drm/card0/clients/3/name: Xorg
> /sys/class/drm/card0/clients/3/pid: 5664
>
> v2:
>  Chris Wilson:
>  * Enclose new members into dedicated structs.
>  * Protect against failed sysfs registration.
>
> v3:
>  * sysfs_attr_init.
>
> v4:
>  * Fix for internal clients.
>
> v5:
>  * Use cyclic ida for client id. (Chris)
>  * Do not leak pid reference. (Chris)
>  * Tidy code with some locals.
>
> v6:
>  * Use xa_alloc_cyclic to simplify locking. (Chris)
>  * No need to unregister individual sysfs files. (Chris)
>  * Rebase on top of fpriv kref.
>  * Track client closed status and reflect in sysfs.
>
> v7:
>  * Make drm_client more standalone concept.
>
> v8:
>  * Simplify sysfs show. (Chris)
>  * Always track name and pid.
>
> v9:
>  * Fix cyclic id assignment.
>
> v10:
>  * No need for a mutex around xa_alloc_cyclic.
>  * Refactor sysfs into own function.
>  * Unregister sysfs before freeing pid and name.
>  * Move clients setup into own function.
>
> v11:
>  * Call clients init directly from driver init. (Chris)
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/Makefile          |   3 +-
>  drivers/gpu/drm/i915/i915_drm_client.c | 179 +++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_drm_client.h |  64 +++++++++
>  drivers/gpu/drm/i915/i915_drv.c        |   3 +
>  drivers/gpu/drm/i915/i915_drv.h        |   5 +
>  drivers/gpu/drm/i915/i915_gem.c        |  25 +++-
>  drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
>  7 files changed, 283 insertions(+), 4 deletions(-)
>  create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
>  create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
>
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index 44c506b7e117..b30f3d51c66a 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -33,7 +33,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
>  # Please keep these build lists sorted!
>
>  # core driver code
> -i915-y += i915_drv.o \
> +i915-y += i915_drm_client.o \
> +         i915_drv.o \
>           i915_irq.o \
>           i915_getparam.o \
>           i915_params.o \
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
> new file mode 100644
> index 000000000000..2067fbcdb795
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> @@ -0,0 +1,179 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2020 Intel Corporation
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "i915_drm_client.h"
> +#include "i915_gem.h"
> +#include "i915_utils.h"
> +
> +void i915_drm_clients_init(struct i915_drm_clients *clients)
> +{
> +       clients->next_id = 0;
> +       xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC);
> +}
> +
> +static ssize_t
> +show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
> +{
> +       struct i915_drm_client *client =
> +               container_of(attr, typeof(*client), attr.name);
> +
> +       return snprintf(buf, PAGE_SIZE,
> +                       READ_ONCE(client->closed) ? "<%s>" : "%s",
> +                       client->name);
> +}
> +
> +static ssize_t
> +show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
> +{
> +       struct i915_drm_client *client =
> +               container_of(attr, typeof(*client), attr.pid);
> +
> +       return snprintf(buf, PAGE_SIZE,
> +                       READ_ONCE(client->closed) ? "<%u>" : "%u",
> +                       pid_nr(client->pid));
> +}
> +
> +static int
> +__client_register_sysfs(struct i915_drm_client *client)
> +{
> +       const struct {
> +               const char *name;
> +               struct device_attribute *attr;
> +               ssize_t (*show)(struct device *dev,
> +                               struct device_attribute *attr,
> +                               char *buf);
> +       } files[] = {
> +               { "name", &client->attr.name, show_client_name },
> +               { "pid", &client->attr.pid, show_client_pid },
> +       };
> +       unsigned int i;
> +       char buf[16];
> +       int ret;
> +
> +       ret = scnprintf(buf, sizeof(buf), "%u", client->id);
> +       if (ret == sizeof(buf))
> +               return -EINVAL;
> +
> +       client->root = kobject_create_and_add(buf, client->clients->root);
> +       if (!client->root)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < ARRAY_SIZE(files); i++) {
> +               struct device_attribute *attr = files[i].attr;
> +
> +               sysfs_attr_init(&attr->attr);
> +
> +               attr->attr.name = files[i].name;
> +               attr->attr.mode = 0444;
> +               attr->show = files[i].show;
> +
> +               ret = sysfs_create_file(client->root, (struct attribute *)attr);
> +               if (ret)
> +                       break;
> +       }
> +
> +       if (ret)
> +               kobject_put(client->root);
> +
> +       return ret;
> +}
> +
> +static void __client_unregister_sysfs(struct i915_drm_client *client)
> +{
> +       kobject_put(fetch_and_zero(&client->root));
> +}
> +
> +static int
> +__i915_drm_client_register(struct i915_drm_client *client,
> +                          struct task_struct *task)
> +{
> +       struct i915_drm_clients *clients = client->clients;
> +       char *name;
> +       int ret;
> +
> +       name = kstrdup(task->comm, GFP_KERNEL);
> +       if (!name)
> +               return -ENOMEM;
> +
> +       client->pid = get_task_pid(task, PIDTYPE_PID);
> +       client->name = name;
> +
> +       if (!clients->root)
> +               return 0; /* intel_fbdev_init registers a client before sysfs */
> +
> +       ret = __client_register_sysfs(client);
> +       if (ret)
> +               goto err_sysfs;
> +
> +       return 0;
> +
> +err_sysfs:
> +       put_pid(client->pid);
> +       kfree(client->name);
> +
> +       return ret;
> +}
> +
> +static void
> +__i915_drm_client_unregister(struct i915_drm_client *client)
> +{
> +       __client_unregister_sysfs(client);
> +
> +       put_pid(fetch_and_zero(&client->pid));
> +       kfree(fetch_and_zero(&client->name));
> +}
> +
> +struct i915_drm_client *
> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
> +{
> +       struct i915_drm_client *client;
> +       int ret;
> +
> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
> +       if (!client)
> +               return ERR_PTR(-ENOMEM);
> +
> +       kref_init(&client->kref);
> +       client->clients = clients;
> +
> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
> +       if (ret)
> +               goto err_id;
> +
> +       ret = __i915_drm_client_register(client, task);
> +       if (ret)
> +               goto err_register;
> +
> +       return client;
> +
> +err_register:
> +       xa_erase(&clients->xarray, client->id);
> +err_id:
> +       kfree(client);
> +
> +       return ERR_PTR(ret);
> +}
> +
> +void __i915_drm_client_free(struct kref *kref)
> +{
> +       struct i915_drm_client *client =
> +               container_of(kref, typeof(*client), kref);
> +
> +       __i915_drm_client_unregister(client);
> +       xa_erase(&client->clients->xarray, client->id);
> +       kfree_rcu(client, rcu);
> +}
> +
> +void i915_drm_client_close(struct i915_drm_client *client)
> +{
> +       GEM_BUG_ON(READ_ONCE(client->closed));
> +       WRITE_ONCE(client->closed, true);
> +       i915_drm_client_put(client);
> +}
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
> new file mode 100644
> index 000000000000..af6998c74d4c
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
> @@ -0,0 +1,64 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2020 Intel Corporation
> + */
> +
> +#ifndef __I915_DRM_CLIENT_H__
> +#define __I915_DRM_CLIENT_H__
> +
> +#include <linux/device.h>
> +#include <linux/kobject.h>
> +#include <linux/kref.h>
> +#include <linux/pid.h>
> +#include <linux/rcupdate.h>
> +#include <linux/sched.h>
> +#include <linux/xarray.h>
> +
> +struct i915_drm_clients {
> +       struct xarray xarray;
> +       u32 next_id;
> +
> +       struct kobject *root;
> +};
> +
> +struct i915_drm_client {
> +       struct kref kref;
> +
> +       struct rcu_head rcu;
> +
> +       unsigned int id;
> +       struct pid *pid;
> +       char *name;
> +       bool closed;
> +
> +       struct i915_drm_clients *clients;
> +
> +       struct kobject *root;
> +       struct {
> +               struct device_attribute pid;
> +               struct device_attribute name;
> +       } attr;
> +};
> +
> +void i915_drm_clients_init(struct i915_drm_clients *clients);
> +
> +static inline struct i915_drm_client *
> +i915_drm_client_get(struct i915_drm_client *client)
> +{
> +       kref_get(&client->kref);
> +       return client;
> +}
> +
> +void __i915_drm_client_free(struct kref *kref);
> +
> +static inline void i915_drm_client_put(struct i915_drm_client *client)
> +{
> +       kref_put(&client->kref, __i915_drm_client_free);
> +}
> +
> +void i915_drm_client_close(struct i915_drm_client *client);
> +
> +struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
> +                                           struct task_struct *task);
> +
> +#endif /* !__I915_DRM_CLIENT_H__ */
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 641f5e03b661..dac84b17d23d 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -70,6 +70,7 @@
>  #include "gt/intel_rc6.h"
>
>  #include "i915_debugfs.h"
> +#include "i915_drm_client.h"
>  #include "i915_drv.h"
>  #include "i915_ioc32.h"
>  #include "i915_irq.h"
> @@ -456,6 +457,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
>
>         i915_gem_init_early(dev_priv);
>
> +       i915_drm_clients_init(&dev_priv->clients);
> +
>         /* This must be called before any calls to HAS_PCH_* */
>         intel_detect_pch(dev_priv);
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index e9ee4daa9320..f9f0c3ba6e4a 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -91,6 +91,7 @@
>  #include "intel_wakeref.h"
>  #include "intel_wopcm.h"
>
> +#include "i915_drm_client.h"
>  #include "i915_gem.h"
>  #include "i915_gem_gtt.h"
>  #include "i915_gpu_error.h"
> @@ -226,6 +227,8 @@ struct drm_i915_file_private {
>         /** ban_score: Accumulated score of all ctx bans and fast hangs. */
>         atomic_t ban_score;
>         unsigned long hang_timestamp;
> +
> +       struct i915_drm_client *client;
>  };
>
>  /* Interface history:
> @@ -1201,6 +1204,8 @@ struct drm_i915_private {
>
>         struct i915_pmu pmu;
>
> +       struct i915_drm_clients clients;
> +
>         struct i915_hdcp_comp_master *hdcp_master;
>         bool hdcp_comp_added;
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0cbcb9f54e7d..5a0b5fae8b92 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1234,6 +1234,8 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
>         GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
>         GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
>         drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
> +       drm_WARN_ON(&dev_priv->drm, !xa_empty(&dev_priv->clients.xarray));
> +       xa_destroy(&dev_priv->clients.xarray);
>  }
>
>  int i915_gem_freeze(struct drm_i915_private *dev_priv)
> @@ -1288,6 +1290,8 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
>         struct drm_i915_file_private *file_priv = file->driver_priv;
>         struct i915_request *request;
>
> +       i915_drm_client_close(file_priv->client);
> +
>         /* Clean up our request list when the client is going away, so that
>          * later retire_requests won't dereference our soon-to-be-gone
>          * file_priv.
> @@ -1301,17 +1305,25 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
>  int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
>  {
>         struct drm_i915_file_private *file_priv;
> -       int ret;
> +       struct i915_drm_client *client;
> +       int ret = -ENOMEM;
>
>         DRM_DEBUG("\n");
>
>         file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>         if (!file_priv)
> -               return -ENOMEM;
> +               goto err_alloc;
> +
> +       client = i915_drm_client_add(&i915->clients, current);
> +       if (IS_ERR(client)) {
> +               ret = PTR_ERR(client);
> +               goto err_client;
> +       }
>
>         file->driver_priv = file_priv;
>         file_priv->dev_priv = i915;
>         file_priv->file = file;
> +       file_priv->client = client;
>
>         spin_lock_init(&file_priv->mm.lock);
>         INIT_LIST_HEAD(&file_priv->mm.request_list);
> @@ -1321,8 +1333,15 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
>
>         ret = i915_gem_context_open(i915, file);
>         if (ret)
> -               kfree(file_priv);
> +               goto err_context;
> +
> +       return 0;
>
> +err_context:
> +       i915_drm_client_close(client);
> +err_client:
> +       kfree(file_priv);
> +err_alloc:
>         return ret;
>  }
>
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> index 45d32ef42787..b7d4a6d2dd5c 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
>         struct device *kdev = dev_priv->drm.primary->kdev;
>         int ret;
>
> +       dev_priv->clients.root =
> +               kobject_create_and_add("clients", &kdev->kobj);
> +       if (!dev_priv->clients.root)
> +               DRM_ERROR("Per-client sysfs setup failed\n");
> +
>  #ifdef CONFIG_PM
>         if (HAS_RC6(dev_priv)) {
>                 ret = sysfs_merge_group(&kdev->kobj,
> @@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
>         sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
>         sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
>  #endif
> +
> +       if (dev_priv->clients.root)
> +               kobject_put(dev_priv->clients.root);
>  }
> --
> 2.20.1
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-08-26  1:11   ` Lucas De Marchi
@ 2020-09-01 15:09     ` Tvrtko Ursulin
  2020-09-01 15:25       ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-09-01 15:09 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: Intel Graphics, Chris Wilson


Hi,

On 26/08/2020 02:11, Lucas De Marchi wrote:
> Hi,
> 
> Any update on this? It now conflicts in a few places so it needs a rebase.

I don't see any previous email on the topic - what kind of update are you 
looking for, and where? A rebase against drm-tip so you can pull it in? 
A rebase against some internal in-progress branch?

Regards,

Tvrtko

> Lucas De Marchi
> 
> On Wed, Apr 15, 2020 at 3:11 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Expose a list of clients with open file handles in sysfs.
>>
>> This will be a basis for a top-like utility showing per-client and per-
>> engine GPU load.
>>
>> Currently we only expose each client's pid and name under opaque numbered
>> directories in /sys/class/drm/card0/clients/.
>>
>> For instance:
>>
>> /sys/class/drm/card0/clients/3/name: Xorg
>> /sys/class/drm/card0/clients/3/pid: 5664
>>
>> v2:
>>   Chris Wilson:
>>   * Enclose new members into dedicated structs.
>>   * Protect against failed sysfs registration.
>>
>> v3:
>>   * sysfs_attr_init.
>>
>> v4:
>>   * Fix for internal clients.
>>
>> v5:
>>   * Use cyclic ida for client id. (Chris)
>>   * Do not leak pid reference. (Chris)
>>   * Tidy code with some locals.
>>
>> v6:
>>   * Use xa_alloc_cyclic to simplify locking. (Chris)
>>   * No need to unregister individual sysfs files. (Chris)
>>   * Rebase on top of fpriv kref.
>>   * Track client closed status and reflect in sysfs.
>>
>> v7:
>>   * Make drm_client more standalone concept.
>>
>> v8:
>>   * Simplify sysfs show. (Chris)
>>   * Always track name and pid.
>>
>> v9:
>>   * Fix cyclic id assignment.
>>
>> v10:
>>   * No need for a mutex around xa_alloc_cyclic.
>>   * Refactor sysfs into own function.
>>   * Unregister sysfs before freeing pid and name.
>>   * Move clients setup into own function.
>>
>> v11:
>>   * Call clients init directly from driver init. (Chris)
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
>> ---
>>   drivers/gpu/drm/i915/Makefile          |   3 +-
>>   drivers/gpu/drm/i915/i915_drm_client.c | 179 +++++++++++++++++++++++++
>>   drivers/gpu/drm/i915/i915_drm_client.h |  64 +++++++++
>>   drivers/gpu/drm/i915/i915_drv.c        |   3 +
>>   drivers/gpu/drm/i915/i915_drv.h        |   5 +
>>   drivers/gpu/drm/i915/i915_gem.c        |  25 +++-
>>   drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
>>   7 files changed, 283 insertions(+), 4 deletions(-)
>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
>>
>> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
>> index 44c506b7e117..b30f3d51c66a 100644
>> --- a/drivers/gpu/drm/i915/Makefile
>> +++ b/drivers/gpu/drm/i915/Makefile
>> @@ -33,7 +33,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
>>   # Please keep these build lists sorted!
>>
>>   # core driver code
>> -i915-y += i915_drv.o \
>> +i915-y += i915_drm_client.o \
>> +         i915_drv.o \
>>            i915_irq.o \
>>            i915_getparam.o \
>>            i915_params.o \
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
>> new file mode 100644
>> index 000000000000..2067fbcdb795
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>> @@ -0,0 +1,179 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2020 Intel Corporation
>> + */
>> +
>> +#include <linux/kernel.h>
>> +#include <linux/slab.h>
>> +#include <linux/types.h>
>> +
>> +#include "i915_drm_client.h"
>> +#include "i915_gem.h"
>> +#include "i915_utils.h"
>> +
>> +void i915_drm_clients_init(struct i915_drm_clients *clients)
>> +{
>> +       clients->next_id = 0;
>> +       xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC);
>> +}
>> +
>> +static ssize_t
>> +show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
>> +{
>> +       struct i915_drm_client *client =
>> +               container_of(attr, typeof(*client), attr.name);
>> +
>> +       return snprintf(buf, PAGE_SIZE,
>> +                       READ_ONCE(client->closed) ? "<%s>" : "%s",
>> +                       client->name);
>> +}
>> +
>> +static ssize_t
>> +show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
>> +{
>> +       struct i915_drm_client *client =
>> +               container_of(attr, typeof(*client), attr.pid);
>> +
>> +       return snprintf(buf, PAGE_SIZE,
>> +                       READ_ONCE(client->closed) ? "<%u>" : "%u",
>> +                       pid_nr(client->pid));
>> +}
>> +
>> +static int
>> +__client_register_sysfs(struct i915_drm_client *client)
>> +{
>> +       const struct {
>> +               const char *name;
>> +               struct device_attribute *attr;
>> +               ssize_t (*show)(struct device *dev,
>> +                               struct device_attribute *attr,
>> +                               char *buf);
>> +       } files[] = {
>> +               { "name", &client->attr.name, show_client_name },
>> +               { "pid", &client->attr.pid, show_client_pid },
>> +       };
>> +       unsigned int i;
>> +       char buf[16];
>> +       int ret;
>> +
>> +       ret = scnprintf(buf, sizeof(buf), "%u", client->id);
>> +       if (ret == sizeof(buf))
>> +               return -EINVAL;
>> +
>> +       client->root = kobject_create_and_add(buf, client->clients->root);
>> +       if (!client->root)
>> +               return -ENOMEM;
>> +
>> +       for (i = 0; i < ARRAY_SIZE(files); i++) {
>> +               struct device_attribute *attr = files[i].attr;
>> +
>> +               sysfs_attr_init(&attr->attr);
>> +
>> +               attr->attr.name = files[i].name;
>> +               attr->attr.mode = 0444;
>> +               attr->show = files[i].show;
>> +
>> +               ret = sysfs_create_file(client->root, (struct attribute *)attr);
>> +               if (ret)
>> +                       break;
>> +       }
>> +
>> +       if (ret)
>> +               kobject_put(client->root);
>> +
>> +       return ret;
>> +}
>> +
>> +static void __client_unregister_sysfs(struct i915_drm_client *client)
>> +{
>> +       kobject_put(fetch_and_zero(&client->root));
>> +}
>> +
>> +static int
>> +__i915_drm_client_register(struct i915_drm_client *client,
>> +                          struct task_struct *task)
>> +{
>> +       struct i915_drm_clients *clients = client->clients;
>> +       char *name;
>> +       int ret;
>> +
>> +       name = kstrdup(task->comm, GFP_KERNEL);
>> +       if (!name)
>> +               return -ENOMEM;
>> +
>> +       client->pid = get_task_pid(task, PIDTYPE_PID);
>> +       client->name = name;
>> +
>> +       if (!clients->root)
>> +               return 0; /* intel_fbdev_init registers a client before sysfs */
>> +
>> +       ret = __client_register_sysfs(client);
>> +       if (ret)
>> +               goto err_sysfs;
>> +
>> +       return 0;
>> +
>> +err_sysfs:
>> +       put_pid(client->pid);
>> +       kfree(client->name);
>> +
>> +       return ret;
>> +}
>> +
>> +static void
>> +__i915_drm_client_unregister(struct i915_drm_client *client)
>> +{
>> +       __client_unregister_sysfs(client);
>> +
>> +       put_pid(fetch_and_zero(&client->pid));
>> +       kfree(fetch_and_zero(&client->name));
>> +}
>> +
>> +struct i915_drm_client *
>> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
>> +{
>> +       struct i915_drm_client *client;
>> +       int ret;
>> +
>> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
>> +       if (!client)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       kref_init(&client->kref);
>> +       client->clients = clients;
>> +
>> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
>> +       if (ret)
>> +               goto err_id;
>> +
>> +       ret = __i915_drm_client_register(client, task);
>> +       if (ret)
>> +               goto err_register;
>> +
>> +       return client;
>> +
>> +err_register:
>> +       xa_erase(&clients->xarray, client->id);
>> +err_id:
>> +       kfree(client);
>> +
>> +       return ERR_PTR(ret);
>> +}
>> +
>> +void __i915_drm_client_free(struct kref *kref)
>> +{
>> +       struct i915_drm_client *client =
>> +               container_of(kref, typeof(*client), kref);
>> +
>> +       __i915_drm_client_unregister(client);
>> +       xa_erase(&client->clients->xarray, client->id);
>> +       kfree_rcu(client, rcu);
>> +}
>> +
>> +void i915_drm_client_close(struct i915_drm_client *client)
>> +{
>> +       GEM_BUG_ON(READ_ONCE(client->closed));
>> +       WRITE_ONCE(client->closed, true);
>> +       i915_drm_client_put(client);
>> +}
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
>> new file mode 100644
>> index 000000000000..af6998c74d4c
>> --- /dev/null
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>> @@ -0,0 +1,64 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2020 Intel Corporation
>> + */
>> +
>> +#ifndef __I915_DRM_CLIENT_H__
>> +#define __I915_DRM_CLIENT_H__
>> +
>> +#include <linux/device.h>
>> +#include <linux/kobject.h>
>> +#include <linux/kref.h>
>> +#include <linux/pid.h>
>> +#include <linux/rcupdate.h>
>> +#include <linux/sched.h>
>> +#include <linux/xarray.h>
>> +
>> +struct i915_drm_clients {
>> +       struct xarray xarray;
>> +       u32 next_id;
>> +
>> +       struct kobject *root;
>> +};
>> +
>> +struct i915_drm_client {
>> +       struct kref kref;
>> +
>> +       struct rcu_head rcu;
>> +
>> +       unsigned int id;
>> +       struct pid *pid;
>> +       char *name;
>> +       bool closed;
>> +
>> +       struct i915_drm_clients *clients;
>> +
>> +       struct kobject *root;
>> +       struct {
>> +               struct device_attribute pid;
>> +               struct device_attribute name;
>> +       } attr;
>> +};
>> +
>> +void i915_drm_clients_init(struct i915_drm_clients *clients);
>> +
>> +static inline struct i915_drm_client *
>> +i915_drm_client_get(struct i915_drm_client *client)
>> +{
>> +       kref_get(&client->kref);
>> +       return client;
>> +}
>> +
>> +void __i915_drm_client_free(struct kref *kref);
>> +
>> +static inline void i915_drm_client_put(struct i915_drm_client *client)
>> +{
>> +       kref_put(&client->kref, __i915_drm_client_free);
>> +}
>> +
>> +void i915_drm_client_close(struct i915_drm_client *client);
>> +
>> +struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
>> +                                           struct task_struct *task);
>> +
>> +#endif /* !__I915_DRM_CLIENT_H__ */
>> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
>> index 641f5e03b661..dac84b17d23d 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.c
>> +++ b/drivers/gpu/drm/i915/i915_drv.c
>> @@ -70,6 +70,7 @@
>>   #include "gt/intel_rc6.h"
>>
>>   #include "i915_debugfs.h"
>> +#include "i915_drm_client.h"
>>   #include "i915_drv.h"
>>   #include "i915_ioc32.h"
>>   #include "i915_irq.h"
>> @@ -456,6 +457,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
>>
>>          i915_gem_init_early(dev_priv);
>>
>> +       i915_drm_clients_init(&dev_priv->clients);
>> +
>>          /* This must be called before any calls to HAS_PCH_* */
>>          intel_detect_pch(dev_priv);
>>
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index e9ee4daa9320..f9f0c3ba6e4a 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -91,6 +91,7 @@
>>   #include "intel_wakeref.h"
>>   #include "intel_wopcm.h"
>>
>> +#include "i915_drm_client.h"
>>   #include "i915_gem.h"
>>   #include "i915_gem_gtt.h"
>>   #include "i915_gpu_error.h"
>> @@ -226,6 +227,8 @@ struct drm_i915_file_private {
>>          /** ban_score: Accumulated score of all ctx bans and fast hangs. */
>>          atomic_t ban_score;
>>          unsigned long hang_timestamp;
>> +
>> +       struct i915_drm_client *client;
>>   };
>>
>>   /* Interface history:
>> @@ -1201,6 +1204,8 @@ struct drm_i915_private {
>>
>>          struct i915_pmu pmu;
>>
>> +       struct i915_drm_clients clients;
>> +
>>          struct i915_hdcp_comp_master *hdcp_master;
>>          bool hdcp_comp_added;
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> index 0cbcb9f54e7d..5a0b5fae8b92 100644
>> --- a/drivers/gpu/drm/i915/i915_gem.c
>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>> @@ -1234,6 +1234,8 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
>>          GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
>>          GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
>>          drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
>> +       drm_WARN_ON(&dev_priv->drm, !xa_empty(&dev_priv->clients.xarray));
>> +       xa_destroy(&dev_priv->clients.xarray);
>>   }
>>
>>   int i915_gem_freeze(struct drm_i915_private *dev_priv)
>> @@ -1288,6 +1290,8 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
>>          struct drm_i915_file_private *file_priv = file->driver_priv;
>>          struct i915_request *request;
>>
>> +       i915_drm_client_close(file_priv->client);
>> +
>>          /* Clean up our request list when the client is going away, so that
>>           * later retire_requests won't dereference our soon-to-be-gone
>>           * file_priv.
>> @@ -1301,17 +1305,25 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
>>   int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
>>   {
>>          struct drm_i915_file_private *file_priv;
>> -       int ret;
>> +       struct i915_drm_client *client;
>> +       int ret = -ENOMEM;
>>
>>          DRM_DEBUG("\n");
>>
>>          file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>>          if (!file_priv)
>> -               return -ENOMEM;
>> +               goto err_alloc;
>> +
>> +       client = i915_drm_client_add(&i915->clients, current);
>> +       if (IS_ERR(client)) {
>> +               ret = PTR_ERR(client);
>> +               goto err_client;
>> +       }
>>
>>          file->driver_priv = file_priv;
>>          file_priv->dev_priv = i915;
>>          file_priv->file = file;
>> +       file_priv->client = client;
>>
>>          spin_lock_init(&file_priv->mm.lock);
>>          INIT_LIST_HEAD(&file_priv->mm.request_list);
>> @@ -1321,8 +1333,15 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
>>
>>          ret = i915_gem_context_open(i915, file);
>>          if (ret)
>> -               kfree(file_priv);
>> +               goto err_context;
>> +
>> +       return 0;
>>
>> +err_context:
>> +       i915_drm_client_close(client);
>> +err_client:
>> +       kfree(file_priv);
>> +err_alloc:
>>          return ret;
>>   }
>>
>> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
>> index 45d32ef42787..b7d4a6d2dd5c 100644
>> --- a/drivers/gpu/drm/i915/i915_sysfs.c
>> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
>> @@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
>>          struct device *kdev = dev_priv->drm.primary->kdev;
>>          int ret;
>>
>> +       dev_priv->clients.root =
>> +               kobject_create_and_add("clients", &kdev->kobj);
>> +       if (!dev_priv->clients.root)
>> +               DRM_ERROR("Per-client sysfs setup failed\n");
>> +
>>   #ifdef CONFIG_PM
>>          if (HAS_RC6(dev_priv)) {
>>                  ret = sysfs_merge_group(&kdev->kobj,
>> @@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
>>          sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
>>          sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
>>   #endif
>> +
>> +       if (dev_priv->clients.root)
>> +               kobject_put(dev_priv->clients.root);
>>   }
>> --
>> 2.20.1
>>
>> _______________________________________________
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-09-01 15:09     ` Tvrtko Ursulin
@ 2020-09-01 15:25       ` Tvrtko Ursulin
  2020-09-04  6:26         ` Lucas De Marchi
  0 siblings, 1 reply; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-09-01 15:25 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: Intel Graphics, Chris Wilson


On 01/09/2020 16:09, Tvrtko Ursulin wrote:
> 
> Hi,
> 
> On 26/08/2020 02:11, Lucas De Marchi wrote:
>> Hi,
>>
>> Any update on this? It now conflicts in a few places so it needs a 
>> rebase.
> 
> I don't see any previous email on the topic - what kind of update, where 
> and how, are you looking for? Rebase against drm-tip so you pull it in? 
> Rebase against some internal in progress branch?

Clearly you were after an update against drm-tip. :) The problem here was 
no userspace, but I can try to respin it.

Regards,

Tvrtko

> 
> Regards,
> 
> Tvrtko
> 
>> Lucas De Marchi
>>
>> On Wed, Apr 15, 2020 at 3:11 AM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> Expose a list of clients with open file handles in sysfs.
>>>
>>> This will be a basis for a top-like utility showing per-client and per-
>>> engine GPU load.
>>>
>>> Currently we only expose each client's pid and name under opaque 
>>> numbered
>>> directories in /sys/class/drm/card0/clients/.
>>>
>>> For instance:
>>>
>>> /sys/class/drm/card0/clients/3/name: Xorg
>>> /sys/class/drm/card0/clients/3/pid: 5664
>>>
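
For illustration only, a minimal userspace sketch of how a top-like tool could
walk this layout. It assumes card0 and just the name/pid attributes shown
above; the read_attr() helper and hard-coded paths are hypothetical, not part
of the series:

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: read one small sysfs attribute into buf. */
static int read_attr(const char *client, const char *file, char *buf, size_t len)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/drm/card0/clients/%s/%s",
		 client, file);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f))
		buf[0] = '\0';
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0'; /* strip trailing newline */
	return 0;
}

int main(void)
{
	DIR *d = opendir("/sys/class/drm/card0/clients");
	struct dirent *de;
	char name[64], pid[16];

	if (!d)
		return 1;

	while ((de = readdir(d))) {
		if (de->d_name[0] == '.')
			continue;
		if (read_attr(de->d_name, "name", name, sizeof(name)) ||
		    read_attr(de->d_name, "pid", pid, sizeof(pid)))
			continue;
		printf("client %s: %s (pid %s)\n", de->d_name, name, pid);
	}

	closedir(d);
	return 0;
}
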
>>> v2:
>>>   Chris Wilson:
>>>   * Enclose new members into dedicated structs.
>>>   * Protect against failed sysfs registration.
>>>
>>> v3:
>>>   * sysfs_attr_init.
>>>
>>> v4:
>>>   * Fix for internal clients.
>>>
>>> v5:
>>>   * Use cyclic ida for client id. (Chris)
>>>   * Do not leak pid reference. (Chris)
>>>   * Tidy code with some locals.
>>>
>>> v6:
>>>   * Use xa_alloc_cyclic to simplify locking. (Chris)
>>>   * No need to unregister individual sysfs files. (Chris)
>>>   * Rebase on top of fpriv kref.
>>>   * Track client closed status and reflect in sysfs.
>>>
>>> v7:
>>>   * Make drm_client more standalone concept.
>>>
>>> v8:
>>>   * Simplify sysfs show. (Chris)
>>>   * Always track name and pid.
>>>
>>> v9:
>>>   * Fix cyclic id assignment.
>>>
>>> v10:
>>>   * No need for a mutex around xa_alloc_cyclic.
>>>   * Refactor sysfs into own function.
>>>   * Unregister sysfs before freeing pid and name.
>>>   * Move clients setup into own function.
>>>
>>> v11:
>>>   * Call clients init directly from driver init. (Chris)
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> ---
>>>   drivers/gpu/drm/i915/Makefile          |   3 +-
>>>   drivers/gpu/drm/i915/i915_drm_client.c | 179 +++++++++++++++++++++++++
>>>   drivers/gpu/drm/i915/i915_drm_client.h |  64 +++++++++
>>>   drivers/gpu/drm/i915/i915_drv.c        |   3 +
>>>   drivers/gpu/drm/i915/i915_drv.h        |   5 +
>>>   drivers/gpu/drm/i915/i915_gem.c        |  25 +++-
>>>   drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
>>>   7 files changed, 283 insertions(+), 4 deletions(-)
>>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
>>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
>>>
>>> diff --git a/drivers/gpu/drm/i915/Makefile 
>>> b/drivers/gpu/drm/i915/Makefile
>>> index 44c506b7e117..b30f3d51c66a 100644
>>> --- a/drivers/gpu/drm/i915/Makefile
>>> +++ b/drivers/gpu/drm/i915/Makefile
>>> @@ -33,7 +33,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
>>>   # Please keep these build lists sorted!
>>>
>>>   # core driver code
>>> -i915-y += i915_drv.o \
>>> +i915-y += i915_drm_client.o \
>>> +         i915_drv.o \
>>>            i915_irq.o \
>>>            i915_getparam.o \
>>>            i915_params.o \
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c 
>>> b/drivers/gpu/drm/i915/i915_drm_client.c
>>> new file mode 100644
>>> index 000000000000..2067fbcdb795
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>>> @@ -0,0 +1,179 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2020 Intel Corporation
>>> + */
>>> +
>>> +#include <linux/kernel.h>
>>> +#include <linux/slab.h>
>>> +#include <linux/types.h>
>>> +
>>> +#include "i915_drm_client.h"
>>> +#include "i915_gem.h"
>>> +#include "i915_utils.h"
>>> +
>>> +void i915_drm_clients_init(struct i915_drm_clients *clients)
>>> +{
>>> +       clients->next_id = 0;
>>> +       xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC);
>>> +}
>>> +
>>> +static ssize_t
>>> +show_client_name(struct device *kdev, struct device_attribute *attr, 
>>> char *buf)
>>> +{
>>> +       struct i915_drm_client *client =
>>> +               container_of(attr, typeof(*client), attr.name);
>>> +
>>> +       return snprintf(buf, PAGE_SIZE,
>>> +                       READ_ONCE(client->closed) ? "<%s>" : "%s",
>>> +                       client->name);
>>> +}
>>> +
>>> +static ssize_t
>>> +show_client_pid(struct device *kdev, struct device_attribute *attr, 
>>> char *buf)
>>> +{
>>> +       struct i915_drm_client *client =
>>> +               container_of(attr, typeof(*client), attr.pid);
>>> +
>>> +       return snprintf(buf, PAGE_SIZE,
>>> +                       READ_ONCE(client->closed) ? "<%u>" : "%u",
>>> +                       pid_nr(client->pid));
>>> +}
>>> +
>>> +static int
>>> +__client_register_sysfs(struct i915_drm_client *client)
>>> +{
>>> +       const struct {
>>> +               const char *name;
>>> +               struct device_attribute *attr;
>>> +               ssize_t (*show)(struct device *dev,
>>> +                               struct device_attribute *attr,
>>> +                               char *buf);
>>> +       } files[] = {
>>> +               { "name", &client->attr.name, show_client_name },
>>> +               { "pid", &client->attr.pid, show_client_pid },
>>> +       };
>>> +       unsigned int i;
>>> +       char buf[16];
>>> +       int ret;
>>> +
>>> +       ret = scnprintf(buf, sizeof(buf), "%u", client->id);
>>> +       if (ret == sizeof(buf))
>>> +               return -EINVAL;
>>> +
>>> +       client->root = kobject_create_and_add(buf, 
>>> client->clients->root);
>>> +       if (!client->root)
>>> +               return -ENOMEM;
>>> +
>>> +       for (i = 0; i < ARRAY_SIZE(files); i++) {
>>> +               struct device_attribute *attr = files[i].attr;
>>> +
>>> +               sysfs_attr_init(&attr->attr);
>>> +
>>> +               attr->attr.name = files[i].name;
>>> +               attr->attr.mode = 0444;
>>> +               attr->show = files[i].show;
>>> +
>>> +               ret = sysfs_create_file(client->root, (struct 
>>> attribute *)attr);
>>> +               if (ret)
>>> +                       break;
>>> +       }
>>> +
>>> +       if (ret)
>>> +               kobject_put(client->root);
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static void __client_unregister_sysfs(struct i915_drm_client *client)
>>> +{
>>> +       kobject_put(fetch_and_zero(&client->root));
>>> +}
>>> +
>>> +static int
>>> +__i915_drm_client_register(struct i915_drm_client *client,
>>> +                          struct task_struct *task)
>>> +{
>>> +       struct i915_drm_clients *clients = client->clients;
>>> +       char *name;
>>> +       int ret;
>>> +
>>> +       name = kstrdup(task->comm, GFP_KERNEL);
>>> +       if (!name)
>>> +               return -ENOMEM;
>>> +
>>> +       client->pid = get_task_pid(task, PIDTYPE_PID);
>>> +       client->name = name;
>>> +
>>> +       if (!clients->root)
>>> +               return 0; /* intel_fbdev_init registers a client 
>>> before sysfs */
>>> +
>>> +       ret = __client_register_sysfs(client);
>>> +       if (ret)
>>> +               goto err_sysfs;
>>> +
>>> +       return 0;
>>> +
>>> +err_sysfs:
>>> +       put_pid(client->pid);
>>> +       kfree(client->name);
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static void
>>> +__i915_drm_client_unregister(struct i915_drm_client *client)
>>> +{
>>> +       __client_unregister_sysfs(client);
>>> +
>>> +       put_pid(fetch_and_zero(&client->pid));
>>> +       kfree(fetch_and_zero(&client->name));
>>> +}
>>> +
>>> +struct i915_drm_client *
>>> +i915_drm_client_add(struct i915_drm_clients *clients, struct 
>>> task_struct *task)
>>> +{
>>> +       struct i915_drm_client *client;
>>> +       int ret;
>>> +
>>> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
>>> +       if (!client)
>>> +               return ERR_PTR(-ENOMEM);
>>> +
>>> +       kref_init(&client->kref);
>>> +       client->clients = clients;
>>> +
>>> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>>> +                             xa_limit_32b, &clients->next_id, 
>>> GFP_KERNEL);
>>> +       if (ret)
>>> +               goto err_id;
>>> +
>>> +       ret = __i915_drm_client_register(client, task);
>>> +       if (ret)
>>> +               goto err_register;
>>> +
>>> +       return client;
>>> +
>>> +err_register:
>>> +       xa_erase(&clients->xarray, client->id);
>>> +err_id:
>>> +       kfree(client);
>>> +
>>> +       return ERR_PTR(ret);
>>> +}
>>> +
>>> +void __i915_drm_client_free(struct kref *kref)
>>> +{
>>> +       struct i915_drm_client *client =
>>> +               container_of(kref, typeof(*client), kref);
>>> +
>>> +       __i915_drm_client_unregister(client);
>>> +       xa_erase(&client->clients->xarray, client->id);
>>> +       kfree_rcu(client, rcu);
>>> +}
>>> +
>>> +void i915_drm_client_close(struct i915_drm_client *client)
>>> +{
>>> +       GEM_BUG_ON(READ_ONCE(client->closed));
>>> +       WRITE_ONCE(client->closed, true);
>>> +       i915_drm_client_put(client);
>>> +}
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h 
>>> b/drivers/gpu/drm/i915/i915_drm_client.h
>>> new file mode 100644
>>> index 000000000000..af6998c74d4c
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>>> @@ -0,0 +1,64 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2020 Intel Corporation
>>> + */
>>> +
>>> +#ifndef __I915_DRM_CLIENT_H__
>>> +#define __I915_DRM_CLIENT_H__
>>> +
>>> +#include <linux/device.h>
>>> +#include <linux/kobject.h>
>>> +#include <linux/kref.h>
>>> +#include <linux/pid.h>
>>> +#include <linux/rcupdate.h>
>>> +#include <linux/sched.h>
>>> +#include <linux/xarray.h>
>>> +
>>> +struct i915_drm_clients {
>>> +       struct xarray xarray;
>>> +       u32 next_id;
>>> +
>>> +       struct kobject *root;
>>> +};
>>> +
>>> +struct i915_drm_client {
>>> +       struct kref kref;
>>> +
>>> +       struct rcu_head rcu;
>>> +
>>> +       unsigned int id;
>>> +       struct pid *pid;
>>> +       char *name;
>>> +       bool closed;
>>> +
>>> +       struct i915_drm_clients *clients;
>>> +
>>> +       struct kobject *root;
>>> +       struct {
>>> +               struct device_attribute pid;
>>> +               struct device_attribute name;
>>> +       } attr;
>>> +};
>>> +
>>> +void i915_drm_clients_init(struct i915_drm_clients *clients);
>>> +
>>> +static inline struct i915_drm_client *
>>> +i915_drm_client_get(struct i915_drm_client *client)
>>> +{
>>> +       kref_get(&client->kref);
>>> +       return client;
>>> +}
>>> +
>>> +void __i915_drm_client_free(struct kref *kref);
>>> +
>>> +static inline void i915_drm_client_put(struct i915_drm_client *client)
>>> +{
>>> +       kref_put(&client->kref, __i915_drm_client_free);
>>> +}
>>> +
>>> +void i915_drm_client_close(struct i915_drm_client *client);
>>> +
>>> +struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients 
>>> *clients,
>>> +                                           struct task_struct *task);
>>> +
>>> +#endif /* !__I915_DRM_CLIENT_H__ */
>>> diff --git a/drivers/gpu/drm/i915/i915_drv.c 
>>> b/drivers/gpu/drm/i915/i915_drv.c
>>> index 641f5e03b661..dac84b17d23d 100644
>>> --- a/drivers/gpu/drm/i915/i915_drv.c
>>> +++ b/drivers/gpu/drm/i915/i915_drv.c
>>> @@ -70,6 +70,7 @@
>>>   #include "gt/intel_rc6.h"
>>>
>>>   #include "i915_debugfs.h"
>>> +#include "i915_drm_client.h"
>>>   #include "i915_drv.h"
>>>   #include "i915_ioc32.h"
>>>   #include "i915_irq.h"
>>> @@ -456,6 +457,8 @@ static int i915_driver_early_probe(struct 
>>> drm_i915_private *dev_priv)
>>>
>>>          i915_gem_init_early(dev_priv);
>>>
>>> +       i915_drm_clients_init(&dev_priv->clients);
>>> +
>>>          /* This must be called before any calls to HAS_PCH_* */
>>>          intel_detect_pch(dev_priv);
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_drv.h 
>>> b/drivers/gpu/drm/i915/i915_drv.h
>>> index e9ee4daa9320..f9f0c3ba6e4a 100644
>>> --- a/drivers/gpu/drm/i915/i915_drv.h
>>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>>> @@ -91,6 +91,7 @@
>>>   #include "intel_wakeref.h"
>>>   #include "intel_wopcm.h"
>>>
>>> +#include "i915_drm_client.h"
>>>   #include "i915_gem.h"
>>>   #include "i915_gem_gtt.h"
>>>   #include "i915_gpu_error.h"
>>> @@ -226,6 +227,8 @@ struct drm_i915_file_private {
>>>          /** ban_score: Accumulated score of all ctx bans and fast 
>>> hangs. */
>>>          atomic_t ban_score;
>>>          unsigned long hang_timestamp;
>>> +
>>> +       struct i915_drm_client *client;
>>>   };
>>>
>>>   /* Interface history:
>>> @@ -1201,6 +1204,8 @@ struct drm_i915_private {
>>>
>>>          struct i915_pmu pmu;
>>>
>>> +       struct i915_drm_clients clients;
>>> +
>>>          struct i915_hdcp_comp_master *hdcp_master;
>>>          bool hdcp_comp_added;
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c 
>>> b/drivers/gpu/drm/i915/i915_gem.c
>>> index 0cbcb9f54e7d..5a0b5fae8b92 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>> @@ -1234,6 +1234,8 @@ void i915_gem_cleanup_early(struct 
>>> drm_i915_private *dev_priv)
>>>          GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
>>>          GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
>>>          drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
>>> +       drm_WARN_ON(&dev_priv->drm, 
>>> !xa_empty(&dev_priv->clients.xarray));
>>> +       xa_destroy(&dev_priv->clients.xarray);
>>>   }
>>>
>>>   int i915_gem_freeze(struct drm_i915_private *dev_priv)
>>> @@ -1288,6 +1290,8 @@ void i915_gem_release(struct drm_device *dev, 
>>> struct drm_file *file)
>>>          struct drm_i915_file_private *file_priv = file->driver_priv;
>>>          struct i915_request *request;
>>>
>>> +       i915_drm_client_close(file_priv->client);
>>> +
>>>          /* Clean up our request list when the client is going away, 
>>> so that
>>>           * later retire_requests won't dereference our soon-to-be-gone
>>>           * file_priv.
>>> @@ -1301,17 +1305,25 @@ void i915_gem_release(struct drm_device *dev, 
>>> struct drm_file *file)
>>>   int i915_gem_open(struct drm_i915_private *i915, struct drm_file 
>>> *file)
>>>   {
>>>          struct drm_i915_file_private *file_priv;
>>> -       int ret;
>>> +       struct i915_drm_client *client;
>>> +       int ret = -ENOMEM;
>>>
>>>          DRM_DEBUG("\n");
>>>
>>>          file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>>>          if (!file_priv)
>>> -               return -ENOMEM;
>>> +               goto err_alloc;
>>> +
>>> +       client = i915_drm_client_add(&i915->clients, current);
>>> +       if (IS_ERR(client)) {
>>> +               ret = PTR_ERR(client);
>>> +               goto err_client;
>>> +       }
>>>
>>>          file->driver_priv = file_priv;
>>>          file_priv->dev_priv = i915;
>>>          file_priv->file = file;
>>> +       file_priv->client = client;
>>>
>>>          spin_lock_init(&file_priv->mm.lock);
>>>          INIT_LIST_HEAD(&file_priv->mm.request_list);
>>> @@ -1321,8 +1333,15 @@ int i915_gem_open(struct drm_i915_private 
>>> *i915, struct drm_file *file)
>>>
>>>          ret = i915_gem_context_open(i915, file);
>>>          if (ret)
>>> -               kfree(file_priv);
>>> +               goto err_context;
>>> +
>>> +       return 0;
>>>
>>> +err_context:
>>> +       i915_drm_client_close(client);
>>> +err_client:
>>> +       kfree(file_priv);
>>> +err_alloc:
>>>          return ret;
>>>   }
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c 
>>> b/drivers/gpu/drm/i915/i915_sysfs.c
>>> index 45d32ef42787..b7d4a6d2dd5c 100644
>>> --- a/drivers/gpu/drm/i915/i915_sysfs.c
>>> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
>>> @@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private 
>>> *dev_priv)
>>>          struct device *kdev = dev_priv->drm.primary->kdev;
>>>          int ret;
>>>
>>> +       dev_priv->clients.root =
>>> +               kobject_create_and_add("clients", &kdev->kobj);
>>> +       if (!dev_priv->clients.root)
>>> +               DRM_ERROR("Per-client sysfs setup failed\n");
>>> +
>>>   #ifdef CONFIG_PM
>>>          if (HAS_RC6(dev_priv)) {
>>>                  ret = sysfs_merge_group(&kdev->kobj,
>>> @@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private 
>>> *dev_priv)
>>>          sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
>>>          sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
>>>   #endif
>>> +
>>> +       if (dev_priv->clients.root)
>>> +               kobject_put(dev_priv->clients.root);
>>>   }
>>> -- 
>>> 2.20.1
>>>
>>> _______________________________________________
>>> Intel-gfx mailing list
>>> Intel-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-09-01 15:25       ` Tvrtko Ursulin
@ 2020-09-04  6:26         ` Lucas De Marchi
  2020-09-04 13:03           ` Tvrtko Ursulin
  0 siblings, 1 reply; 26+ messages in thread
From: Lucas De Marchi @ 2020-09-04  6:26 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Intel Graphics, Chris Wilson

On Tue, Sep 1, 2020 at 8:25 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 01/09/2020 16:09, Tvrtko Ursulin wrote:
> >
> > Hi,
> >
> > On 26/08/2020 02:11, Lucas De Marchi wrote:
> >> Hi,
> >>
> >> Any update on this? It now conflicts in a few places so it needs a
> >> rebase.
> >
> > I don't see any previous email on the topic - what kind of update, where
> > and how, are you looking for? Rebase against drm-tip so you pull it in?
> > Rebase against some internal in progress branch?
>
> Clearly you were after an update against drm-tip. :) The problem here was
> no userspace, but I can try to respin it.

Yes, against drm-tip. I rebased it, but I think there is something
wrong with it.
If you can share your version I can do some tests.

thanks
Lucas De Marchi

>
> Regards,
>
> Tvrtko
>
> >
> > Regards,
> >
> > Tvrtko
> >
> >> Lucas De Marchi
> >>
> >> On Wed, Apr 15, 2020 at 3:11 AM Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>
> >>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>>
> >>> Expose a list of clients with open file handles in sysfs.
> >>>
> >>> This will be a basis for a top-like utility showing per-client and per-
> >>> engine GPU load.
> >>>
> >>> Currently we only expose each client's pid and name under opaque
> >>> numbered
> >>> directories in /sys/class/drm/card0/clients/.
> >>>
> >>> For instance:
> >>>
> >>> /sys/class/drm/card0/clients/3/name: Xorg
> >>> /sys/class/drm/card0/clients/3/pid: 5664
> >>>
> >>> v2:
> >>>   Chris Wilson:
> >>>   * Enclose new members into dedicated structs.
> >>>   * Protect against failed sysfs registration.
> >>>
> >>> v3:
> >>>   * sysfs_attr_init.
> >>>
> >>> v4:
> >>>   * Fix for internal clients.
> >>>
> >>> v5:
> >>>   * Use cyclic ida for client id. (Chris)
> >>>   * Do not leak pid reference. (Chris)
> >>>   * Tidy code with some locals.
> >>>
> >>> v6:
> >>>   * Use xa_alloc_cyclic to simplify locking. (Chris)
> >>>   * No need to unregister individual sysfs files. (Chris)
> >>>   * Rebase on top of fpriv kref.
> >>>   * Track client closed status and reflect in sysfs.
> >>>
> >>> v7:
> >>>   * Make drm_client more standalone concept.
> >>>
> >>> v8:
> >>>   * Simplify sysfs show. (Chris)
> >>>   * Always track name and pid.
> >>>
> >>> v9:
> >>>   * Fix cyclic id assignment.
> >>>
> >>> v10:
> >>>   * No need for a mutex around xa_alloc_cyclic.
> >>>   * Refactor sysfs into own function.
> >>>   * Unregister sysfs before freeing pid and name.
> >>>   * Move clients setup into own function.
> >>>
> >>> v11:
> >>>   * Call clients init directly from driver init. (Chris)
> >>>
> >>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> ---
> >>>   drivers/gpu/drm/i915/Makefile          |   3 +-
> >>>   drivers/gpu/drm/i915/i915_drm_client.c | 179 +++++++++++++++++++++++++
> >>>   drivers/gpu/drm/i915/i915_drm_client.h |  64 +++++++++
> >>>   drivers/gpu/drm/i915/i915_drv.c        |   3 +
> >>>   drivers/gpu/drm/i915/i915_drv.h        |   5 +
> >>>   drivers/gpu/drm/i915/i915_gem.c        |  25 +++-
> >>>   drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
> >>>   7 files changed, 283 insertions(+), 4 deletions(-)
> >>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
> >>>   create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/Makefile
> >>> b/drivers/gpu/drm/i915/Makefile
> >>> index 44c506b7e117..b30f3d51c66a 100644
> >>> --- a/drivers/gpu/drm/i915/Makefile
> >>> +++ b/drivers/gpu/drm/i915/Makefile
> >>> @@ -33,7 +33,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
> >>>   # Please keep these build lists sorted!
> >>>
> >>>   # core driver code
> >>> -i915-y += i915_drv.o \
> >>> +i915-y += i915_drm_client.o \
> >>> +         i915_drv.o \
> >>>            i915_irq.o \
> >>>            i915_getparam.o \
> >>>            i915_params.o \
> >>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
> >>> b/drivers/gpu/drm/i915/i915_drm_client.c
> >>> new file mode 100644
> >>> index 000000000000..2067fbcdb795
> >>> --- /dev/null
> >>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
> >>> @@ -0,0 +1,179 @@
> >>> +// SPDX-License-Identifier: MIT
> >>> +/*
> >>> + * Copyright © 2020 Intel Corporation
> >>> + */
> >>> +
> >>> +#include <linux/kernel.h>
> >>> +#include <linux/slab.h>
> >>> +#include <linux/types.h>
> >>> +
> >>> +#include "i915_drm_client.h"
> >>> +#include "i915_gem.h"
> >>> +#include "i915_utils.h"
> >>> +
> >>> +void i915_drm_clients_init(struct i915_drm_clients *clients)
> >>> +{
> >>> +       clients->next_id = 0;
> >>> +       xa_init_flags(&clients->xarray, XA_FLAGS_ALLOC);
> >>> +}
> >>> +
> >>> +static ssize_t
> >>> +show_client_name(struct device *kdev, struct device_attribute *attr,
> >>> char *buf)
> >>> +{
> >>> +       struct i915_drm_client *client =
> >>> +               container_of(attr, typeof(*client), attr.name);
> >>> +
> >>> +       return snprintf(buf, PAGE_SIZE,
> >>> +                       READ_ONCE(client->closed) ? "<%s>" : "%s",
> >>> +                       client->name);
> >>> +}
> >>> +
> >>> +static ssize_t
> >>> +show_client_pid(struct device *kdev, struct device_attribute *attr,
> >>> char *buf)
> >>> +{
> >>> +       struct i915_drm_client *client =
> >>> +               container_of(attr, typeof(*client), attr.pid);
> >>> +
> >>> +       return snprintf(buf, PAGE_SIZE,
> >>> +                       READ_ONCE(client->closed) ? "<%u>" : "%u",
> >>> +                       pid_nr(client->pid));
> >>> +}
> >>> +
> >>> +static int
> >>> +__client_register_sysfs(struct i915_drm_client *client)
> >>> +{
> >>> +       const struct {
> >>> +               const char *name;
> >>> +               struct device_attribute *attr;
> >>> +               ssize_t (*show)(struct device *dev,
> >>> +                               struct device_attribute *attr,
> >>> +                               char *buf);
> >>> +       } files[] = {
> >>> +               { "name", &client->attr.name, show_client_name },
> >>> +               { "pid", &client->attr.pid, show_client_pid },
> >>> +       };
> >>> +       unsigned int i;
> >>> +       char buf[16];
> >>> +       int ret;
> >>> +
> >>> +       ret = scnprintf(buf, sizeof(buf), "%u", client->id);
> >>> +       if (ret == sizeof(buf))
> >>> +               return -EINVAL;
> >>> +
> >>> +       client->root = kobject_create_and_add(buf,
> >>> client->clients->root);
> >>> +       if (!client->root)
> >>> +               return -ENOMEM;
> >>> +
> >>> +       for (i = 0; i < ARRAY_SIZE(files); i++) {
> >>> +               struct device_attribute *attr = files[i].attr;
> >>> +
> >>> +               sysfs_attr_init(&attr->attr);
> >>> +
> >>> +               attr->attr.name = files[i].name;
> >>> +               attr->attr.mode = 0444;
> >>> +               attr->show = files[i].show;
> >>> +
> >>> +               ret = sysfs_create_file(client->root, (struct
> >>> attribute *)attr);
> >>> +               if (ret)
> >>> +                       break;
> >>> +       }
> >>> +
> >>> +       if (ret)
> >>> +               kobject_put(client->root);
> >>> +
> >>> +       return ret;
> >>> +}
> >>> +
> >>> +static void __client_unregister_sysfs(struct i915_drm_client *client)
> >>> +{
> >>> +       kobject_put(fetch_and_zero(&client->root));
> >>> +}
> >>> +
> >>> +static int
> >>> +__i915_drm_client_register(struct i915_drm_client *client,
> >>> +                          struct task_struct *task)
> >>> +{
> >>> +       struct i915_drm_clients *clients = client->clients;
> >>> +       char *name;
> >>> +       int ret;
> >>> +
> >>> +       name = kstrdup(task->comm, GFP_KERNEL);
> >>> +       if (!name)
> >>> +               return -ENOMEM;
> >>> +
> >>> +       client->pid = get_task_pid(task, PIDTYPE_PID);
> >>> +       client->name = name;
> >>> +
> >>> +       if (!clients->root)
> >>> +               return 0; /* intel_fbdev_init registers a client
> >>> before sysfs */
> >>> +
> >>> +       ret = __client_register_sysfs(client);
> >>> +       if (ret)
> >>> +               goto err_sysfs;
> >>> +
> >>> +       return 0;
> >>> +
> >>> +err_sysfs:
> >>> +       put_pid(client->pid);
> >>> +       kfree(client->name);
> >>> +
> >>> +       return ret;
> >>> +}
> >>> +
> >>> +static void
> >>> +__i915_drm_client_unregister(struct i915_drm_client *client)
> >>> +{
> >>> +       __client_unregister_sysfs(client);
> >>> +
> >>> +       put_pid(fetch_and_zero(&client->pid));
> >>> +       kfree(fetch_and_zero(&client->name));
> >>> +}
> >>> +
> >>> +struct i915_drm_client *
> >>> +i915_drm_client_add(struct i915_drm_clients *clients, struct
> >>> task_struct *task)
> >>> +{
> >>> +       struct i915_drm_client *client;
> >>> +       int ret;
> >>> +
> >>> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
> >>> +       if (!client)
> >>> +               return ERR_PTR(-ENOMEM);
> >>> +
> >>> +       kref_init(&client->kref);
> >>> +       client->clients = clients;
> >>> +
> >>> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> >>> +                             xa_limit_32b, &clients->next_id,
> >>> GFP_KERNEL);
> >>> +       if (ret)
> >>> +               goto err_id;
> >>> +
> >>> +       ret = __i915_drm_client_register(client, task);
> >>> +       if (ret)
> >>> +               goto err_register;
> >>> +
> >>> +       return client;
> >>> +
> >>> +err_register:
> >>> +       xa_erase(&clients->xarray, client->id);
> >>> +err_id:
> >>> +       kfree(client);
> >>> +
> >>> +       return ERR_PTR(ret);
> >>> +}
> >>> +
> >>> +void __i915_drm_client_free(struct kref *kref)
> >>> +{
> >>> +       struct i915_drm_client *client =
> >>> +               container_of(kref, typeof(*client), kref);
> >>> +
> >>> +       __i915_drm_client_unregister(client);
> >>> +       xa_erase(&client->clients->xarray, client->id);
> >>> +       kfree_rcu(client, rcu);
> >>> +}
> >>> +
> >>> +void i915_drm_client_close(struct i915_drm_client *client)
> >>> +{
> >>> +       GEM_BUG_ON(READ_ONCE(client->closed));
> >>> +       WRITE_ONCE(client->closed, true);
> >>> +       i915_drm_client_put(client);
> >>> +}
> >>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h
> >>> b/drivers/gpu/drm/i915/i915_drm_client.h
> >>> new file mode 100644
> >>> index 000000000000..af6998c74d4c
> >>> --- /dev/null
> >>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
> >>> @@ -0,0 +1,64 @@
> >>> +// SPDX-License-Identifier: MIT
> >>> +/*
> >>> + * Copyright © 2020 Intel Corporation
> >>> + */
> >>> +
> >>> +#ifndef __I915_DRM_CLIENT_H__
> >>> +#define __I915_DRM_CLIENT_H__
> >>> +
> >>> +#include <linux/device.h>
> >>> +#include <linux/kobject.h>
> >>> +#include <linux/kref.h>
> >>> +#include <linux/pid.h>
> >>> +#include <linux/rcupdate.h>
> >>> +#include <linux/sched.h>
> >>> +#include <linux/xarray.h>
> >>> +
> >>> +struct i915_drm_clients {
> >>> +       struct xarray xarray;
> >>> +       u32 next_id;
> >>> +
> >>> +       struct kobject *root;
> >>> +};
> >>> +
> >>> +struct i915_drm_client {
> >>> +       struct kref kref;
> >>> +
> >>> +       struct rcu_head rcu;
> >>> +
> >>> +       unsigned int id;
> >>> +       struct pid *pid;
> >>> +       char *name;
> >>> +       bool closed;
> >>> +
> >>> +       struct i915_drm_clients *clients;
> >>> +
> >>> +       struct kobject *root;
> >>> +       struct {
> >>> +               struct device_attribute pid;
> >>> +               struct device_attribute name;
> >>> +       } attr;
> >>> +};
> >>> +
> >>> +void i915_drm_clients_init(struct i915_drm_clients *clients);
> >>> +
> >>> +static inline struct i915_drm_client *
> >>> +i915_drm_client_get(struct i915_drm_client *client)
> >>> +{
> >>> +       kref_get(&client->kref);
> >>> +       return client;
> >>> +}
> >>> +
> >>> +void __i915_drm_client_free(struct kref *kref);
> >>> +
> >>> +static inline void i915_drm_client_put(struct i915_drm_client *client)
> >>> +{
> >>> +       kref_put(&client->kref, __i915_drm_client_free);
> >>> +}
> >>> +
> >>> +void i915_drm_client_close(struct i915_drm_client *client);
> >>> +
> >>> +struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients
> >>> *clients,
> >>> +                                           struct task_struct *task);
> >>> +
> >>> +#endif /* !__I915_DRM_CLIENT_H__ */
> >>> diff --git a/drivers/gpu/drm/i915/i915_drv.c
> >>> b/drivers/gpu/drm/i915/i915_drv.c
> >>> index 641f5e03b661..dac84b17d23d 100644
> >>> --- a/drivers/gpu/drm/i915/i915_drv.c
> >>> +++ b/drivers/gpu/drm/i915/i915_drv.c
> >>> @@ -70,6 +70,7 @@
> >>>   #include "gt/intel_rc6.h"
> >>>
> >>>   #include "i915_debugfs.h"
> >>> +#include "i915_drm_client.h"
> >>>   #include "i915_drv.h"
> >>>   #include "i915_ioc32.h"
> >>>   #include "i915_irq.h"
> >>> @@ -456,6 +457,8 @@ static int i915_driver_early_probe(struct
> >>> drm_i915_private *dev_priv)
> >>>
> >>>          i915_gem_init_early(dev_priv);
> >>>
> >>> +       i915_drm_clients_init(&dev_priv->clients);
> >>> +
> >>>          /* This must be called before any calls to HAS_PCH_* */
> >>>          intel_detect_pch(dev_priv);
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_drv.h
> >>> b/drivers/gpu/drm/i915/i915_drv.h
> >>> index e9ee4daa9320..f9f0c3ba6e4a 100644
> >>> --- a/drivers/gpu/drm/i915/i915_drv.h
> >>> +++ b/drivers/gpu/drm/i915/i915_drv.h
> >>> @@ -91,6 +91,7 @@
> >>>   #include "intel_wakeref.h"
> >>>   #include "intel_wopcm.h"
> >>>
> >>> +#include "i915_drm_client.h"
> >>>   #include "i915_gem.h"
> >>>   #include "i915_gem_gtt.h"
> >>>   #include "i915_gpu_error.h"
> >>> @@ -226,6 +227,8 @@ struct drm_i915_file_private {
> >>>          /** ban_score: Accumulated score of all ctx bans and fast
> >>> hangs. */
> >>>          atomic_t ban_score;
> >>>          unsigned long hang_timestamp;
> >>> +
> >>> +       struct i915_drm_client *client;
> >>>   };
> >>>
> >>>   /* Interface history:
> >>> @@ -1201,6 +1204,8 @@ struct drm_i915_private {
> >>>
> >>>          struct i915_pmu pmu;
> >>>
> >>> +       struct i915_drm_clients clients;
> >>> +
> >>>          struct i915_hdcp_comp_master *hdcp_master;
> >>>          bool hdcp_comp_added;
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_gem.c
> >>> b/drivers/gpu/drm/i915/i915_gem.c
> >>> index 0cbcb9f54e7d..5a0b5fae8b92 100644
> >>> --- a/drivers/gpu/drm/i915/i915_gem.c
> >>> +++ b/drivers/gpu/drm/i915/i915_gem.c
> >>> @@ -1234,6 +1234,8 @@ void i915_gem_cleanup_early(struct
> >>> drm_i915_private *dev_priv)
> >>>          GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
> >>>          GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
> >>>          drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
> >>> +       drm_WARN_ON(&dev_priv->drm,
> >>> !xa_empty(&dev_priv->clients.xarray));
> >>> +       xa_destroy(&dev_priv->clients.xarray);
> >>>   }
> >>>
> >>>   int i915_gem_freeze(struct drm_i915_private *dev_priv)
> >>> @@ -1288,6 +1290,8 @@ void i915_gem_release(struct drm_device *dev,
> >>> struct drm_file *file)
> >>>          struct drm_i915_file_private *file_priv = file->driver_priv;
> >>>          struct i915_request *request;
> >>>
> >>> +       i915_drm_client_close(file_priv->client);
> >>> +
> >>>          /* Clean up our request list when the client is going away,
> >>> so that
> >>>           * later retire_requests won't dereference our soon-to-be-gone
> >>>           * file_priv.
> >>> @@ -1301,17 +1305,25 @@ void i915_gem_release(struct drm_device *dev,
> >>> struct drm_file *file)
> >>>   int i915_gem_open(struct drm_i915_private *i915, struct drm_file
> >>> *file)
> >>>   {
> >>>          struct drm_i915_file_private *file_priv;
> >>> -       int ret;
> >>> +       struct i915_drm_client *client;
> >>> +       int ret = -ENOMEM;
> >>>
> >>>          DRM_DEBUG("\n");
> >>>
> >>>          file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
> >>>          if (!file_priv)
> >>> -               return -ENOMEM;
> >>> +               goto err_alloc;
> >>> +
> >>> +       client = i915_drm_client_add(&i915->clients, current);
> >>> +       if (IS_ERR(client)) {
> >>> +               ret = PTR_ERR(client);
> >>> +               goto err_client;
> >>> +       }
> >>>
> >>>          file->driver_priv = file_priv;
> >>>          file_priv->dev_priv = i915;
> >>>          file_priv->file = file;
> >>> +       file_priv->client = client;
> >>>
> >>>          spin_lock_init(&file_priv->mm.lock);
> >>>          INIT_LIST_HEAD(&file_priv->mm.request_list);
> >>> @@ -1321,8 +1333,15 @@ int i915_gem_open(struct drm_i915_private
> >>> *i915, struct drm_file *file)
> >>>
> >>>          ret = i915_gem_context_open(i915, file);
> >>>          if (ret)
> >>> -               kfree(file_priv);
> >>> +               goto err_context;
> >>> +
> >>> +       return 0;
> >>>
> >>> +err_context:
> >>> +       i915_drm_client_close(client);
> >>> +err_client:
> >>> +       kfree(file_priv);
> >>> +err_alloc:
> >>>          return ret;
> >>>   }
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c
> >>> b/drivers/gpu/drm/i915/i915_sysfs.c
> >>> index 45d32ef42787..b7d4a6d2dd5c 100644
> >>> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> >>> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> >>> @@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private
> >>> *dev_priv)
> >>>          struct device *kdev = dev_priv->drm.primary->kdev;
> >>>          int ret;
> >>>
> >>> +       dev_priv->clients.root =
> >>> +               kobject_create_and_add("clients", &kdev->kobj);
> >>> +       if (!dev_priv->clients.root)
> >>> +               DRM_ERROR("Per-client sysfs setup failed\n");
> >>> +
> >>>   #ifdef CONFIG_PM
> >>>          if (HAS_RC6(dev_priv)) {
> >>>                  ret = sysfs_merge_group(&kdev->kobj,
> >>> @@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private
> >>> *dev_priv)
> >>>          sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
> >>>          sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
> >>>   #endif
> >>> +
> >>> +       if (dev_priv->clients.root)
> >>> +               kobject_put(dev_priv->clients.root);
> >>>   }
> >>> --
> >>> 2.20.1
> >>>
> >>> _______________________________________________
> >>> Intel-gfx mailing list
> >>> Intel-gfx@lists.freedesktop.org
> >>> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs
  2020-09-04  6:26         ` Lucas De Marchi
@ 2020-09-04 13:03           ` Tvrtko Ursulin
  0 siblings, 0 replies; 26+ messages in thread
From: Tvrtko Ursulin @ 2020-09-04 13:03 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: Intel Graphics, Chris Wilson


On 04/09/2020 07:26, Lucas De Marchi wrote:
> On Tue, Sep 1, 2020 at 8:25 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 01/09/2020 16:09, Tvrtko Ursulin wrote:
>>>
>>> Hi,
>>>
>>> On 26/08/2020 02:11, Lucas De Marchi wrote:
>>>> Hi,
>>>>
>>>> Any update on this? It now conflicts in a few places so it needs a
>>>> rebase.
>>>
>>> I don't see any previous email on the topic - what kind of update, where
>>> and how, are you looking for? Rebase against drm-tip so you pull it in?
>>> Rebase against some internal in progress branch?
>>
>> Clearly you were after an update against drm-tip. :) The problem here was
>> no userspace, but I can try to respin it.
> 
> Yes, against drm-tip. I rebased it, but I think there is something
> wrong with it.
> If you can share your version I can do some tests.

I've sent a series out just now. Tested it lightly (proper IGTs are 
still in progress) and it seems to work.

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-07-12 12:17 [PATCH 0/8] " Tvrtko Ursulin
@ 2021-07-12 12:35 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-07-12 12:35 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/92435/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
cd1de51f55b2 drm/i915: Explicitly track DRM clients
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 287 lines checked
f7063d21a85d drm/i915: Update client name on context create
8563c5aeca1a drm/i915: Make GEM contexts track DRM clients
db56ccda834e drm/i915: Track runtime spent in closed and unreachable GEM contexts
726f6083cc01 drm/i915: Track all user contexts per client
76b7c1579fce drm/i915: Track context current active time
-:139: WARNING:LINE_SPACING: Missing a blank line after declarations
#139: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:126:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
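
The LINE_SPACING warning above generally asks for a blank line between a
declaration block and the first statement that follows it (here it is likely
tripping over the I915_SELFTEST_DECLARE() wrapper). A generic sketch of the
shape checkpatch accepts, using a hypothetical struct that merely mirrors the
field names in the excerpt:

#include <linux/types.h>

struct ctx_stats {			/* hypothetical, not from the series */
	u32 last;
	u32 num_underflow;
};

static void note_underflow(struct ctx_stats *stats, u32 now)
{
	u32 last = stats->last;		/* declaration block */

	if (now < last)			/* blank line before the first statement */
		stats->num_underflow++;
	stats->last = now;
}
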
e43d1b5db399 drm/i915: Expose client engine utilisation via fdinfo
-:10: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 874442541133 ("drm/amdgpu: Add show_fdinfo() interface")'
#10: 
874442541133 ("drm/amdgpu: Add show_fdinfo() interface"), using the

total: 1 errors, 0 warnings, 0 checks, 101 lines checked
1fe5f6f4bff5 drm: Document fdinfo format specification
-:32: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
new file mode 100644

-:37: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#37: FILE: Documentation/gpu/drm-usage-stats.rst:1:
+.. _drm-client-usage-stats:

total: 0 errors, 2 warnings, 0 checks, 136 lines checked


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-05-20 15:12 [RFC 0/7] " Tvrtko Ursulin
@ 2021-05-20 15:55 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-05-20 15:55 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/90375/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
1455dee675ac drm/i915: Explicitly track DRM clients
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 287 lines checked
bfb2b185e63f drm/i915: Update client name on context create
8b3d87efebae drm/i915: Make GEM contexts track DRM clients
bc02e5505f79 drm/i915: Track runtime spent in closed and unreachable GEM contexts
f7fcc7b31e1a drm/i915: Track all user contexts per client
2b0b1913bd6c drm/i915: Track context current active time
-:136: WARNING:LINE_SPACING: Missing a blank line after declarations
#136: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:125:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
d5123f923671 drm/i915: Expose client engine utilisation via fdinfo
-:10: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 874442541133 ("drm/amdgpu: Add show_fdinfo() interface")'
#10: 
874442541133 ("drm/amdgpu: Add show_fdinfo() interface"), using the

-:31: WARNING:TYPO_SPELLING: 'writting' may be misspelled - perhaps 'writing'?
#31: 
in order to enable writting of generic top-like tools.
                   ^^^^^^^^

-:127: WARNING:PREFER_SEQ_PUTS: Prefer seq_puts to seq_printf
#127: FILE: drivers/gpu/drm/i915/i915_drm_client.c:228:
+	seq_printf(m, "drm-driver:\ti915\n");

total: 1 errors, 2 warnings, 0 checks, 96 lines checked
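
The PREFER_SEQ_PUTS warning fires because the flagged seq_printf() call has no
conversion specifiers. A sketch of the substitution checkpatch suggests, in a
hypothetical helper (the second key is only there for contrast):

#include <linux/seq_file.h>

static void show_client(struct seq_file *m, unsigned int id)
{
	seq_puts(m, "drm-driver:\ti915\n");		/* constant string */
	seq_printf(m, "drm-client-id:\t%u\n", id);	/* formatting still uses seq_printf() */
}
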


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-05-13 10:59 [PATCH 0/7] " Tvrtko Ursulin
@ 2021-05-13 11:28 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2021-05-13 11:28 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/90128/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
289c10899f79 drm/i915: Expose list of clients in sysfs
-:89: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#89: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 402 lines checked
d83c2db8842f drm/i915: Update client name on context create
71fdfda22feb drm/i915: Make GEM contexts track DRM clients
c8194e95eb88 drm/i915: Track runtime spent in closed and unreachable GEM contexts
fc8d2f8f24bf drm/i915: Track all user contexts per client
9ac58b591817 drm/i915: Track context current active time
-:138: WARNING:LINE_SPACING: Missing a blank line after declarations
#138: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:125:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
b83b94d7ebd7 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 152 lines checked


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-09-14 13:12 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
@ 2020-09-14 17:47 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-09-14 17:47 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/81652/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
ac3c07ab42f4 drm/i915: Expose list of clients in sysfs
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

-:274: WARNING:SPDX_LICENSE_TAG: Improper SPDX comment style for 'drivers/gpu/drm/i915/i915_drm_client.h', please use '/*' instead
#274: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

-:274: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#274: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

total: 0 errors, 3 warnings, 0 checks, 368 lines checked
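
For reference, the SPDX warnings above follow the kernel's license-rules
convention: C sources carry the tag as a C++-style comment on line 1, while
headers use a block comment. A sketch of both forms, with the MIT tag used by
this series:

/* Line 1 of i915_drm_client.c (C source, C++-style comment): */
// SPDX-License-Identifier: MIT

/* Line 1 of i915_drm_client.h (header, block comment): */
/* SPDX-License-Identifier: MIT */
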
5e257bb526a4 drm/i915: Update client name on context create
-:197: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#197: FILE: drivers/gpu/drm/i915/i915_drm_client.c:237:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 201 lines checked
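
The OOM_MESSAGE warning targets the drm_notice() in the quoted hunk: checkpatch
treats an extra log message on allocation failure as redundant, since the
allocator normally emits its own warning. A generic sketch of the shape it
prefers, not the exact fix for that hunk:

	/*
	 * On kstrdup() failure just bail out with -ENOMEM; the allocator
	 * already logged the failure, so no extra "out of memory" print.
	 */
	name = kstrdup(task->comm, GFP_KERNEL);
	if (!name)
		return -ENOMEM;
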
02a3cfd8e1dd drm/i915: Make GEM contexts track DRM clients
25668d5d310b drm/i915: Track runtime spent in unreachable intel_contexts
556043f2cc84 drm/i915: Track runtime spent in closed GEM contexts
71e7b25bc497 drm/i915: Track all user contexts per client
9cd6cdc6cf88 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
b58bb1362c49 drm/i915: Track context current active time
-:71: CHECK:LINE_SPACING: Please don't use multiple blank lines
#71: FILE: drivers/gpu/drm/i915/gt/intel_context.c:504:
+
+

-:134: WARNING:LINE_SPACING: Missing a blank line after declarations
#134: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:96:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 1 checks, 248 lines checked
2905d1efe3cd drm/i915: Prefer software tracked context busyness


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-09-04 12:59 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
@ 2020-09-04 14:05 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2020-09-04 14:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/81336/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
0a3f079e9a9b drm/i915: Expose list of clients in sysfs
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

-:89: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#89: FILE: drivers/gpu/drm/i915/i915_drm_client.c:1:
+/*

-:90: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#90: FILE: drivers/gpu/drm/i915/i915_drm_client.c:2:
+ * SPDX-License-Identifier: MIT

-:275: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#275: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+/*

-:276: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#276: FILE: drivers/gpu/drm/i915/i915_drm_client.h:2:
+ * SPDX-License-Identifier: MIT

total: 0 errors, 5 warnings, 0 checks, 370 lines checked
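
For context: in this run the tag sits on line 2, inside the opening
block comment, while checkpatch wants it alone on line 1 ("//" style for
the .c file; the header variant is noted after the earlier run above). A
sketch of the layout being asked for, with the rest of the header
comment left as it was:

	// SPDX-License-Identifier: MIT
	/*
	 * (rest of the original header comment unchanged)
	 */
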
83e61dc5c0ff drm/i915: Update client name on context create
-:197: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#197: FILE: drivers/gpu/drm/i915/i915_drm_client.c:238:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 198 lines checked
c3bc3f4edc15 drm/i915: Make GEM contexts track DRM clients
7e62f1a3be14 drm/i915: Track runtime spent in unreachable intel_contexts
99d0c612c5f9 drm/i915: Track runtime spent in closed GEM contexts
7375c4cd47bf drm/i915: Track all user contexts per client
817eec4963d2 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
9552c8ab393c drm/i915: Track context current active time
-:71: CHECK:LINE_SPACING: Please don't use multiple blank lines
#71: FILE: drivers/gpu/drm/i915/gt/intel_context.c:504:
+
+

-:134: WARNING:LINE_SPACING: Missing a blank line after declarations
#134: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:96:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 1 checks, 248 lines checked
bf6729db49be drm/i915: Prefer software tracked context busyness


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] " Tvrtko Ursulin
@ 2019-12-16 17:45 ` Patchwork
  0 siblings, 0 replies; 26+ messages in thread
From: Patchwork @ 2019-12-16 17:45 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/70977/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
03aa1c71a788 drm/i915: Track per-context engine busyness
de4bbb7cc78d drm/i915: Expose list of clients in sysfs
-:67: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#67: FILE: drivers/gpu/drm/i915/i915_drv.h:1296:
+		spinlock_t idr_lock;

-:124: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#124: FILE: drivers/gpu/drm/i915/i915_gem.c:1546:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 2 checks, 239 lines checked
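
For context: the two checks above ask for a comment saying what the
spinlock protects, and for continuation lines aligned with the opening
parenthesis. A sketch of both fixes, in which the lock's purpose and the
trailing parameters are assumptions rather than the patch's actual
contents:

	/* Protects the clients IDR (assumed purpose, for illustration). */
	spinlock_t idr_lock;

	i915_gem_add_client(struct drm_i915_private *i915,
			    struct drm_i915_file_private *file_priv,
			    ...)  /* further parameters elided in this sketch */
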
3caa68403667 drm/i915: Update client name on context create
-:74: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#74: FILE: drivers/gpu/drm/i915/i915_drv.h:1904:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 1 checks, 71 lines checked
db6e83f57574 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

-:98: CHECK:PREFER_KERNEL_TYPES: Prefer kernel type 'u64' over 'uint64_t'
#98: FILE: drivers/gpu/drm/i915/i915_gem.c:1557:
+	uint64_t total = bc->total;

-:125: WARNING:STATIC_CONST_CHAR_ARRAY: static const char * array should probably be static const char * const
#125: FILE: drivers/gpu/drm/i915/i915_gem.c:1584:
+static const char *uabi_class_names[] = {

total: 0 errors, 2 warnings, 1 checks, 157 lines checked
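
For context: the last two items map onto small kernel-style conventions,
sketched here with the array contents left out (the actual class-name
strings are not reproduced from the patch):

	u64 total = bc->total;	/* kernel type rather than uint64_t */

	static const char * const uabi_class_names[] = {
		/* class-name strings elided */
	};
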
24ccc195eaa8 drm/i915: Add sysfs toggle to enable per-client engine stats


Thread overview: 26+ messages
2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] Per client engine busyness Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 1/9] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2020-08-26  1:11   ` Lucas De Marchi
2020-09-01 15:09     ` Tvrtko Ursulin
2020-09-01 15:25       ` Tvrtko Ursulin
2020-09-04  6:26         ` Lucas De Marchi
2020-09-04 13:03           ` Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 2/9] drm/i915: Update client name on context create Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 3/9] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 4/9] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 5/9] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 6/9] drm/i915: Track all user contexts per client Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 7/9] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 8/9] drm/i915: Track context current active time Tvrtko Ursulin
2020-04-15 10:11 ` [Intel-gfx] [PATCH 9/9] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
2020-04-15 11:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness Patchwork
2020-04-15 11:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-04-15 11:25 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2020-04-15 11:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-04-16  8:01 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2021-07-12 12:17 [PATCH 0/8] " Tvrtko Ursulin
2021-07-12 12:35 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-05-20 15:12 [RFC 0/7] " Tvrtko Ursulin
2021-05-20 15:55 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-05-13 10:59 [PATCH 0/7] " Tvrtko Ursulin
2021-05-13 11:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-09-14 13:12 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-14 17:47 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-09-04 12:59 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-04 14:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] " Tvrtko Ursulin
2019-12-16 17:45 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
