* [Intel-gfx] [RFC 00/12] Per client engine busyness
@ 2020-03-09 18:31 Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
                   ` (18 more replies)
  0 siblings, 19 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * A different way of tracking the runtime of exited/unreachable contexts. This
   time round I accumulate those per context/client and engine class, while
   active contexts are kept in a list and tallied on sysfs reads.
 * I had to do a small tweak in the engine release code since I needed the
   GEM context for a bit longer. (So I can accumulate the intel_context runtime
   into it as it is getting freed, because context complete can be late.)
 * The PPHWSP method is back and even comes first in the series this time. It
   still can't show the currently running workloads, but the software tracking
   method suffers from the CSB processing delay with high frequency and very
   short batches.

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the want list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, cgroups controller, dynamic SSEU tuning,...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks.
Here is a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in the 'busy' directories are numbered using the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.
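
For illustration, a minimal user space sampler (a sketch only; the card0
path, client id 7 and engine class 0 are assumptions, and error handling
is omitted) could estimate percentage busyness by reading one of these
files twice:

	#include <stdio.h>
	#include <unistd.h>

	static unsigned long long read_busy_ns(const char *path)
	{
		unsigned long long ns = 0;
		FILE *f = fopen(path, "r");

		if (f) {
			fscanf(f, "%llu", &ns);
			fclose(f);
		}

		return ns;
	}

	int main(void)
	{
		/* Hypothetical client id 7, engine class 0 (render). */
		const char *path = "/sys/class/drm/card0/clients/7/busy/0";
		unsigned long long t0, t1;

		t0 = read_busy_ns(path);
		sleep(1);
		t1 = read_busy_ns(path);

		/* Delta of accumulated busy ns over ~1s -> percent. */
		printf("%.2f%%\n", (double)(t1 - t0) / 1e9 * 100.0);

		return 0;
	}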

It is still an RFC since it lacks dedicated test cases to ensure things really
work as advertised.

Tvrtko Ursulin (12):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Use explicit flag to mark unreachable intel_context
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track per-context engine busyness
  drm/i915: Carry over past software tracked context runtime
  drm/i915: Prefer software tracked context busyness
  compare runtimes

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  83 +++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  26 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   4 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  20 +
 drivers/gpu/drm/i915/gt/intel_context.h       |  13 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |  10 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  15 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  34 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  31 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 413 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        | 100 +++++
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  37 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  21 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 767 insertions(+), 56 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1

* [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-09 21:34   ` Chris Wilson
                     ` (2 more replies)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
                   ` (17 subsequent siblings)
  18 siblings, 3 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose a list of clients with open file handles in sysfs.

This will be a basis for a top-like utility showing per-client and per-
engine GPU load.

Currently we only expose each client's pid and name under opaque numbered
directories in /sys/class/drm/card0/clients/.

For instance:

/sys/class/drm/card0/clients/3/name: Xorg
/sys/class/drm/card0/clients/3/pid: 5664
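
As an illustration of consuming this interface (a sketch, not part of the
patch; card0 is assumed and error handling is minimal), user space could
enumerate clients like so:

	#include <dirent.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *root = "/sys/class/drm/card0/clients";
		struct dirent *d;
		DIR *dir = opendir(root);

		if (!dir)
			return 1;

		while ((d = readdir(dir))) {
			char path[512], line[80];
			FILE *f;

			if (d->d_name[0] == '.')
				continue;

			/* Print "<id>: <name>" for every client dir. */
			snprintf(path, sizeof(path), "%s/%s/name",
				 root, d->d_name);
			f = fopen(path, "r");
			if (!f)
				continue;
			if (fgets(line, sizeof(line), f)) {
				line[strcspn(line, "\n")] = '\0';
				printf("%s: %s\n", d->d_name, line);
			}
			fclose(f);
		}

		closedir(dir);
		return 0;
	}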

v2:
 Chris Wilson:
 * Enclose new members into dedicated structs.
 * Protect against failed sysfs registration.

v3:
 * sysfs_attr_init.

v4:
 * Fix for internal clients.

v5:
 * Use cyclic ida for client id. (Chris)
 * Do not leak pid reference. (Chris)
 * Tidy code with some locals.

v6:
 * Use xa_alloc_cyclic to simplify locking. (Chris)
 * No need to unregister individual sysfs files. (Chris)
 * Rebase on top of fpriv kref.
 * Track client closed status and reflect in sysfs.

v7:
 * Make drm_client more standalone concept.

v8:
 * Simplify sysfs show. (Chris)
 * Always track name and pid.

v9:
 * Fix cyclic id assignment.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/Makefile          |   3 +-
 drivers/gpu/drm/i915/i915_drm_client.c | 155 +++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h |  64 ++++++++++
 drivers/gpu/drm/i915/i915_drv.h        |   5 +
 drivers/gpu/drm/i915/i915_gem.c        |  37 ++++--
 drivers/gpu/drm/i915/i915_sysfs.c      |   8 ++
 6 files changed, 264 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 9f887a86e555..c6fc0f258ce3 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -36,7 +36,8 @@ subdir-ccflags-y += -I$(srctree)/$(src)
 # Please keep these build lists sorted!
 
 # core driver code
-i915-y += i915_drv.o \
+i915-y += i915_drm_client.o \
+	  i915_drv.o \
 	  i915_irq.o \
 	  i915_getparam.o \
 	  i915_params.o \
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
new file mode 100644
index 000000000000..fbba9667aa7e
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -0,0 +1,155 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "i915_drm_client.h"
+#include "i915_gem.h"
+#include "i915_utils.h"
+
+static ssize_t
+show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_drm_client *client =
+		container_of(attr, typeof(*client), attr.name);
+
+	return snprintf(buf, PAGE_SIZE,
+			client->closed ? "<%s>" : "%s",
+			client->name);
+}
+
+static ssize_t
+show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_drm_client *client =
+		container_of(attr, typeof(*client), attr.pid);
+
+	return snprintf(buf, PAGE_SIZE,
+			client->closed ? "<%u>" : "%u",
+			pid_nr(client->pid));
+}
+
+static int
+__i915_drm_client_register(struct i915_drm_client *client,
+			   struct task_struct *task)
+{
+	struct i915_drm_clients *clients = client->clients;
+	struct device_attribute *attr;
+	int ret = -ENOMEM;
+	char idstr[32];
+
+	client->pid = get_task_pid(task, PIDTYPE_PID);
+
+	client->name = kstrdup(task->comm, GFP_KERNEL);
+	if (!client->name)
+		goto err_name;
+
+	if (!clients->root)
+		return 0; /* intel_fbdev_init registers a client before sysfs */
+
+	snprintf(idstr, sizeof(idstr), "%u", client->id);
+	client->root = kobject_create_and_add(idstr, clients->root);
+	if (!client->root)
+		goto err_client;
+
+	attr = &client->attr.name;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "name";
+	attr->attr.mode = 0444;
+	attr->show = show_client_name;
+
+	ret = sysfs_create_file(client->root, (struct attribute *)attr);
+	if (ret)
+		goto err_attr;
+
+	attr = &client->attr.pid;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "pid";
+	attr->attr.mode = 0444;
+	attr->show = show_client_pid;
+
+	ret = sysfs_create_file(client->root, (struct attribute *)attr);
+	if (ret)
+		goto err_attr;
+
+	return 0;
+
+err_attr:
+	kobject_put(client->root);
+err_client:
+	kfree(client->name);
+err_name:
+	put_pid(client->pid);
+
+	return ret;
+}
+
+static void
+__i915_drm_client_unregister(struct i915_drm_client *client)
+{
+	put_pid(fetch_and_zero(&client->pid));
+	kfree(fetch_and_zero(&client->name));
+
+	if (!client->root)
+		return; /* fbdev client or error during drm open */
+
+	kobject_put(fetch_and_zero(&client->root));
+}
+
+struct i915_drm_client *
+i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
+{
+	struct i915_drm_client *client;
+	int ret;
+
+	client = kzalloc(sizeof(*client), GFP_KERNEL);
+	if (!client)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&client->kref);
+	client->clients = clients;
+
+	ret = mutex_lock_interruptible(&clients->lock);
+	if (ret)
+		goto err_id;
+	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
+			      xa_limit_32b, &clients->next_id, GFP_KERNEL);
+	mutex_unlock(&clients->lock);
+	if (ret)
+		goto err_id;
+
+	ret = __i915_drm_client_register(client, task);
+	if (ret)
+		goto err_register;
+
+	return client;
+
+err_register:
+	xa_erase(&clients->xarray, client->id);
+err_id:
+	kfree(client);
+
+	return ERR_PTR(ret);
+}
+
+void __i915_drm_client_free(struct kref *kref)
+{
+	struct i915_drm_client *client =
+		container_of(kref, typeof(*client), kref);
+
+	__i915_drm_client_unregister(client);
+	xa_erase(&client->clients->xarray, client->id);
+	kfree_rcu(client, rcu);
+}
+
+void i915_drm_client_close(struct i915_drm_client *client)
+{
+	GEM_BUG_ON(client->closed);
+	client->closed = true;
+	i915_drm_client_put(client);
+}
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
new file mode 100644
index 000000000000..fb5979ec92d7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -0,0 +1,64 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2020 Intel Corporation
+ */
+
+#ifndef __I915_DRM_CLIENT_H__
+#define __I915_DRM_CLIENT_H__
+
+#include <linux/device.h>
+#include <linux/kobject.h>
+#include <linux/kref.h>
+#include <linux/pid.h>
+#include <linux/rcupdate.h>
+#include <linux/sched.h>
+#include <linux/xarray.h>
+
+struct i915_drm_clients {
+	struct mutex lock;
+	struct xarray xarray;
+	u32 next_id;
+
+	struct kobject *root;
+};
+
+struct i915_drm_client {
+	struct kref kref;
+
+	struct rcu_head rcu;
+
+	unsigned int id;
+	struct pid *pid;
+	char *name;
+	bool closed;
+
+	struct i915_drm_clients *clients;
+
+	struct kobject *root;
+	struct {
+		struct device_attribute pid;
+		struct device_attribute name;
+	} attr;
+};
+
+static inline struct i915_drm_client *
+i915_drm_client_get(struct i915_drm_client *client)
+{
+	kref_get(&client->kref);
+	return client;
+}
+
+void __i915_drm_client_free(struct kref *kref);
+
+static inline void i915_drm_client_put(struct i915_drm_client *client)
+{
+	kref_put(&client->kref, __i915_drm_client_free);
+}
+
+void i915_drm_client_close(struct i915_drm_client *client);
+
+struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
+					    struct task_struct *task);
+
+#endif /* !__I915_DRM_CLIENT_H__ */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c081f4ec87df..2eda21d730eb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -91,6 +91,7 @@
 #include "intel_wakeref.h"
 #include "intel_wopcm.h"
 
+#include "i915_drm_client.h"
 #include "i915_gem.h"
 #include "i915_gem_fence_reg.h"
 #include "i915_gem_gtt.h"
@@ -220,6 +221,8 @@ struct drm_i915_file_private {
 	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
 	atomic_t ban_score;
 	unsigned long hang_timestamp;
+
+	struct i915_drm_client *client;
 };
 
 /* Interface history:
@@ -1193,6 +1196,8 @@ struct drm_i915_private {
 
 	struct i915_pmu pmu;
 
+	struct i915_drm_clients clients;
+
 	struct i915_hdcp_comp_master *hdcp_master;
 	bool hdcp_comp_added;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ca5420012a22..ec8734eb54eb 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1218,12 +1218,16 @@ static void i915_gem_init__mm(struct drm_i915_private *i915)
 	i915_gem_init__objects(i915);
 }
 
-void i915_gem_init_early(struct drm_i915_private *dev_priv)
+void i915_gem_init_early(struct drm_i915_private *i915)
 {
-	i915_gem_init__mm(dev_priv);
-	i915_gem_init__contexts(dev_priv);
+	i915_gem_init__mm(i915);
+	i915_gem_init__contexts(i915);
 
-	spin_lock_init(&dev_priv->fb_tracking.lock);
+	spin_lock_init(&i915->fb_tracking.lock);
+
+	mutex_init(&i915->clients.lock);
+	i915->clients.next_id = 0;
+	xa_init_flags(&i915->clients.xarray, XA_FLAGS_ALLOC);
 }
 
 void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
@@ -1232,6 +1236,8 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
 	GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
 	GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
 	drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
+	drm_WARN_ON(&dev_priv->drm, !xa_empty(&dev_priv->clients.xarray));
+	xa_destroy(&dev_priv->clients.xarray);
 }
 
 int i915_gem_freeze(struct drm_i915_private *dev_priv)
@@ -1286,6 +1292,8 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct i915_request *request;
 
+	i915_drm_client_close(file_priv->client);
+
 	/* Clean up our request list when the client is going away, so that
 	 * later retire_requests won't dereference our soon-to-be-gone
 	 * file_priv.
@@ -1299,17 +1307,25 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv;
-	int ret;
+	struct i915_drm_client *client;
+	int ret = -ENOMEM;
 
 	DRM_DEBUG("\n");
 
 	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
 	if (!file_priv)
-		return -ENOMEM;
+		goto err_alloc;
+
+	client = i915_drm_client_add(&i915->clients, current);
+	if (IS_ERR(client)) {
+		ret = PTR_ERR(client);
+		goto err_client;
+	}
 
 	file->driver_priv = file_priv;
 	file_priv->dev_priv = i915;
 	file_priv->file = file;
+	file_priv->client = client;
 
 	spin_lock_init(&file_priv->mm.lock);
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
@@ -1319,8 +1335,15 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 
 	ret = i915_gem_context_open(i915, file);
 	if (ret)
-		kfree(file_priv);
+		goto err_context;
+
+	return 0;
 
+err_context:
+	i915_drm_client_close(client);
+err_client:
+	kfree(file_priv);
+err_alloc:
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 45d32ef42787..b7d4a6d2dd5c 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -560,6 +560,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	struct device *kdev = dev_priv->drm.primary->kdev;
 	int ret;
 
+	dev_priv->clients.root =
+		kobject_create_and_add("clients", &kdev->kobj);
+	if (!dev_priv->clients.root)
+		DRM_ERROR("Per-client sysfs setup failed\n");
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -627,4 +632,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
+
+	if (dev_priv->clients.root)
+		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1

* [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:11   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some clients have the DRM fd passed to them over a socket by the X server.

Grab the real client and pid when they create their first context and
update the exposed data for more useful enumeration.

To enable lockless access to client name and pid data from the following
patches, we also make these fields RCU protected. This handles the
asynchronous code paths: both contexts which remain after the client has
exited, and accesses to client name and pid while they are being updated,
since context creation can run in parallel with name/pid queries.

v2:
 * Do not leak the pid reference and borrow context idr_lock. (Chris)

v3:
 * More avoiding leaks. (Chris)

v4:
 * Move update completely to drm client. (Chris)
 * Do not lose previous client data on failure to re-register and simplify
   update to only touch what it needs.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c |   8 +-
 drivers/gpu/drm/i915/i915_drm_client.c      | 110 +++++++++++++++++---
 drivers/gpu/drm/i915/i915_drm_client.h      |  10 +-
 3 files changed, 113 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index cb6b6be48978..2c3fd9748d39 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -74,6 +74,7 @@
 #include "gt/intel_engine_user.h"
 #include "gt/intel_ring.h"
 
+#include "i915_drm_client.h"
 #include "i915_gem_context.h"
 #include "i915_globals.h"
 #include "i915_trace.h"
@@ -2294,6 +2295,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_private *i915 = to_i915(dev);
 	struct drm_i915_gem_context_create_ext *args = data;
+	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct create_ext ext_data;
 	int ret;
 	u32 id;
@@ -2308,7 +2310,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	ext_data.fpriv = file->driver_priv;
+	ext_data.fpriv = file_priv;
 	if (client_is_banned(ext_data.fpriv)) {
 		drm_dbg(&i915->drm,
 			"client %s[%d] banned from creating ctx\n",
@@ -2316,6 +2318,10 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 		return -EIO;
 	}
 
+	ret = i915_drm_client_update(file_priv->client, current);
+	if (ret)
+		return ret;
+
 	ext_data.ctx = i915_gem_create_context(i915, args->flags);
 	if (IS_ERR(ext_data.ctx))
 		return PTR_ERR(ext_data.ctx);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index fbba9667aa7e..a3de02cc3e6b 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -8,6 +8,9 @@
 #include <linux/slab.h>
 #include <linux/types.h>
 
+#include <drm/drm_print.h>
+
+#include "i915_drv.h"
 #include "i915_drm_client.h"
 #include "i915_gem.h"
 #include "i915_utils.h"
@@ -17,10 +20,15 @@ show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct i915_drm_client *client =
 		container_of(attr, typeof(*client), attr.name);
+	int ret;
 
-	return snprintf(buf, PAGE_SIZE,
-			client->closed ? "<%s>" : "%s",
-			client->name);
+	rcu_read_lock();
+	ret = snprintf(buf, PAGE_SIZE,
+		       client->closed ? "<%s>" : "%s",
+		       rcu_dereference(client->name));
+	rcu_read_unlock();
+
+	return ret;
 }
 
 static ssize_t
@@ -28,10 +36,15 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 {
 	struct i915_drm_client *client =
 		container_of(attr, typeof(*client), attr.pid);
+	int ret;
+
+	rcu_read_lock();
+	ret = snprintf(buf, PAGE_SIZE,
+		       client->closed ? "<%u>" : "%u",
+		       pid_nr(rcu_dereference(client->pid)));
+	rcu_read_unlock();
 
-	return snprintf(buf, PAGE_SIZE,
-			client->closed ? "<%u>" : "%u",
-			pid_nr(client->pid));
+	return ret;
 }
 
 static int
@@ -42,12 +55,14 @@ __i915_drm_client_register(struct i915_drm_client *client,
 	struct device_attribute *attr;
 	int ret = -ENOMEM;
 	char idstr[32];
+	char *name;
 
-	client->pid = get_task_pid(task, PIDTYPE_PID);
+	rcu_assign_pointer(client->pid, get_task_pid(task, PIDTYPE_PID));
 
-	client->name = kstrdup(task->comm, GFP_KERNEL);
-	if (!client->name)
+	name = kstrdup(task->comm, GFP_KERNEL);
+	if (!name)
 		goto err_name;
+	rcu_assign_pointer(client->name, name);
 
 	if (!clients->root)
 		return 0; /* intel_fbdev_init registers a client before sysfs */
@@ -82,7 +97,7 @@ __i915_drm_client_register(struct i915_drm_client *client,
 err_attr:
 	kobject_put(client->root);
 err_client:
-	kfree(client->name);
+	kfree(name);
 err_name:
 	put_pid(client->pid);
 
@@ -92,8 +107,8 @@ __i915_drm_client_register(struct i915_drm_client *client,
 static void
 __i915_drm_client_unregister(struct i915_drm_client *client)
 {
-	put_pid(fetch_and_zero(&client->pid));
-	kfree(fetch_and_zero(&client->name));
+	put_pid(rcu_replace_pointer(client->pid, NULL, true));
+	kfree(rcu_replace_pointer(client->name, NULL, true));
 
 	if (!client->root)
 		return; /* fbdev client or error during drm open */
@@ -112,6 +127,7 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
 		return ERR_PTR(-ENOMEM);
 
 	kref_init(&client->kref);
+	mutex_init(&client->update_lock);
 	client->clients = clients;
 
 	ret = mutex_lock_interruptible(&clients->lock);
@@ -153,3 +169,73 @@ void i915_drm_client_close(struct i915_drm_client *client)
 	client->closed = true;
 	i915_drm_client_put(client);
 }
+
+struct client_update_free
+{
+	struct rcu_head rcu;
+	struct pid *pid;
+	char *name;
+};
+
+static void __client_update_free(struct rcu_head *rcu)
+{
+	struct client_update_free *old = container_of(rcu, typeof(*old), rcu);
+
+	put_pid(old->pid);
+	kfree(old->name);
+	kfree(old);
+}
+
+int
+i915_drm_client_update(struct i915_drm_client *client,
+		       struct task_struct *task)
+{
+	struct drm_i915_private *i915 =
+		container_of(client->clients, typeof(*i915), clients);
+	struct client_update_free *old;
+	struct pid *pid;
+	char *name;
+	int ret;
+
+	old = kmalloc(sizeof(*old), GFP_KERNEL);
+	if (!old)
+		return -ENOMEM;
+
+	ret = mutex_lock_interruptible(&client->update_lock);
+	if (ret)
+		goto out_free;
+
+	pid = get_task_pid(task, PIDTYPE_PID);
+	if (!pid)
+		goto out_pid;
+	if (pid == client->pid)
+		goto out_name;
+
+	name = kstrdup(task->comm, GFP_KERNEL);
+	if (!name) {
+		drm_notice(&i915->drm,
+			   "Failed to update client id=%u,name=%s,pid=%u! (%d)\n",
+			   client->id, client->name, pid_nr(client->pid), ret);
+		goto out_name;
+	}
+
+	init_rcu_head(&old->rcu);
+
+	old->pid = rcu_replace_pointer(client->pid, pid, true);
+	old->name = rcu_replace_pointer(client->name, name, true);
+
+	mutex_unlock(&client->update_lock);
+
+	call_rcu(&old->rcu, __client_update_free);
+
+	return 0;
+
+out_name:
+	put_pid(pid);
+out_pid:
+	mutex_unlock(&client->update_lock);
+out_free:
+	kfree(old);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index fb5979ec92d7..7825df32798d 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/kobject.h>
 #include <linux/kref.h>
+#include <linux/mutex.h>
 #include <linux/pid.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
@@ -28,9 +29,11 @@ struct i915_drm_client {
 
 	struct rcu_head rcu;
 
+	struct mutex update_lock;
+
 	unsigned int id;
-	struct pid *pid;
-	char *name;
+	struct pid __rcu *pid;
+	char __rcu *name;
 	bool closed;
 
 	struct i915_drm_clients *clients;
@@ -61,4 +64,7 @@ void i915_drm_client_close(struct i915_drm_client *client);
 struct i915_drm_client *i915_drm_client_add(struct i915_drm_clients *clients,
 					    struct task_struct *task);
 
+int i915_drm_client_update(struct i915_drm_client *client,
+			   struct task_struct *task);
+
 #endif /* !__I915_DRM_CLIENT_H__ */
-- 
2.20.1

* [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:20   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

If we make GEM contexts keep a reference to i915_drm_client for the whole
of their lifetime, we can consolidate the current task pid and name usage
by getting it from the client.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   | 23 +++++++++++---
 .../gpu/drm/i915/gem/i915_gem_context_types.h | 13 ++------
 drivers/gpu/drm/i915/i915_debugfs.c           | 31 +++++++++----------
 drivers/gpu/drm/i915/i915_gpu_error.c         | 21 +++++++------
 4 files changed, 48 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 2c3fd9748d39..0f4150c8d7fe 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -300,8 +300,13 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
 
 static void i915_gem_context_free(struct i915_gem_context *ctx)
 {
+	struct i915_drm_client *client = ctx->client;
+
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 
+	if (client)
+		i915_drm_client_put(client);
+
 	spin_lock(&ctx->i915->gem.contexts.lock);
 	list_del(&ctx->link);
 	spin_unlock(&ctx->i915->gem.contexts.lock);
@@ -311,7 +316,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	if (ctx->timeline)
 		intel_timeline_put(ctx->timeline);
 
-	put_pid(ctx->pid);
 	mutex_destroy(&ctx->mutex);
 
 	kfree_rcu(ctx, rcu);
@@ -899,6 +903,7 @@ static int gem_context_register(struct i915_gem_context *ctx,
 				struct drm_i915_file_private *fpriv,
 				u32 *id)
 {
+	struct i915_drm_client *client;
 	struct i915_address_space *vm;
 	int ret;
 
@@ -910,15 +915,25 @@ static int gem_context_register(struct i915_gem_context *ctx,
 		WRITE_ONCE(vm->file, fpriv); /* XXX */
 	mutex_unlock(&ctx->mutex);
 
-	ctx->pid = get_task_pid(current, PIDTYPE_PID);
+	client = i915_drm_client_get(fpriv->client);
+
+	rcu_read_lock();
 	snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
-		 current->comm, pid_nr(ctx->pid));
+		 rcu_dereference(client->name),
+		 pid_nr(rcu_dereference(client->pid)));
+	rcu_read_unlock();
 
 	/* And finally expose ourselves to userspace via the idr */
 	ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL);
 	if (ret)
-		put_pid(fetch_and_zero(&ctx->pid));
+		goto err;
+
+	ctx->client = client;
 
+	return 0;
+
+err:
+	i915_drm_client_put(client);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 28760bd03265..b0e03380c690 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -96,20 +96,13 @@ struct i915_gem_context {
 	 */
 	struct i915_address_space __rcu *vm;
 
-	/**
-	 * @pid: process id of creator
-	 *
-	 * Note that who created the context may not be the principle user,
-	 * as the context may be shared across a local socket. However,
-	 * that should only affect the default context, all contexts created
-	 * explicitly by the client are expected to be isolated.
-	 */
-	struct pid *pid;
-
 	/** link: place with &drm_i915_private.context_list */
 	struct list_head link;
 	struct llist_node free_link;
 
+	/** client: struct i915_drm_client */
+	struct i915_drm_client *client;
+
 	/**
 	 * @ref: reference count
 	 *
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8f2525e4ce0f..0655f1e7527d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -330,17 +330,17 @@ static void print_context_stats(struct seq_file *m,
 				.vm = rcu_access_pointer(ctx->vm),
 			};
 			struct drm_file *file = ctx->file_priv->file;
-			struct task_struct *task;
 			char name[80];
 
 			rcu_read_lock();
+
 			idr_for_each(&file->object_idr, per_file_stats, &stats);
-			rcu_read_unlock();
 
-			rcu_read_lock();
-			task = pid_task(ctx->pid ?: file->pid, PIDTYPE_PID);
 			snprintf(name, sizeof(name), "%s",
-				 task ? task->comm : "<unknown>");
+				 I915_SELFTEST_ONLY(!ctx->client) ?
+				 "[kernel]" :
+				 rcu_dereference(ctx->client->name));
+
 			rcu_read_unlock();
 
 			print_file_stats(m, name, stats);
@@ -1273,19 +1273,16 @@ static int i915_context_status(struct seq_file *m, void *unused)
 		spin_unlock(&i915->gem.contexts.lock);
 
 		seq_puts(m, "HW context ");
-		if (ctx->pid) {
-			struct task_struct *task;
-
-			task = get_pid_task(ctx->pid, PIDTYPE_PID);
-			if (task) {
-				seq_printf(m, "(%s [%d]) ",
-					   task->comm, task->pid);
-				put_task_struct(task);
-			}
-		} else if (IS_ERR(ctx->file_priv)) {
-			seq_puts(m, "(deleted) ");
+
+		if (I915_SELFTEST_ONLY(!ctx->client)) {
+			seq_puts(m, "([kernel]) ");
 		} else {
-			seq_puts(m, "(kernel) ");
+			rcu_read_lock();
+			seq_printf(m, "(%s [%d]) %s",
+				   rcu_dereference(ctx->client->name),
+				   pid_nr(rcu_dereference(ctx->client->pid)),
+				   ctx->client->closed ? "(closed) " : "");
+			rcu_read_unlock();
 		}
 
 		seq_putc(m, ctx->remap_slice ? 'R' : 'r');
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 2a4cd0ba5464..653e1bc5050e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1221,7 +1221,8 @@ static void record_request(const struct i915_request *request,
 	rcu_read_lock();
 	ctx = rcu_dereference(request->context->gem_context);
 	if (ctx)
-		erq->pid = pid_nr(ctx->pid);
+		erq->pid = I915_SELFTEST_ONLY(!ctx->client) ?
+			   0 : pid_nr(rcu_dereference(ctx->client->pid));
 	rcu_read_unlock();
 }
 
@@ -1241,23 +1242,25 @@ static bool record_context(struct i915_gem_context_coredump *e,
 			   const struct i915_request *rq)
 {
 	struct i915_gem_context *ctx;
-	struct task_struct *task;
 	bool simulated;
 
 	rcu_read_lock();
+
 	ctx = rcu_dereference(rq->context->gem_context);
 	if (ctx && !kref_get_unless_zero(&ctx->ref))
 		ctx = NULL;
-	rcu_read_unlock();
-	if (!ctx)
+	if (!ctx) {
+		rcu_read_unlock();
 		return true;
+	}
 
-	rcu_read_lock();
-	task = pid_task(ctx->pid, PIDTYPE_PID);
-	if (task) {
-		strcpy(e->comm, task->comm);
-		e->pid = task->pid;
+	if (I915_SELFTEST_ONLY(!ctx->client)) {
+		strcpy(e->comm, "[kernel]");
+	} else {
+		strcpy(e->comm, rcu_dereference(ctx->client->name));
+		e->pid = pid_nr(rcu_dereference(ctx->client->pid));
 	}
+
 	rcu_read_unlock();
 
 	e->sched_attr = ctx->sched;
-- 
2.20.1

* [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (2 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 15:30   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
                   ` (14 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

I need to keep the GEM context around a bit longer, so add an explicit
flag for syncing execbuf with closed/abandoned contexts.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c    | 3 ++-
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 4 ++--
 drivers/gpu/drm/i915/gt/intel_context_types.h  | 1 +
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 0f4150c8d7fe..abc3a3e2fcf1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -568,7 +568,8 @@ static void engines_idle_release(struct i915_gem_context *ctx,
 		int err = 0;
 
 		/* serialises with execbuf */
-		RCU_INIT_POINTER(ce->gem_context, NULL);
+		smp_store_mb(ce->closed, true);
+
 		if (!intel_context_pin_if_active(ce))
 			continue;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 0893ce781a84..0302757396d5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2547,8 +2547,8 @@ static void eb_request_add(struct i915_execbuffer *eb)
 	prev = __i915_request_commit(rq);
 
 	/* Check that the context wasn't destroyed before submission */
-	if (likely(rcu_access_pointer(eb->context->gem_context))) {
-		attr = eb->gem_context->sched;
+	if (likely(!READ_ONCE(eb->context->closed))) {
+		attr = rcu_dereference(eb->gem_context)->sched;
 
 		/*
 		 * Boost actual workloads past semaphores!
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 11278343b9b5..c60490e756f9 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -50,6 +50,7 @@ struct intel_context {
 
 	struct i915_address_space *vm;
 	struct i915_gem_context __rcu *gem_context;
+	bool closed;
 
 	struct list_head signal_link;
 	struct list_head signals;
-- 
2.20.1

* [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (3 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:25   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

As contexts are abandoned we want to remember how much GPU time they used
(per class) so later we can use it for smarter purposes.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 ++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  5 +++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index abc3a3e2fcf1..5f6861a36655 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -257,7 +257,19 @@ static void free_engines_rcu(struct rcu_head *rcu)
 {
 	struct i915_gem_engines *engines =
 		container_of(rcu, struct i915_gem_engines, rcu);
+	struct i915_gem_context *ctx = engines->ctx;
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+
+	/* Transfer accumulated runtime to the parent GEM context. */
+	for_each_gem_engine(ce, engines, it) {
+		unsigned int class = ce->engine->uabi_class;
 
+		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
+		atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
+	}
+
+	i915_gem_context_put(ctx);
 	i915_sw_fence_fini(&engines->fence);
 	free_engines(engines);
 }
@@ -540,7 +552,6 @@ static int engines_notify(struct i915_sw_fence *fence,
 			list_del(&engines->link);
 			spin_unlock_irqrestore(&ctx->stale.lock, flags);
 		}
-		i915_gem_context_put(engines->ctx);
 		break;
 
 	case FENCE_FREE:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index b0e03380c690..f0d7441aafc8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -177,6 +177,11 @@ struct i915_gem_context {
 		spinlock_t lock;
 		struct list_head engines;
 	} stale;
+
+	/**
+	 * @past_runtime: Accumulation of freed intel_context pphwsp runtimes.
+	 */
+	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 #endif /* __I915_GEM_CONTEXT_TYPES_H__ */
-- 
2.20.1

* [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (4 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:28   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client Tvrtko Ursulin
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

As GEM contexts are closed we want to have the DRM client remember how
much GPU time they used (per class) so later we can use it for smarter
purposes.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 12 +++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h      |  7 +++++++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 5f6861a36655..d99143dca0ab 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -316,8 +316,18 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 
 	GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
 
-	if (client)
+	if (client) {
+		unsigned int i;
+
+		/* Transfer accumulated runtime to the parent drm client. */
+		BUILD_BUG_ON(ARRAY_SIZE(client->past_runtime) !=
+			     ARRAY_SIZE(ctx->past_runtime));
+		for (i = 0; i < ARRAY_SIZE(client->past_runtime); i++)
+			atomic64_add(atomic64_read(&ctx->past_runtime[i]),
+				     &client->past_runtime[i]);
+
 		i915_drm_client_put(client);
+	}
 
 	spin_lock(&ctx->i915->gem.contexts.lock);
 	list_del(&ctx->link);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 7825df32798d..10752107e8c7 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -16,6 +16,8 @@
 #include <linux/sched.h>
 #include <linux/xarray.h>
 
+#include "gt/intel_engine_types.h"
+
 struct i915_drm_clients {
 	struct mutex lock;
 	struct xarray xarray;
@@ -43,6 +45,11 @@ struct i915_drm_client {
 		struct device_attribute pid;
 		struct device_attribute name;
 	} attr;
+
+	/**
+	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
+	 */
+	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 static inline struct i915_drm_client *
-- 
2.20.1

* [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

We soon want to start answering questions like how much GPU time contexts
belonging to a client which has exited are still using.

To enable this we start tracking all contexts belonging to a client on a
separate list.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 8 ++++++++
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h | 3 +++
 drivers/gpu/drm/i915/i915_drm_client.c            | 3 +++
 drivers/gpu/drm/i915/i915_drm_client.h            | 5 +++++
 4 files changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index d99143dca0ab..d3887712f8c3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -319,6 +319,10 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 	if (client) {
 		unsigned int i;
 
+		spin_lock(&client->ctx_lock);
+		list_del_rcu(&ctx->client_link);
+		spin_unlock(&client->ctx_lock);
+
 		/* Transfer accumulated runtime to the parent drm client. */
 		BUILD_BUG_ON(ARRAY_SIZE(client->past_runtime) !=
 			     ARRAY_SIZE(ctx->past_runtime));
@@ -952,6 +956,10 @@ static int gem_context_register(struct i915_gem_context *ctx,
 
 	ctx->client = client;
 
+	spin_lock(&client->ctx_lock);
+	list_add_tail_rcu(&ctx->client_link, &client->ctx_list);
+	spin_unlock(&client->ctx_lock);
+
 	return 0;
 
 err:
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index f0d7441aafc8..255fcc469d9b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -103,6 +103,9 @@ struct i915_gem_context {
 	/** client: struct i915_drm_client */
 	struct i915_drm_client *client;
 
+	/** link: &drm_client.context_list */
+	struct list_head client_link;
+
 	/**
 	 * @ref: reference count
 	 *
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index a3de02cc3e6b..c9a510c6c6d4 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -128,6 +128,9 @@ i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
 
 	kref_init(&client->kref);
 	mutex_init(&client->update_lock);
+	spin_lock_init(&client->ctx_lock);
+	INIT_LIST_HEAD(&client->ctx_list);
+
 	client->clients = clients;
 
 	ret = mutex_lock_interruptible(&clients->lock);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 10752107e8c7..0a9f2c0c12dd 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -10,10 +10,12 @@
 #include <linux/device.h>
 #include <linux/kobject.h>
 #include <linux/kref.h>
+#include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/pid.h>
 #include <linux/rcupdate.h>
 #include <linux/sched.h>
+#include <linux/spinlock.h>
 #include <linux/xarray.h>
 
 #include "gt/intel_engine_types.h"
@@ -38,6 +40,9 @@ struct i915_drm_client {
 	char __rcu *name;
 	bool closed;
 
+	spinlock_t ctx_lock;
+	struct list_head ctx_list;
+
 	struct i915_drm_clients *clients;
 
 	struct kobject *root;
-- 
2.20.1

* [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:32   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose per-client and per-engine busyness under the previously added sysfs
client root.

The new files are one per engine class and located under the 'busy'
directory. Each contains a monotonically increasing, nanosecond resolution
count of the time each client's jobs were executing on the GPU.

This enables userspace to create a top-like tool for GPU utilization:

==========================================================================
intel-gpu-top -  935/ 935 MHz;    0% RC6; 14.73 Watts;     1097 irqs/s

      IMC reads:     1401 MiB/s
     IMC writes:        4 MiB/s

          ENGINE      BUSY                                 MI_SEMA MI_WAIT
     Render/3D/0   63.73% |███████████████████           |      3%      0%
       Blitter/0    9.53% |██▊                           |      6%      0%
         Video/0   39.32% |███████████▊                  |     16%      0%
         Video/1   15.62% |████▋                         |      0%      0%
  VideoEnhance/0    0.00% |                              |      0%      0%

  PID            NAME     RCS          BCS          VCS         VECS
 4084        gem_wsim |█████▌     ||█          ||           ||           |
 4086        gem_wsim |█▌         ||           ||███        ||           |
==========================================================================

v2: Use intel_context_engine_get_busy_time.
v3: New directory structure.
v4: Rebase.
v5: sysfs_attr_init.
v6: Small tidy in i915_gem_add_client.
v7: Rebase to be engine class based.
v8:
 * Always enable stats.
 * Walk all client contexts.
v9:
 * Skip unsupported engine classes. (Chris)
 * Use scheduler caps. (Chris)
v10:
 * Use pphwsp runtime only.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drm_client.c | 90 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h | 11 ++++
 2 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index c9a510c6c6d4..6df5a21f5d4e 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -10,8 +10,13 @@
 
 #include <drm/drm_print.h>
 
+#include <uapi/drm/i915_drm.h>
+
 #include "i915_drv.h"
 #include "i915_drm_client.h"
+#include "gem/i915_gem_context.h"
+#include "gt/intel_engine_user.h"
+#include "i915_drv.h"
 #include "i915_gem.h"
 #include "i915_utils.h"
 
@@ -47,13 +52,61 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return ret;
 }
 
+static u64
+pphwsp_busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines *engines = rcu_dereference(ctx->engines);
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	u64 total = 0;
+
+	for_each_gem_engine(ce, engines, it) {
+		if (ce->engine->uabi_class == class)
+			total += ce->runtime.total;
+	}
+
+	return total;
+}
+
+static ssize_t
+show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	unsigned int class = i915_attr->engine_class;
+	struct i915_drm_client *client = i915_attr->client;
+	u64 total = atomic64_read(&client->past_runtime[class]);
+	struct list_head *list = &client->ctx_list;
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link) {
+		total += atomic64_read(&ctx->past_runtime[class]);
+		total += pphwsp_busy_add(ctx, class);
+	}
+	rcu_read_unlock();
+
+	total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
+}
+
+static const char *uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "0",
+	[I915_ENGINE_CLASS_COPY] = "1",
+	[I915_ENGINE_CLASS_VIDEO] = "2",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "3",
+};
+
 static int
 __i915_drm_client_register(struct i915_drm_client *client,
 			   struct task_struct *task)
 {
 	struct i915_drm_clients *clients = client->clients;
+	struct drm_i915_private *i915 =
+		container_of(clients, typeof(*i915), clients);
 	struct device_attribute *attr;
-	int ret = -ENOMEM;
+	int i, ret = -ENOMEM;
 	char idstr[32];
 	char *name;
 
@@ -92,8 +145,42 @@ __i915_drm_client_register(struct i915_drm_client *client,
 	if (ret)
 		goto err_attr;
 
+	if (HAS_LOGICAL_RING_CONTEXTS(i915)) {
+		client->busy_root =
+			kobject_create_and_add("busy", client->root);
+		if (!client->busy_root)
+			goto err_attr;
+
+		for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+			struct i915_engine_busy_attribute *i915_attr =
+				&client->attr.busy[i];
+
+			if (!intel_engine_lookup_user(i915, i, 0))
+				continue;
+
+			i915_attr->client = client;
+			i915_attr->i915 = i915;
+			i915_attr->engine_class = i;
+
+			attr = &i915_attr->attr;
+
+			sysfs_attr_init(&attr->attr);
+
+			attr->attr.name = uabi_class_names[i];
+			attr->attr.mode = 0444;
+			attr->show = show_client_busy;
+
+			ret = sysfs_create_file(client->busy_root,
+						(struct attribute *)attr);
+			if (ret)
+				goto err_busy;
+		}
+	}
+
 	return 0;
 
+err_busy:
+	kobject_put(client->busy_root);
 err_attr:
 	kobject_put(client->root);
 err_client:
@@ -113,6 +200,7 @@ __i915_drm_client_unregister(struct i915_drm_client *client)
 	if (!client->root)
 		return; /* fbdev client or error during drm open */
 
+	kobject_put(fetch_and_zero(&client->busy_root));
 	kobject_put(fetch_and_zero(&client->root));
 }
 
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 0a9f2c0c12dd..da83259170e7 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -28,6 +28,15 @@ struct i915_drm_clients {
 	struct kobject *root;
 };
 
+struct i915_drm_client;
+
+struct i915_engine_busy_attribute {
+	struct device_attribute attr;
+	struct drm_i915_private *i915;
+	struct i915_drm_client *client;
+	unsigned int engine_class;
+};
+
 struct i915_drm_client {
 	struct kref kref;
 
@@ -46,9 +55,11 @@ struct i915_drm_client {
 	struct i915_drm_clients *clients;
 
 	struct kobject *root;
+	struct kobject *busy_root;
 	struct {
 		struct device_attribute pid;
 		struct device_attribute name;
+		struct i915_engine_busy_attribute busy[MAX_ENGINE_CLASS + 1];
 	} attr;
 
 	/**
-- 
2.20.1

* [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (7 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-10 18:36   ` Chris Wilson
  2020-03-09 18:31 ` [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime Tvrtko Ursulin
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some customers want to know how much of the GPU time their clients are
using in order to make dynamic load balancing decisions.

With the hooks already in place which track the overall engine busyness,
we can extend that slightly to split that time between contexts.

v2: Fix accounting for tail updates.
v3: Rebase.
v4: Mark currently running contexts as active on stats enable.
v5: Include some headers to fix the build.
v6: Added fine grained lock.
v7: Convert to seqlock. (Chris Wilson)
v8: Rebase and tidy with helpers.
v9: Refactor.
v10: Move recording start to promotion. (Chris)
v11: Consolidate duplicated code. (Chris)
v12: execlists->active cannot be NULL. (Chris)
v13: Move start to set_timeslice. (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c       | 20 +++++++++++
 drivers/gpu/drm/i915/gt/intel_context.h       | 13 +++++++
 drivers/gpu/drm/i915/gt/intel_context_types.h |  9 +++++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     | 15 ++++++--
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 34 ++++++++++++++++++-
 5 files changed, 88 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 01474d3a558b..c09b5fe7f61d 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -296,6 +296,7 @@ intel_context_init(struct intel_context *ce,
 	INIT_LIST_HEAD(&ce->signals);
 
 	mutex_init(&ce->pin_mutex);
+	seqlock_init(&ce->stats.lock);
 
 	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire);
@@ -390,6 +391,25 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
 	return rq;
 }
 
+ktime_t intel_context_get_busy_time(struct intel_context *ce)
+{
+	unsigned int seq;
+	ktime_t total;
+
+	do {
+		seq = read_seqbegin(&ce->stats.lock);
+
+		total = ce->stats.total;
+
+		if (ce->stats.active)
+			total = ktime_add(total,
+					  ktime_sub(ktime_get(),
+						    ce->stats.start));
+	} while (read_seqretry(&ce->stats.lock, seq));
+
+	return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 18efad255124..b18b0012cb40 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -244,4 +244,17 @@ static inline u64 intel_context_get_avg_runtime_ns(struct intel_context *ce)
 	return mul_u32_u32(ewma_runtime_read(&ce->runtime.avg), period);
 }
 
+static inline void
+__intel_context_stats_start(struct intel_context *ce, ktime_t now)
+{
+	struct intel_context_stats *stats = &ce->stats;
+
+	if (!stats->active) {
+		stats->start = now;
+		stats->active = true;
+	}
+}
+
+ktime_t intel_context_get_busy_time(struct intel_context *ce);
+
 #endif /* __INTEL_CONTEXT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index c60490e756f9..120532ddd6fa 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -12,6 +12,7 @@
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
+#include <linux/seqlock.h>
 
 #include "i915_active_types.h"
 #include "i915_utils.h"
@@ -96,6 +97,14 @@ struct intel_context {
 
 	/** sseu: Control eu/slice partitioning */
 	struct intel_sseu sseu;
+
+	/** stats: Context GPU engine busyness tracking. */
+	struct intel_context_stats {
+		seqlock_t lock;
+		bool active;
+		ktime_t start;
+		ktime_t total;
+	} stats;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 53ac3f00909a..3845093b41ee 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1594,8 +1594,19 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 
 		engine->stats.enabled_at = ktime_get();
 
-		/* XXX submission method oblivious? */
-		for (port = execlists->active; (rq = *port); port++)
+		/*
+		 * Mark currently running context as active.
+		 * XXX submission method oblivious?
+		 */
+
+		rq = NULL;
+		port = execlists->active;
+		rq = *port;
+		if (rq)
+			__intel_context_stats_start(rq->context,
+						    engine->stats.enabled_at);
+
+		for (; (rq = *port); port++)
 			engine->stats.active++;
 
 		for (port = execlists->pending; (rq = *port); port++) {
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 13941d1c0a4a..5f7bf4cf86cd 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1237,6 +1237,32 @@ static void intel_context_update_runtime(struct intel_context *ce)
 	ce->runtime.total += dt;
 }
 
+static void intel_context_stats_start(struct intel_context *ce)
+{
+	struct intel_context_stats *stats = &ce->stats;
+	unsigned long flags;
+
+	write_seqlock_irqsave(&stats->lock, flags);
+	__intel_context_stats_start(ce, ktime_get());
+	write_sequnlock_irqrestore(&stats->lock, flags);
+}
+
+static void intel_context_stats_stop(struct intel_context *ce)
+{
+	struct intel_context_stats *stats = &ce->stats;
+	unsigned long flags;
+
+	if (!READ_ONCE(stats->active))
+		return;
+
+	write_seqlock_irqsave(&stats->lock, flags);
+	GEM_BUG_ON(!READ_ONCE(stats->active));
+	stats->total = ktime_add(stats->total,
+				 ktime_sub(ktime_get(), stats->start));
+	stats->active = false;
+	write_sequnlock_irqrestore(&stats->lock, flags);
+}
+
 static inline struct intel_engine_cs *
 __execlists_schedule_in(struct i915_request *rq)
 {
@@ -1304,7 +1330,7 @@ static inline void
 __execlists_schedule_out(struct i915_request *rq,
 			 struct intel_engine_cs * const engine)
 {
-	struct intel_context * const ce = rq->context;
+	struct intel_context *ce = rq->context;
 
 	/*
 	 * NB process_csb() is not under the engine->active.lock and hence
@@ -1322,6 +1348,7 @@ __execlists_schedule_out(struct i915_request *rq,
 
 	intel_context_update_runtime(ce);
 	intel_engine_context_out(engine);
+	intel_context_stats_stop(ce);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 	intel_gt_pm_put_async(engine->gt);
 
@@ -1797,6 +1824,11 @@ active_timeslice(const struct intel_engine_cs *engine)
 
 static void set_timeslice(struct intel_engine_cs *engine)
 {
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	if (*execlists->active)
+		intel_context_stats_start((*execlists->active)->context);
+
 	if (!intel_engine_has_timeslices(engine))
 		return;
 
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (8 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Accumulate software tracked runtime from abandoned engines and destroyed
contexts (same as we previously did for pphwsp runtimes).

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
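The hand-off below in condensed form (a sketch using only constructs
from this patch): runtime is banked in two hops as objects die, each
hop a plain atomic64_add() so the free paths stay lock free.

	/* intel_context freed with its engines -> parent GEM context. */
	atomic64_add(ktime_to_ns(intel_context_get_busy_time(ce)),
		     &ctx->past_sw_runtime[ce->engine->uabi_class]);

	/* GEM context freed -> parent DRM client. */
	for (i = 0; i < ARRAY_SIZE(client->past_sw_runtime); i++)
		atomic64_add(atomic64_read(&ctx->past_sw_runtime[i]),
			     &client->past_sw_runtime[i]);
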
 drivers/gpu/drm/i915/gem/i915_gem_context.c       | 11 ++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  5 +++++
 drivers/gpu/drm/i915/i915_drm_client.h            |  5 +++++
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index d3887712f8c3..abf8c777041d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -267,6 +267,10 @@ static void free_engines_rcu(struct rcu_head *rcu)
 
 		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
 		atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
+
+		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_sw_runtime));
+		atomic64_add(ktime_to_ns(intel_context_get_busy_time(ce)),
+			     &ctx->past_sw_runtime[class]);
 	}
 
 	i915_gem_context_put(ctx);
@@ -326,9 +330,14 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
 		/* Transfer accumulated runtime to the parent drm client. */
 		BUILD_BUG_ON(ARRAY_SIZE(client->past_runtime) !=
 			     ARRAY_SIZE(ctx->past_runtime));
-		for (i = 0; i < ARRAY_SIZE(client->past_runtime); i++)
+		BUILD_BUG_ON(ARRAY_SIZE(client->past_sw_runtime) !=
+			     ARRAY_SIZE(ctx->past_sw_runtime));
+		for (i = 0; i < ARRAY_SIZE(client->past_runtime); i++) {
 			atomic64_add(atomic64_read(&ctx->past_runtime[i]),
 				     &client->past_runtime[i]);
+			atomic64_add(atomic64_read(&ctx->past_sw_runtime[i]),
+				     &client->past_sw_runtime[i]);
+		}
 
 		i915_drm_client_put(client);
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
index 255fcc469d9b..fac11b208ea9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
@@ -185,6 +185,11 @@ struct i915_gem_context {
 	 * @past_runtime: Accumulation of freed intel_context pphwsp runtimes.
 	 */
 	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
+
+	/**
+	 * @past_sw_runtime: Accumulation of freed intel_context runtimes.
+	 */
+	atomic64_t past_sw_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 #endif /* __I915_GEM_CONTEXT_TYPES_H__ */
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index da83259170e7..aa1e446d0376 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -66,6 +66,11 @@ struct i915_drm_client {
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
 	atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
+
+	/**
+	 * @past_sw_runtime: Accumulation of runtimes from closed contexts.
+	 */
+	atomic64_t past_sw_runtime[MAX_ENGINE_CLASS + 1];
 };
 
 static inline struct i915_drm_client *
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (9 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-09 18:31 ` [Intel-gfx] [RFC 12/12] compare runtimes Tvrtko Ursulin
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

When available, prefer software tracked context busyness because it also
provides visibility into currently executing contexts.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
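Because the software tracked values include contexts which are executing
right now, the sysfs files can simply be sampled for a live view. A
minimal userspace sketch (the card number, client id and engine class
are made-up values for illustration):

	#include <stdio.h>
	#include <unistd.h>

	static unsigned long long read_busy_ns(const char *path)
	{
		unsigned long long v = 0;
		FILE *f = fopen(path, "r");

		if (f) {
			if (fscanf(f, "%llu", &v) != 1)
				v = 0;
			fclose(f);
		}

		return v;
	}

	int main(void)
	{
		/* Hypothetical client 7, render class 0. */
		const char *p = "/sys/class/drm/card0/clients/7/busy/0";
		unsigned long long t0 = read_busy_ns(p);

		sleep(1);
		printf("%.2f%% busy\n",
		       (read_busy_ns(p) - t0) / 1e9 * 100.0);

		return 0;
	}
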
 drivers/gpu/drm/i915/i915_drm_client.c | 83 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_drm_client.h |  2 +
 2 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 6df5a21f5d4e..c6b463650ba2 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -91,6 +91,46 @@ show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
 }
 
+static u64
+sw_busy_add(struct i915_gem_context *ctx, unsigned int class)
+{
+	struct i915_gem_engines *engines = rcu_dereference(ctx->engines);
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	ktime_t total = 0;
+
+	for_each_gem_engine(ce, engines, it) {
+		if (ce->engine->uabi_class == class)
+			total = ktime_add(total,
+					  intel_context_get_busy_time(ce));
+	}
+
+	return ktime_to_ns(total);
+}
+
+static ssize_t
+show_client_sw_busy(struct device *kdev,
+		    struct device_attribute *attr,
+		    char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	unsigned int class = i915_attr->engine_class;
+	struct i915_drm_client *client = i915_attr->client;
+	u64 total = atomic64_read(&client->past_sw_runtime[class]);
+	struct list_head *list = &client->ctx_list;
+	struct i915_gem_context *ctx;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, list, client_link) {
+		total += atomic64_read(&ctx->past_sw_runtime[class]);
+		total += sw_busy_add(ctx, class);
+	}
+	rcu_read_unlock();
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", total);
+}
+
 static const char *uabi_class_names[] = {
 	[I915_ENGINE_CLASS_RENDER] = "0",
 	[I915_ENGINE_CLASS_COPY] = "1",
@@ -146,11 +186,39 @@ __i915_drm_client_register(struct i915_drm_client *client,
 		goto err_attr;
 
 	if (HAS_LOGICAL_RING_CONTEXTS(i915)) {
+		struct intel_engine_cs *engine;
+		bool sw_stats = true;
+
 		client->busy_root =
 			kobject_create_and_add("busy", client->root);
 		if (!client->busy_root)
 			goto err_attr;
 
+		/* Enable busy stats on all engines. */
+		i = 0;
+		for_each_uabi_engine(engine, i915) {
+			ret = intel_enable_engine_stats(engine);
+			if (ret) {
+				int j;
+
+				/* Unwind if not available. */
+				j = 0;
+				for_each_uabi_engine(engine, i915) {
+					if (j++ == i)
+						break;
+
+					intel_disable_engine_stats(engine);
+				}
+
+				sw_stats = false;
+				dev_notice_once(i915->drm.dev,
+						"Engine busy stats not available! (%d)",
+						ret);
+				break;
+			}
+			i++;
+		}
+
 		for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
 			struct i915_engine_busy_attribute *i915_attr =
 				&client->attr.busy[i];
@@ -168,13 +236,17 @@ __i915_drm_client_register(struct i915_drm_client *client,
 
 			attr->attr.name = uabi_class_names[i];
 			attr->attr.mode = 0444;
-			attr->show = show_client_busy;
+			attr->show = sw_stats ?
+				     show_client_sw_busy :
+				     show_client_busy;
 
 			ret = sysfs_create_file(client->busy_root,
 						(struct attribute *)attr);
 			if (ret)
 				goto err_busy;
 		}
+
+		client->sw_busy = sw_stats;
 	}
 
 	return 0;
@@ -200,6 +272,15 @@ __i915_drm_client_unregister(struct i915_drm_client *client)
 	if (!client->root)
 		return; /* fbdev client or error during drm open */
 
+	if (client->busy_root && client->sw_busy) {
+		struct drm_i915_private *i915 =
+			container_of(client->clients, typeof(*i915), clients);
+		struct intel_engine_cs *engine;
+
+		for_each_uabi_engine(engine, i915)
+			intel_disable_engine_stats(engine);
+	}
+
 	kobject_put(fetch_and_zero(&client->busy_root));
 	kobject_put(fetch_and_zero(&client->root));
 }
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index aa1e446d0376..bc15f371715f 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -62,6 +62,8 @@ struct i915_drm_client {
 		struct i915_engine_busy_attribute busy[MAX_ENGINE_CLASS + 1];
 	} attr;
 
+	bool sw_busy;
+
 	/**
 	 * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
 	 */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [Intel-gfx] [RFC 12/12] compare runtimes
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (10 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
@ 2020-03-09 18:31 ` Tvrtko Ursulin
  2020-03-09 19:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5) Patchwork
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 18:31 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
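The comparison is only meaningful in common units: the pphwsp total is
in command streamer timestamp ticks, while the software total is ktime
based nanoseconds. The conversion the patch applies, as a sketch:

	/* Ticks -> ns, using the platform CS timestamp period. */
	u64 pphwsp_ns = ce->runtime.total *
			RUNTIME_INFO(ctx->i915)->cs_timestamp_period_ns;
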
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index abf8c777041d..1c41079dcf17 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -264,13 +264,22 @@ static void free_engines_rcu(struct rcu_head *rcu)
 	/* Transfer accumulated runtime to the parent GEM context. */
 	for_each_gem_engine(ce, engines, it) {
 		unsigned int class = ce->engine->uabi_class;
+		u64 runtime[2];
 
 		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
-		atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
+		runtime[0] = ce->runtime.total;
+		atomic64_add(runtime[0], &ctx->past_runtime[class]);
+		runtime[0] *= RUNTIME_INFO(ctx->i915)->cs_timestamp_period_ns;
 
 		GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_sw_runtime));
-		atomic64_add(ktime_to_ns(intel_context_get_busy_time(ce)),
-			     &ctx->past_sw_runtime[class]);
+		runtime[1] = ktime_to_ns(intel_context_get_busy_time(ce));
+		atomic64_add(runtime[1], &ctx->past_sw_runtime[class]);
+
+		if (runtime[0] || runtime[1])
+			printk("%p class%u %llums vs %llums\n",
+			       ce, class,
+			       runtime[0] / 1000000,
+			       runtime[1] / 1000000);
 	}
 
 	i915_gem_context_put(ctx);
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5)
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (11 preceding siblings ...)
  2020-03-09 18:31 ` [Intel-gfx] [RFC 12/12] compare runtimes Tvrtko Ursulin
@ 2020-03-09 19:05 ` Patchwork
  2020-03-09 19:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2020-03-09 19:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (rev5)
URL   : https://patchwork.freedesktop.org/series/70977/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
a60fd0a6e3d1 drm/i915: Expose list of clients in sysfs
-:68: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#68: 
new file mode 100644

-:73: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#73: FILE: drivers/gpu/drm/i915/i915_drm_client.c:1:
+/*

-:74: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#74: FILE: drivers/gpu/drm/i915/i915_drm_client.c:2:
+ * SPDX-License-Identifier: MIT

-:234: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#234: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+/*

-:235: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#235: FILE: drivers/gpu/drm/i915/i915_drm_client.h:2:
+ * SPDX-License-Identifier: MIT

-:252: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#252: FILE: drivers/gpu/drm/i915/i915_drm_client.h:19:
+	struct mutex lock;

total: 0 errors, 5 warnings, 1 checks, 348 lines checked
c517889b70f2 drm/i915: Update client name on context create
-:174: ERROR:OPEN_BRACE: open brace '{' following struct go on the same line
#174: FILE: drivers/gpu/drm/i915/i915_drm_client.c:174:
+struct client_update_free
+{

-:216: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#216: FILE: drivers/gpu/drm/i915/i915_drm_client.c:216:
+	if (!name) {
+		drm_notice(&i915->drm,

-:258: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#258: FILE: drivers/gpu/drm/i915/i915_drm_client.h:32:
+	struct mutex update_lock;

total: 1 errors, 1 warnings, 1 checks, 219 lines checked
3867321c33f3 drm/i915: Make GEM contexts track DRM clients
eb5a71b451e7 drm/i915: Use explicit flag to mark unreachable intel_context
-:20: WARNING:MEMORY_BARRIER: memory barrier without comment
#20: FILE: drivers/gpu/drm/i915/gem/i915_gem_context.c:571:
+		smp_store_mb(ce->closed, true);

total: 0 errors, 1 warnings, 0 checks, 26 lines checked
5d9fea699f4f drm/i915: Track runtime spent in unreachable intel_contexts
555e43ccead3 drm/i915: Track runtime spent in closed GEM contexts
b1460e0c42ea drm/i915: Track all user contexts per client
-:89: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#89: FILE: drivers/gpu/drm/i915/i915_drm_client.h:43:
+	spinlock_t ctx_lock;

total: 0 errors, 0 warnings, 1 checks, 59 lines checked
23836735df75 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

-:114: WARNING:STATIC_CONST_CHAR_ARRAY: static const char * array should probably be static const char * const
#114: FILE: drivers/gpu/drm/i915/i915_drm_client.c:94:
+static const char *uabi_class_names[] = {

total: 0 errors, 2 warnings, 0 checks, 150 lines checked
2bd7560f3275 drm/i915: Track per-context engine busyness
20e5180a4335 drm/i915: Carry over past software tracked context runtime
8c5b89e058b5 drm/i915: Prefer software tracked context busyness
c090e12915a4 compare runtimes
-:7: WARNING:COMMIT_MESSAGE: Missing commit description - Add an appropriate one

-:31: WARNING:PRINTK_WITHOUT_KERN_LEVEL: printk() should include KERN_<LEVEL> facility level
#31: FILE: drivers/gpu/drm/i915/gem/i915_gem_context.c:279:
+			printk("%p class%u %llums vs %llums\n",

total: 0 errors, 2 warnings, 0 checks, 25 lines checked

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Per client engine busyness (rev5)
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (12 preceding siblings ...)
  2020-03-09 19:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5) Patchwork
@ 2020-03-09 19:13 ` Patchwork
  2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2020-03-09 19:13 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (rev5)
URL   : https://patchwork.freedesktop.org/series/70977/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.6.0
Commit: drm/i915: Expose list of clients in sysfs
Okay!

Commit: drm/i915: Update client name on context create
+drivers/gpu/drm/i915/i915_drm_client.c:102:23:    expected struct pid *pid
+drivers/gpu/drm/i915/i915_drm_client.c:102:23:    got struct pid [noderef] <asn:4> *pid
+drivers/gpu/drm/i915/i915_drm_client.c:102:23: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/i915/i915_drm_client.c:211:17: error: incompatible types in comparison expression (different address spaces)

Commit: drm/i915: Make GEM contexts track DRM clients
Okay!

Commit: drm/i915: Use explicit flag to mark unreachable intel_context
+drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:2551:24: error: incompatible types in comparison expression (different address spaces)

Commit: drm/i915: Track runtime spent in unreachable intel_contexts
Okay!

Commit: drm/i915: Track runtime spent in closed GEM contexts
Okay!

Commit: drm/i915: Track all user contexts per client
Okay!

Commit: drm/i915: Expose per-engine client busyness
Okay!

Commit: drm/i915: Track per-context engine busyness
Okay!

Commit: drm/i915: Carry over past software tracked context runtime
Okay!

Commit: drm/i915: Prefer software tracked context busyness
Okay!

Commit: compare runtimes
Okay!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2020-03-09 21:34   ` Chris Wilson
  2020-03-09 23:26     ` Tvrtko Ursulin
  2020-03-10 11:41   ` Chris Wilson
  2020-03-10 17:59   ` Chris Wilson
  2 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-09 21:34 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> +struct i915_drm_client *
> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
> +{
> +       struct i915_drm_client *client;
> +       int ret;
> +
> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
> +       if (!client)
> +               return ERR_PTR(-ENOMEM);
> +
> +       kref_init(&client->kref);
> +       client->clients = clients;
> +
> +       ret = mutex_lock_interruptible(&clients->lock);
> +       if (ret)
> +               goto err_id;
> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);

So what's next_id used for that explains having the over-arching mutex?
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 00/12] Per client engine busyness
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (13 preceding siblings ...)
  2020-03-09 19:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2020-03-09 22:02 ` Chris Wilson
  2020-03-09 23:30   ` Tvrtko Ursulin
  2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5) Patchwork
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-09 22:02 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:17)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Another re-spin of the per-client engine busyness series. Highlights from this
> version:
> 
>  * Different way of tracking runtime of exited/unreachable context. This time
>    round I accumulate those per context/client and engine class, but active
>    contexts are kept in a list and tallied on sysfs reads.
>  * I had to do a small tweak in the engine release code since I needed the
>    GEM context for a bit longer. (So I can accumulate the intel_context runtime
>    into it as it is getting freed, because context complete can be late.)
>  * PPHWSP method is back and even comes first in the series this time. It still
>    can't show the currently running workloads but the software tracking method
>    suffers from the CSB processing delay with high frequency and very short
>    batches.

I bet it's ksoftirqd, but this could be quite problematic for us.
gem_exec_nop/foo? I wonder if this also ties into how much harder it is
to saturate the GPU with nops from userspace than it is from the kernel.
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 21:34   ` Chris Wilson
@ 2020-03-09 23:26     ` Tvrtko Ursulin
  2020-03-10  0:13       ` Chris Wilson
  0 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 23:26 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 09/03/2020 21:34, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
>> +struct i915_drm_client *
>> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
>> +{
>> +       struct i915_drm_client *client;
>> +       int ret;
>> +
>> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
>> +       if (!client)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       kref_init(&client->kref);
>> +       client->clients = clients;
>> +
>> +       ret = mutex_lock_interruptible(&clients->lock);
>> +       if (ret)
>> +               goto err_id;
>> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
> 
> So what's next_id used for that explains having the over-arching mutex?

It's to give out client ids "cyclically" - I had apparently 
misunderstood what xa_alloc_cyclic is supposed to do before. I thought 
after giving out id 1 it would give out 2 next, even if 1 was returned 
to the pool in the meantime. But it doesn't, so I need to track the 
start point for the next search with "next".

I want this to make intel_gpu_top's life easier, so it doesn't have to 
deal with id recycling for all practical purposes.

And a peek into the xa implementation told me the internal lock is not 
protecting "next".

I could stick with one lock and not use the internal one if I used it on 
the release path as well.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 00/12] Per client engine busyness
  2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
@ 2020-03-09 23:30   ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-09 23:30 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 09/03/2020 22:02, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:17)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Another re-spin of the per-client engine busyness series. Highlights from this
>> version:
>>
>>   * Different way of tracking runtime of exited/unreachable context. This time
>>     round I accumulate those per context/client and engine class, but active
>>     contexts are kept in a list and tallied on sysfs reads.
>>   * I had to do a small tweak in the engine release code since I needed the
>>     GEM context for a bit longer. (So I can accumulate the intel_context runtime
>>     into it as it is getting freed, because context complete can be late.)
>>   * PPHWSP method is back and even comes first in the series this time. It still
>>     can't show the currently running workloads but the software tracking method
>>     suffers from the CSB processing delay with high frequency and very short
>>     batches.
> 
> I bet it's ksoftirqd, but this could be quite problematic for us.
> gem_exec_nop/foo? I wonder if this also ties into how much harder it is
> to saturate the GPU with nops from userspace than it is from the kernel.

At least disappointing, or even problematic, yes. I had a cunning plan 
though: to report back max(sw_runtime, pphwsp_runtime). Apart from it 
not being that cunning when things start to systematically drift, at 
which point it effectively becomes the pphwsp runtime. Oh well, I don't 
know at the moment; we might have to live with pphwsp only.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 23:26     ` Tvrtko Ursulin
@ 2020-03-10  0:13       ` Chris Wilson
  2020-03-10  8:44         ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10  0:13 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 23:26:34)
> 
> On 09/03/2020 21:34, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> >> +struct i915_drm_client *
> >> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
> >> +{
> >> +       struct i915_drm_client *client;
> >> +       int ret;
> >> +
> >> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
> >> +       if (!client)
> >> +               return ERR_PTR(-ENOMEM);
> >> +
> >> +       kref_init(&client->kref);
> >> +       client->clients = clients;
> >> +
> >> +       ret = mutex_lock_interruptible(&clients->lock);
> >> +       if (ret)
> >> +               goto err_id;
> >> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> >> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
> > 
> > So what's next_id used for that explains having the over-arching mutex?
> 
> It's to give out client id's "cyclically" - before I apparently 
> misunderstood what xa_alloc_cyclic is supposed to do - I thought after 
> giving out id 1 it would give out 2 next, even if 1 was returned to the 
> pool in the meantime. But it doesn't, I need to track the start point 
> for the next search with "next".

Ok. A requirement of the API for the external counter.
 
> I want this to make intel_gpu_top's life easier, so it doesn't have to 
> deal with id recycling for all practical purposes.

Fair enough. I only worry about the radix nodes and sparse ids :)
 
> And a peek into xa implementation told me the internal lock is not 
> protecting "next.

See xa_alloc_cyclic(), seems to cover __xa_alloc_cycle (where *next is
manipulated) under the xa_lock.
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-10  0:13       ` Chris Wilson
@ 2020-03-10  8:44         ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10  8:44 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 00:13, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 23:26:34)
>>
>> On 09/03/2020 21:34, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
>>>> +struct i915_drm_client *
>>>> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
>>>> +{
>>>> +       struct i915_drm_client *client;
>>>> +       int ret;
>>>> +
>>>> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
>>>> +       if (!client)
>>>> +               return ERR_PTR(-ENOMEM);
>>>> +
>>>> +       kref_init(&client->kref);
>>>> +       client->clients = clients;
>>>> +
>>>> +       ret = mutex_lock_interruptible(&clients->lock);
>>>> +       if (ret)
>>>> +               goto err_id;
>>>> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>>>> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
>>>
>>> So what's next_id used for that explains having the over-arching mutex?
>>
>> It's to give out client id's "cyclically" - before I apparently
>> misunderstood what xa_alloc_cyclic is supposed to do - I thought after
>> giving out id 1 it would give out 2 next, even if 1 was returned to the
>> pool in the meantime. But it doesn't, I need to track the start point
>> for the next search with "next".
> 
> Ok. A requirement of the API for the external counter.
>   
>> I want this to make intel_gpu_top's life easier, so it doesn't have to
>> deal with id recycling for all practical purposes.
> 
> Fair enough. I only worry about the radix nodes and sparse ids :)

I only found in the docs that it should be efficient when the data is 
"densely clustered". And given that it does appear to be based on a 
tree-like structure, I thought that means a few clusters of ids should 
be okay. But maybe in practice we would have more than a few clusters. 
I guess that could indeed be the case.. hm.. Maybe I could use a list 
and keep a pointer to the last entry. When the u32 next wraps I reset 
to the list head. The downside is that any search for the next free id 
potentially has to walk over one used-up cluster. That may be passable 
apart from IGT-type stress tests.

>> And a peek into xa implementation told me the internal lock is not
>> protecting "next.
> 
> See xa_alloc_cyclic(), seems to cover __xa_alloc_cycle (where *next is
> manipulated) under the xa_lock.

Ha, true, not sure how I went past top-level and forgot what's in there. :)

Regards,

Tvrtko
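
For reference, the allocation pattern discussed above in sketch form
(illustrative only; "client" stands for whatever entry is stored):

	struct xarray xa;
	u32 id, next = 0;
	int ret;

	xa_init_flags(&xa, XA_FLAGS_ALLOC);

	/*
	 * *next is advanced under xa_lock, so ids are handed out
	 * cyclically and not reused until the u32 space wraps.
	 */
	ret = xa_alloc_cyclic(&xa, &id, client, xa_limit_32b, &next,
			      GFP_KERNEL);
	if (ret < 0)
		return ret; /* -ENOMEM, or -EBUSY once the space is full */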

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2020-03-09 21:34   ` Chris Wilson
@ 2020-03-10 11:41   ` Chris Wilson
  2020-03-10 12:04     ` Tvrtko Ursulin
  2020-03-10 17:59   ` Chris Wilson
  2 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 11:41 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> +static int
> +__i915_drm_client_register(struct i915_drm_client *client,
> +                          struct task_struct *task)
> +{
> +       struct i915_drm_clients *clients = client->clients;
> +       struct device_attribute *attr;
> +       int ret = -ENOMEM;
> +       char idstr[32];
> +
> +       client->pid = get_task_pid(task, PIDTYPE_PID);
> +
> +       client->name = kstrdup(task->comm, GFP_KERNEL);
> +       if (!client->name)
> +               goto err_name;
> +
> +       if (!clients->root)
> +               return 0; /* intel_fbdev_init registers a client before sysfs */
> +
> +       snprintf(idstr, sizeof(idstr), "%u", client->id);
> +       client->root = kobject_create_and_add(idstr, clients->root);
> +       if (!client->root)
> +               goto err_client;
> +
> +       attr = &client->attr.name;
> +       sysfs_attr_init(&attr->attr);
> +       attr->attr.name = "name";
> +       attr->attr.mode = 0444;
> +       attr->show = show_client_name;
> +
> +       ret = sysfs_create_file(client->root, (struct attribute *)attr);
> +       if (ret)
> +               goto err_attr;
> +
> +       attr = &client->attr.pid;
> +       sysfs_attr_init(&attr->attr);
> +       attr->attr.name = "pid";
> +       attr->attr.mode = 0444;
> +       attr->show = show_client_pid;
> +
> +       ret = sysfs_create_file(client->root, (struct attribute *)attr);
> +       if (ret)
> +               goto err_attr;

How do we think we will extend this (e.g. for client/1/(trace,debug))?

i915_drm_client_add_attr() ?

Or should we put all the attr here and make them known a priori?

I think I prefer i915_drm_client_add_attr, but that will also require a
notification chain? And that smells like overengineering.

At any rate we have 2 other definite users around the corner for the
client sysfs, so we should look at what API suits us.
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-10 11:41   ` Chris Wilson
@ 2020-03-10 12:04     ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10 12:04 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 11:41, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
>> +static int
>> +__i915_drm_client_register(struct i915_drm_client *client,
>> +                          struct task_struct *task)
>> +{
>> +       struct i915_drm_clients *clients = client->clients;
>> +       struct device_attribute *attr;
>> +       int ret = -ENOMEM;
>> +       char idstr[32];
>> +
>> +       client->pid = get_task_pid(task, PIDTYPE_PID);
>> +
>> +       client->name = kstrdup(task->comm, GFP_KERNEL);
>> +       if (!client->name)
>> +               goto err_name;
>> +
>> +       if (!clients->root)
>> +               return 0; /* intel_fbdev_init registers a client before sysfs */
>> +
>> +       snprintf(idstr, sizeof(idstr), "%u", client->id);
>> +       client->root = kobject_create_and_add(idstr, clients->root);
>> +       if (!client->root)
>> +               goto err_client;
>> +
>> +       attr = &client->attr.name;
>> +       sysfs_attr_init(&attr->attr);
>> +       attr->attr.name = "name";
>> +       attr->attr.mode = 0444;
>> +       attr->show = show_client_name;
>> +
>> +       ret = sysfs_create_file(client->root, (struct attribute *)attr);
>> +       if (ret)
>> +               goto err_attr;
>> +
>> +       attr = &client->attr.pid;
>> +       sysfs_attr_init(&attr->attr);
>> +       attr->attr.name = "pid";
>> +       attr->attr.mode = 0444;
>> +       attr->show = show_client_pid;
>> +
>> +       ret = sysfs_create_file(client->root, (struct attribute *)attr);
>> +       if (ret)
>> +               goto err_attr;
> 
> How do we think we will extend this (e.g. for client/1/(trace,debug))?
> 
> i915_drm_client_add_attr() ?
> 
> Or should we put all the attr here and make them known a priori?
> 
> I think I prefer i915_drm_client_add_attr, but that will also require a
> notification chain? And that smells like overengineering.
> 
> At any rate we have 2 other definite users around the corner for the
> client sysfs, so we should look at what API suits us.

It sounds acceptable to me to just call their setup from here. 
__i915_drm_client_register sounds like a clear enough place.

We could potentially just split the function into "client core" and 
"add-on users" for better readability:

__i915_drm_client_register
{
	...register_client();

	...register_client_busy(client, ...);

	...register_client_xxx(client, ...);
}

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5)
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (14 preceding siblings ...)
  2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
@ 2020-03-10 15:11 ` Patchwork
  2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2020-03-10 15:11 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (rev5)
URL   : https://patchwork.freedesktop.org/series/70977/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8106 -> Patchwork_16896
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_16896 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_16896, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_16896:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_busy@busy-all:
    - fi-bsw-nick:        [PASS][1] -> [DMESG-WARN][2] +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-bsw-nick/igt@gem_busy@busy-all.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bsw-nick/igt@gem_busy@busy-all.html
    - fi-bdw-5557u:       NOTRUN -> [DMESG-WARN][3] +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bdw-5557u/igt@gem_busy@busy-all.html
    - fi-icl-guc:         [PASS][4] -> [DMESG-WARN][5] +1 similar issue
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-guc/igt@gem_busy@busy-all.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-guc/igt@gem_busy@busy-all.html
    - fi-skl-6770hq:      [PASS][6] -> [DMESG-WARN][7] +1 similar issue
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-6770hq/igt@gem_busy@busy-all.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-6770hq/igt@gem_busy@busy-all.html
    - fi-bsw-kefka:       [PASS][8] -> [DMESG-WARN][9] +1 similar issue
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-bsw-kefka/igt@gem_busy@busy-all.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bsw-kefka/igt@gem_busy@busy-all.html
    - fi-kbl-guc:         [PASS][10] -> [DMESG-WARN][11] +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-guc/igt@gem_busy@busy-all.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-guc/igt@gem_busy@busy-all.html
    - fi-kbl-x1275:       [PASS][12] -> [DMESG-WARN][13] +1 similar issue
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-x1275/igt@gem_busy@busy-all.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-x1275/igt@gem_busy@busy-all.html
    - fi-hsw-peppy:       [PASS][14] -> [DMESG-WARN][15] +1 similar issue
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-hsw-peppy/igt@gem_busy@busy-all.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-hsw-peppy/igt@gem_busy@busy-all.html
    - fi-icl-u2:          [PASS][16] -> [DMESG-WARN][17]
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-u2/igt@gem_busy@busy-all.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-u2/igt@gem_busy@busy-all.html
    - fi-icl-y:           [PASS][18] -> [DMESG-WARN][19] +1 similar issue
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-y/igt@gem_busy@busy-all.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-y/igt@gem_busy@busy-all.html
    - fi-glk-dsi:         [PASS][20] -> [DMESG-WARN][21] +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-glk-dsi/igt@gem_busy@busy-all.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-glk-dsi/igt@gem_busy@busy-all.html
    - fi-skl-guc:         [PASS][22] -> [DMESG-WARN][23] +1 similar issue
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-guc/igt@gem_busy@busy-all.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-guc/igt@gem_busy@busy-all.html

  * igt@gem_close_race@basic-process:
    - fi-gdg-551:         [PASS][24] -> [DMESG-WARN][25]
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-gdg-551/igt@gem_close_race@basic-process.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-gdg-551/igt@gem_close_race@basic-process.html

  * igt@i915_module_load@reload:
    - fi-kbl-8809g:       [PASS][26] -> [DMESG-WARN][27] +1 similar issue
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-8809g/igt@i915_module_load@reload.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-8809g/igt@i915_module_load@reload.html
    - fi-cml-u2:          [PASS][28] -> [DMESG-WARN][29] +1 similar issue
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cml-u2/igt@i915_module_load@reload.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cml-u2/igt@i915_module_load@reload.html
    - fi-blb-e6850:       [PASS][30] -> [DMESG-WARN][31] +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-blb-e6850/igt@i915_module_load@reload.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-blb-e6850/igt@i915_module_load@reload.html
    - fi-byt-j1900:       [PASS][32] -> [DMESG-WARN][33] +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-byt-j1900/igt@i915_module_load@reload.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-byt-j1900/igt@i915_module_load@reload.html
    - fi-cfl-8700k:       [PASS][34] -> [DMESG-WARN][35] +1 similar issue
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cfl-8700k/igt@i915_module_load@reload.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cfl-8700k/igt@i915_module_load@reload.html
    - fi-apl-guc:         [PASS][36] -> [DMESG-WARN][37] +1 similar issue
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-apl-guc/igt@i915_module_load@reload.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-apl-guc/igt@i915_module_load@reload.html
    - fi-icl-dsi:         [PASS][38] -> [DMESG-WARN][39] +1 similar issue
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-dsi/igt@i915_module_load@reload.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-dsi/igt@i915_module_load@reload.html
    - fi-bxt-dsi:         [PASS][40] -> [DMESG-WARN][41] +1 similar issue
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-bxt-dsi/igt@i915_module_load@reload.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bxt-dsi/igt@i915_module_load@reload.html
    - fi-hsw-4770:        [PASS][42] -> [DMESG-WARN][43] +1 similar issue
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-hsw-4770/igt@i915_module_load@reload.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-hsw-4770/igt@i915_module_load@reload.html
    - fi-cml-s:           [PASS][44] -> [DMESG-WARN][45] +1 similar issue
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cml-s/igt@i915_module_load@reload.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cml-s/igt@i915_module_load@reload.html
    - fi-pnv-d510:        [PASS][46] -> [DMESG-WARN][47] +1 similar issue
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-pnv-d510/igt@i915_module_load@reload.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-pnv-d510/igt@i915_module_load@reload.html
    - fi-cfl-guc:         [PASS][48] -> [DMESG-WARN][49] +1 similar issue
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cfl-guc/igt@i915_module_load@reload.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cfl-guc/igt@i915_module_load@reload.html
    - fi-bsw-n3050:       [PASS][50] -> [DMESG-WARN][51] +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-bsw-n3050/igt@i915_module_load@reload.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bsw-n3050/igt@i915_module_load@reload.html
    - fi-ilk-650:         [PASS][52] -> [DMESG-WARN][53] +1 similar issue
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-ilk-650/igt@i915_module_load@reload.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-ilk-650/igt@i915_module_load@reload.html
    - fi-skl-6700k2:      [PASS][54] -> [DMESG-WARN][55] +1 similar issue
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-6700k2/igt@i915_module_load@reload.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-6700k2/igt@i915_module_load@reload.html
    - fi-snb-2600:        NOTRUN -> [DMESG-WARN][56] +1 similar issue
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-snb-2600/igt@i915_module_load@reload.html

  * igt@i915_selftest@live@mman:
    - fi-skl-6770hq:      [PASS][57] -> [INCOMPLETE][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-6770hq/igt@i915_selftest@live@mman.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-6770hq/igt@i915_selftest@live@mman.html
    - fi-bdw-5557u:       NOTRUN -> [INCOMPLETE][59]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-bdw-5557u/igt@i915_selftest@live@mman.html
    - fi-ilk-650:         [PASS][60] -> [INCOMPLETE][61]
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-ilk-650/igt@i915_selftest@live@mman.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-ilk-650/igt@i915_selftest@live@mman.html
    - fi-cfl-guc:         [PASS][62] -> [INCOMPLETE][63]
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cfl-guc/igt@i915_selftest@live@mman.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cfl-guc/igt@i915_selftest@live@mman.html
    - fi-skl-guc:         [PASS][64] -> [INCOMPLETE][65]
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-guc/igt@i915_selftest@live@mman.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-guc/igt@i915_selftest@live@mman.html
    - fi-skl-6700k2:      [PASS][66] -> [INCOMPLETE][67]
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-skl-6700k2/igt@i915_selftest@live@mman.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-skl-6700k2/igt@i915_selftest@live@mman.html
    - fi-icl-y:           [PASS][68] -> [INCOMPLETE][69]
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-y/igt@i915_selftest@live@mman.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-y/igt@i915_selftest@live@mman.html
    - fi-cfl-8700k:       [PASS][70] -> [INCOMPLETE][71]
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cfl-8700k/igt@i915_selftest@live@mman.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cfl-8700k/igt@i915_selftest@live@mman.html
    - fi-kbl-x1275:       [PASS][72] -> [INCOMPLETE][73]
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-x1275/igt@i915_selftest@live@mman.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-x1275/igt@i915_selftest@live@mman.html
    - fi-icl-dsi:         [PASS][74] -> [INCOMPLETE][75]
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-icl-dsi/igt@i915_selftest@live@mman.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-icl-dsi/igt@i915_selftest@live@mman.html

  * igt@runner@aborted:
    - fi-pnv-d510:        NOTRUN -> [FAIL][76]
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-pnv-d510/igt@runner@aborted.html
    - fi-kbl-x1275:       NOTRUN -> [FAIL][77]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-x1275/igt@runner@aborted.html
    - fi-snb-2600:        NOTRUN -> [FAIL][78]
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-snb-2600/igt@runner@aborted.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@gem_busy@busy-all:
    - {fi-ehl-1}:         [PASS][79] -> [DMESG-WARN][80] +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-ehl-1/igt@gem_busy@busy-all.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-ehl-1/igt@gem_busy@busy-all.html

  * igt@i915_module_load@reload:
    - {fi-tgl-dsi}:       [PASS][81] -> [DMESG-WARN][82] +1 similar issue
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-dsi/igt@i915_module_load@reload.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-dsi/igt@i915_module_load@reload.html
    - {fi-tgl-u}:         [PASS][83] -> [DMESG-WARN][84] +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-u/igt@i915_module_load@reload.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-u/igt@i915_module_load@reload.html
    - {fi-kbl-7560u}:     [PASS][85] -> [DMESG-WARN][86] +1 similar issue
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-7560u/igt@i915_module_load@reload.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-7560u/igt@i915_module_load@reload.html

  * igt@i915_selftest@live@mman:
    - {fi-ehl-1}:         [PASS][87] -> [INCOMPLETE][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-ehl-1/igt@i915_selftest@live@mman.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-ehl-1/igt@i915_selftest@live@mman.html
    - {fi-tgl-u}:         [PASS][89] -> [INCOMPLETE][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-u/igt@i915_selftest@live@mman.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-u/igt@i915_selftest@live@mman.html
    - {fi-kbl-7560u}:     [PASS][91] -> [INCOMPLETE][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-7560u/igt@i915_selftest@live@mman.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-7560u/igt@i915_selftest@live@mman.html

  * igt@runner@aborted:
    - {fi-ehl-1}:         NOTRUN -> [FAIL][93]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-ehl-1/igt@runner@aborted.html
    - {fi-kbl-7560u}:     NOTRUN -> [FAIL][94]
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-7560u/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_16896 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_busy@busy-all:
    - fi-tgl-y:           [PASS][95] -> [DMESG-WARN][96] ([CI#94]) +1 similar issue
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-y/igt@gem_busy@busy-all.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-y/igt@gem_busy@busy-all.html

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-guc:         [PASS][97] -> [FAIL][98] ([i915#138])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html

  * igt@i915_selftest@live@gtt:
    - fi-byt-j1900:       [PASS][99] -> [INCOMPLETE][100] ([i915#45])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-byt-j1900/igt@i915_selftest@live@gtt.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-byt-j1900/igt@i915_selftest@live@gtt.html

  * igt@i915_selftest@live@mman:
    - fi-cml-s:           [PASS][101] -> [INCOMPLETE][102] ([i915#283])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cml-s/igt@i915_selftest@live@mman.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cml-s/igt@i915_selftest@live@mman.html
    - fi-tgl-y:           [PASS][103] -> [INCOMPLETE][104] ([CI#94])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-y/igt@i915_selftest@live@mman.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-y/igt@i915_selftest@live@mman.html
    - fi-pnv-d510:        [PASS][105] -> [INCOMPLETE][106] ([i915#299])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-pnv-d510/igt@i915_selftest@live@mman.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-pnv-d510/igt@i915_selftest@live@mman.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-cml-u2:          [PASS][107] -> [FAIL][108] ([i915#262])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-cml-u2/igt@kms_chamelium@dp-crc-fast.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-cml-u2/igt@kms_chamelium@dp-crc-fast.html

  * igt@prime_vgem@basic-sync-default:
    - fi-tgl-y:           [PASS][109] -> [DMESG-WARN][110] ([CI#94] / [i915#402])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8106/fi-tgl-y/igt@prime_vgem@basic-sync-default.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/fi-tgl-y/igt@prime_vgem@basic-sync-default.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [CI#94]: https://gitlab.freedesktop.org/gfx-ci/i915-infra/issues/94
  [i915#138]: https://gitlab.freedesktop.org/drm/intel/issues/138
  [i915#262]: https://gitlab.freedesktop.org/drm/intel/issues/262
  [i915#283]: https://gitlab.freedesktop.org/drm/intel/issues/283
  [i915#299]: https://gitlab.freedesktop.org/drm/intel/issues/299
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#45]: https://gitlab.freedesktop.org/drm/intel/issues/45


Participating hosts (44 -> 35)
------------------------------

  Additional (2): fi-bdw-5557u fi-snb-2600 
  Missing    (11): fi-hsw-4200u fi-bsw-cyan fi-bwr-2160 fi-snb-2520m fi-ctg-p8600 fi-ivb-3770 fi-elk-e7500 fi-skl-lmem fi-byt-clapper fi-bdw-samus fi-kbl-r 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8106 -> Patchwork_16896

  CI-20190529: 20190529
  CI_DRM_8106: 5b0076e8066ea8218e7857ee1aa28b0670acde94 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5504: d6788bf0404f76b66170e18eb26c85004b5ccb25 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_16896: 2d949154d397136f325a861753f0d38e0cf339ba @ git://anongit.freedesktop.org/gfx-ci/linux


== Kernel 32bit build ==

Warning: Kernel 32bit buildtest failed:
https://intel-gfx-ci.01.org/Patchwork_16896/build_32bit.log

  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  CHK     include/generated/compile.h
Kernel: arch/x86/boot/bzImage is ready  (#1)
  MODPOST 121 modules
ERROR: "__udivdi3" [drivers/gpu/drm/i915/i915.ko] undefined!
scripts/Makefile.modpost:93: recipe for target '__modpost' failed
make[1]: *** [__modpost] Error 1
Makefile:1283: recipe for target 'modules' failed
make: *** [modules] Error 2
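
Note: __udivdi3 is the helper gcc emits for an open-coded 64-bit division
on 32-bit x86; the kernel intentionally provides no such libcall, so a
plain u64 division somewhere in the series fails to link. The usual fix is
the div_u64() family from <linux/math64.h>. A minimal hedged sketch of the
idiom (illustrative only, not the actual fix in the series):

	#include <linux/math64.h>

	/*
	 * Illustrative only: "total_ns / count" would emit a __udivdi3
	 * libcall and break the 32-bit link; div_u64() works everywhere.
	 */
	static u64 avg_runtime_ns(u64 total_ns, u32 count)
	{
		return div_u64(total_ns, count);
	}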


== Linux commits ==

2d949154d397 compare runtimes
34a9e27cbdca drm/i915: Prefer software tracked context busyness
ea072e9957e0 drm/i915: Carry over past software tracked context runtime
bbf17d95a1e8 drm/i915: Track per-context engine busyness
012f704f4eaf drm/i915: Expose per-engine client busyness
c74f580b92e6 drm/i915: Track all user contexts per client
fce4aa218a33 drm/i915: Track runtime spent in closed GEM contexts
39835200fa71 drm/i915: Track runtime spent in unreachable intel_contexts
d0a0098b5e97 drm/i915: Use explicit flag to mark unreachable intel_context
7edb0602f4a7 drm/i915: Make GEM contexts track DRM clients
a69bb5d42361 drm/i915: Update client name on context create
0f6f5811f8d8 drm/i915: Expose list of clients in sysfs

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BUILD: warning for Per client engine busyness (rev5)
  2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
                   ` (15 preceding siblings ...)
  2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5) Patchwork
@ 2020-03-10 15:11 ` Patchwork
  2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
  18 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2020-03-10 15:11 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (rev5)
URL   : https://patchwork.freedesktop.org/series/70977/
State : warning

== Summary ==

CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  CHK     include/generated/compile.h
Kernel: arch/x86/boot/bzImage is ready  (#1)
  MODPOST 121 modules
ERROR: "__udivdi3" [drivers/gpu/drm/i915/i915.ko] undefined!
scripts/Makefile.modpost:93: recipe for target '__modpost' failed
make[1]: *** [__modpost] Error 1
Makefile:1283: recipe for target 'modules' failed
make: *** [modules] Error 2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16896/build_32bit.log
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context
  2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
@ 2020-03-10 15:30   ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 15:30 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:21)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 0893ce781a84..0302757396d5 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -2547,8 +2547,8 @@ static void eb_request_add(struct i915_execbuffer *eb)
>         prev = __i915_request_commit(rq);
>  
>         /* Check that the context wasn't destroyed before submission */
> -       if (likely(rcu_access_pointer(eb->context->gem_context))) {
> -               attr = eb->gem_context->sched;
> +       if (likely(!READ_ONCE(eb->context->closed))) {
> +               attr = rcu_dereference(eb->gem_context)->sched;

That's the warning. We don't have an rcu_read_lock here so it complains.

eb->gem_context is a strong ref, no rcu markup required.
(it's the eb->context->gem_context that needs annotation)
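
A simplified sketch of the two access patterns being contrasted (hedged
illustration of the rule, not the actual fix):

	/* eb->gem_context holds a reference for the whole execbuf: plain use. */
	attr = eb->gem_context->sched;

	/*
	 * An __rcu pointer like eb->context->gem_context may only be
	 * dereferenced inside an RCU read-side critical section.
	 */
	rcu_read_lock();
	ctx = rcu_dereference(eb->context->gem_context);
	if (ctx)
		attr = ctx->sched;
	rcu_read_unlock();
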
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
  2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2020-03-09 21:34   ` Chris Wilson
  2020-03-10 11:41   ` Chris Wilson
@ 2020-03-10 17:59   ` Chris Wilson
  2 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 17:59 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> +struct i915_drm_clients {
> +       struct mutex lock;
> +       struct xarray xarray;
> +       u32 next_id;
> +
> +       struct kobject *root;
> +};
> +
> +struct i915_drm_client {
> +       struct kref kref;
> +
> +       struct rcu_head rcu;
> +
> +       unsigned int id;
> +       struct pid *pid;
> +       char *name;
> +       bool closed;

After spending a couple of days with kcsan, I can predict you will want
to mark this up with WRITE_ONCE/READ_ONCE or switch to set_bit/test_bit.
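
Both options in sketch form (hedged; the flags field and the bit name are
illustrative, not from the series):

	/* Option 1: keep the bool, annotate the racy accesses for kcsan. */
	WRITE_ONCE(client->closed, true);	/* writer */
	closed = READ_ONCE(client->closed);	/* reader */

	/* Option 2: an unsigned long of flags plus atomic bitops. */
	#define I915_DRM_CLIENT_CLOSED 0
	set_bit(I915_DRM_CLIENT_CLOSED, &client->flags);		/* writer */
	closed = test_bit(I915_DRM_CLIENT_CLOSED, &client->flags);	/* reader */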

I haven't spotted anything else to complain about, and you already
suggested splitting out the attr setup :)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create
  2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
@ 2020-03-10 18:11   ` Chris Wilson
  2020-03-10 19:52     ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:11 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:19)
> @@ -92,8 +107,8 @@ __i915_drm_client_register(struct i915_drm_client *client,
>  static void
>  __i915_drm_client_unregister(struct i915_drm_client *client)
>  {
> -       put_pid(fetch_and_zero(&client->pid));
> -       kfree(fetch_and_zero(&client->name));
> +       put_pid(rcu_replace_pointer(client->pid, NULL, true));
> +       kfree(rcu_replace_pointer(client->name, NULL, true));

client_unregister is not after an RCU grace period, so what's the
protection here?
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients
  2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
@ 2020-03-10 18:20   ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:20 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:20)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> If we make GEM contexts keep a reference to i915_drm_client for the whole
> of their lifetime, we can consolidate the current task pid and name usage
> by getting it from the client.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   | 23 +++++++++++---
>  .../gpu/drm/i915/gem/i915_gem_context_types.h | 13 ++------
>  drivers/gpu/drm/i915/i915_debugfs.c           | 31 +++++++++----------
>  drivers/gpu/drm/i915/i915_gpu_error.c         | 21 +++++++------
>  4 files changed, 48 insertions(+), 40 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 2c3fd9748d39..0f4150c8d7fe 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -300,8 +300,13 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
>  
>  static void i915_gem_context_free(struct i915_gem_context *ctx)
>  {
> +       struct i915_drm_client *client = ctx->client;
> +
>         GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
>  
> +       if (client)
> +               i915_drm_client_put(client);
> +
>         spin_lock(&ctx->i915->gem.contexts.lock);
>         list_del(&ctx->link);
>         spin_unlock(&ctx->i915->gem.contexts.lock);
> @@ -311,7 +316,6 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
>         if (ctx->timeline)
>                 intel_timeline_put(ctx->timeline);
>  
> -       put_pid(ctx->pid);
>         mutex_destroy(&ctx->mutex);
>  
>         kfree_rcu(ctx, rcu);
> @@ -899,6 +903,7 @@ static int gem_context_register(struct i915_gem_context *ctx,
>                                 struct drm_i915_file_private *fpriv,
>                                 u32 *id)
>  {
> +       struct i915_drm_client *client;
>         struct i915_address_space *vm;
>         int ret;
>  
> @@ -910,15 +915,25 @@ static int gem_context_register(struct i915_gem_context *ctx,
>                 WRITE_ONCE(vm->file, fpriv); /* XXX */
>         mutex_unlock(&ctx->mutex);
>  
> -       ctx->pid = get_task_pid(current, PIDTYPE_PID);
> +       client = i915_drm_client_get(fpriv->client);
> +
> +       rcu_read_lock();
>         snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
> -                current->comm, pid_nr(ctx->pid));
> +                rcu_dereference(client->name),
> +                pid_nr(rcu_dereference(client->pid)));
> +       rcu_read_unlock();
>  
>         /* And finally expose ourselves to userspace via the idr */
>         ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL);
>         if (ret)
> -               put_pid(fetch_and_zero(&ctx->pid));
> +               goto err;
> +
> +       ctx->client = client;
>  
> +       return 0;
> +
> +err:
> +       i915_drm_client_put(client);
>         return ret;
>  }
>  
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index 28760bd03265..b0e03380c690 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -96,20 +96,13 @@ struct i915_gem_context {
>          */
>         struct i915_address_space __rcu *vm;
>  
> -       /**
> -        * @pid: process id of creator
> -        *
> -        * Note that who created the context may not be the principle user,
> -        * as the context may be shared across a local socket. However,
> -        * that should only affect the default context, all contexts created
> -        * explicitly by the client are expected to be isolated.
> -        */
> -       struct pid *pid;
> -
>         /** link: place with &drm_i915_private.context_list */
>         struct list_head link;
>         struct llist_node free_link;
>  
> +       /** client: struct i915_drm_client */
> +       struct i915_drm_client *client;
> +
>         /**
>          * @ref: reference count
>          *
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 8f2525e4ce0f..0655f1e7527d 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -330,17 +330,17 @@ static void print_context_stats(struct seq_file *m,
>                                 .vm = rcu_access_pointer(ctx->vm),
>                         };
>                         struct drm_file *file = ctx->file_priv->file;
> -                       struct task_struct *task;
>                         char name[80];
>  
>                         rcu_read_lock();
> +
>                         idr_for_each(&file->object_idr, per_file_stats, &stats);
> -                       rcu_read_unlock();
>  
> -                       rcu_read_lock();
> -                       task = pid_task(ctx->pid ?: file->pid, PIDTYPE_PID);
>                         snprintf(name, sizeof(name), "%s",
> -                                task ? task->comm : "<unknown>");
> +                                I915_SELFTEST_ONLY(!ctx->client) ?
> +                                "[kernel]" :


With selftests one can never see debugfs/, so it should be safe to
assume ctx->client is valid.

And the same for the next chunk,
> @@ -1273,19 +1273,16 @@ static int i915_context_status(struct seq_file *m, void *unused)

> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 2a4cd0ba5464..653e1bc5050e 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1221,7 +1221,8 @@ static void record_request(const struct i915_request *request,
>         rcu_read_lock();
>         ctx = rcu_dereference(request->context->gem_context);
>         if (ctx)
> -               erq->pid = pid_nr(ctx->pid);
> +               erq->pid = I915_SELFTEST_ONLY(!ctx->client) ?
> +                          0 : pid_nr(rcu_dereference(ctx->client->pid));

Hmm, I think we may want to capture the i915_drm_client, but we also
want to know the pid at the time of submission, so time of hang is a
good guess. Could we accept the risk here of just using the client
(accepting that a mischievous user could rename the client later)?
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts
  2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
@ 2020-03-10 18:25   ` Chris Wilson
  2020-03-10 20:00     ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:25 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:22)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> As contexts are abandoned we want to remember how much GPU time they used
> (per class) so later we can use it for smarter purposes.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 ++++++++++++-
>  drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  5 +++++
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index abc3a3e2fcf1..5f6861a36655 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -257,7 +257,19 @@ static void free_engines_rcu(struct rcu_head *rcu)
>  {
>         struct i915_gem_engines *engines =
>                 container_of(rcu, struct i915_gem_engines, rcu);
> +       struct i915_gem_context *ctx = engines->ctx;
> +       struct i915_gem_engines_iter it;
> +       struct intel_context *ce;
> +
> +       /* Transfer accumulated runtime to the parent GEM context. */
> +       for_each_gem_engine(ce, engines, it) {
> +               unsigned int class = ce->engine->uabi_class;
>  
> +               GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
> +               atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
> +       }

-> give this its own routine.
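
A minimal shape for such a helper, mirroring the loop quoted above (hedged
sketch; the name accumulate_runtime is illustrative, not from the series):

	static void accumulate_runtime(struct i915_gem_context *ctx,
				       struct i915_gem_engines *engines)
	{
		struct i915_gem_engines_iter it;
		struct intel_context *ce;

		/* Transfer accumulated runtime to the parent GEM context. */
		for_each_gem_engine(ce, engines, it) {
			unsigned int class = ce->engine->uabi_class;

			GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
			atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
		}
	}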

> +
> +       i915_gem_context_put(ctx);
>         i915_sw_fence_fini(&engines->fence);
>         free_engines(engines);
>  }
> @@ -540,7 +552,6 @@ static int engines_notify(struct i915_sw_fence *fence,
>                         list_del(&engines->link);
>                         spin_unlock_irqrestore(&ctx->stale.lock, flags);
>                 }
> -               i915_gem_context_put(engines->ctx);

Or accumulate here? Here we know the engines are idle and released,
albeit there is the delay in accumulating after the swap. I'm not going
to worry about that; with live replacement of engines I don't expect
anyone to notice the busy stats being off for a bit. Worst case is that
they see a sudden jump; but typical practice will be to set engines up
before they begin activity. We only have to worry about whether the
transient misleading stats can be exploited.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts
  2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
@ 2020-03-10 18:28   ` Chris Wilson
  2020-03-10 20:01     ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:28 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:23)
> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
> index 7825df32798d..10752107e8c7 100644
> --- a/drivers/gpu/drm/i915/i915_drm_client.h
> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
> @@ -16,6 +16,8 @@
>  #include <linux/sched.h>
>  #include <linux/xarray.h>
>  
> +#include "gt/intel_engine_types.h"
> +
>  struct i915_drm_clients {
>         struct mutex lock;
>         struct xarray xarray;
> @@ -43,6 +45,11 @@ struct i915_drm_client {
>                 struct device_attribute pid;
>                 struct device_attribute name;
>         } attr;
> +
> +       /**
> +        * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
> +        */
> +       atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];

Just to plant a seed: i915_drm_client_stats.[ch] ?
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
  2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
@ 2020-03-10 18:32   ` Chris Wilson
  2020-03-10 20:04     ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:32 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
> +static ssize_t
> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
> +{
> +       struct i915_engine_busy_attribute *i915_attr =
> +               container_of(attr, typeof(*i915_attr), attr);
> +       unsigned int class = i915_attr->engine_class;
> +       struct i915_drm_client *client = i915_attr->client;
> +       u64 total = atomic64_read(&client->past_runtime[class]);
> +       struct list_head *list = &client->ctx_list;
> +       struct i915_gem_context *ctx;
> +
> +       rcu_read_lock();
> +       list_for_each_entry_rcu(ctx, list, client_link) {
> +               total += atomic64_read(&ctx->past_runtime[class]);
> +               total += pphwsp_busy_add(ctx, class);
> +       }
> +       rcu_read_unlock();
> +
> +       total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;

Planning early retirement? In 600 years, they'll have forgotten how to
email ;)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness
  2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
@ 2020-03-10 18:36   ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 18:36 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-09 18:31:26)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Some customers want to know how much of the GPU time their clients are
> using in order to make dynamic load balancing decisions.
> 
> With the hooks already in place which track the overall engine busyness,
> we can extend that slightly to split that time between contexts.
> 
> v2: Fix accounting for tail updates.
> v3: Rebase.
> v4: Mark currently running contexts as active on stats enable.
> v5: Include some headers to fix the build.
> v6: Added fine grained lock.
> v7: Convert to seqlock. (Chris Wilson)
> v8: Rebase and tidy with helpers.
> v9: Refactor.
> v10: Move recording start to promotion. (Chris)
> v11: Consolidate duplicated code. (Chris)
> v12: execlists->active cannot be NULL. (Chris)
> v13: Move start to set_timeslice. (Chris)
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_context.c       | 20 +++++++++++
>  drivers/gpu/drm/i915/gt/intel_context.h       | 13 +++++++
>  drivers/gpu/drm/i915/gt/intel_context_types.h |  9 +++++
>  drivers/gpu/drm/i915/gt/intel_engine_cs.c     | 15 ++++++--
>  drivers/gpu/drm/i915/gt/intel_lrc.c           | 34 ++++++++++++++++++-
>  5 files changed, 88 insertions(+), 3 deletions(-)

We should also put together a basic selftest to accompany its
introduction. Just something that runs a context (using a spinner) for
50ms and checks that the stats report ~50ms.
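
A rough skeleton, leaning on the igt_spinner helpers from the existing
selftests (hedged: the test name is made up, the signatures are from
memory, and since the pphwsp runtime only accumulates on context save the
request must complete before the check):

	static int live_ctx_runtime(void *arg)
	{
		struct intel_engine_cs *engine = arg;	/* hypothetical harness arg */
		struct igt_spinner spin;
		struct i915_request *rq;
		u64 ticks, ns;
		int err = 0;

		if (igt_spinner_init(&spin, engine->gt))
			return -ENOMEM;

		rq = igt_spinner_create_request(&spin, engine->kernel_context,
						MI_ARB_CHECK);
		if (IS_ERR(rq)) {
			err = PTR_ERR(rq);
			goto out_spin;
		}

		i915_request_get(rq);
		i915_request_add(rq);
		if (!igt_wait_for_spinner(&spin, rq)) {
			err = -ETIME;
			goto out_rq;
		}

		msleep(50);			/* accumulate ~50ms of busyness */
		igt_spinner_end(&spin);
		i915_request_wait(rq, 0, HZ);	/* runtime is saved on switch-out */

		ticks = rq->context->runtime.total;
		ns = ticks * RUNTIME_INFO(engine->i915)->cs_timestamp_period_ns;
		if (ns < 40 * NSEC_PER_MSEC || ns > 60 * NSEC_PER_MSEC)
			err = -EINVAL;		/* generous +/- 10ms tolerance */

	out_rq:
		i915_request_put(rq);
	out_spin:
		igt_spinner_fini(&spin);
		return err;
	}
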
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create
  2020-03-10 18:11   ` Chris Wilson
@ 2020-03-10 19:52     ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10 19:52 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 18:11, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:19)
>> @@ -92,8 +107,8 @@ __i915_drm_client_register(struct i915_drm_client *client,
>>   static void
>>   __i915_drm_client_unregister(struct i915_drm_client *client)
>>   {
>> -       put_pid(fetch_and_zero(&client->pid));
>> -       kfree(fetch_and_zero(&client->name));
>> +       put_pid(rcu_replace_pointer(client->pid, NULL, true));
>> +       kfree(rcu_replace_pointer(client->name, NULL, true));
> 
> client_unregister is not after an RCU grace period, so what's the
> protection here?

Against concurrent access via sysfs? Hm.. I think kobject_put needs to 
go first and clearing of name and pid last. Will fix this.

Accesses via GEM contexts always have a reference so that should be fine.

RCU business on pid and name is basically only so the two can be
asynchronously replaced if they need to be updated on context create. So
anyone accessing them sees either the old or the new values, but always
valid data.
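
The update side being described might look roughly like this (hedged
sketch, assuming an update-side lock is held to justify the "true"
condition to rcu_replace_pointer(); names mirror the series but are not
copied from it):

	static int
	i915_drm_client_update(struct i915_drm_client *client,
			       struct task_struct *task)
	{
		char *name, *old_name;
		struct pid *pid, *old_pid;

		name = kstrdup(task->comm, GFP_KERNEL);
		if (!name)
			return -ENOMEM;
		pid = get_task_pid(task, PIDTYPE_PID);

		/* Publish new values; readers see old or new, never torn. */
		old_name = rcu_replace_pointer(client->name, name, true);
		old_pid = rcu_replace_pointer(client->pid, pid, true);

		/* Free the old values only once all readers are done. */
		synchronize_rcu();
		kfree(old_name);
		put_pid(old_pid);

		return 0;
	}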

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts
  2020-03-10 18:25   ` Chris Wilson
@ 2020-03-10 20:00     ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10 20:00 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 18:25, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:22)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> As contexts are abandoned we want to remember how much GPU time they used
>> (per class) so later we can use it for smarter purposes.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_context.c       | 13 ++++++++++++-
>>   drivers/gpu/drm/i915/gem/i915_gem_context_types.h |  5 +++++
>>   2 files changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> index abc3a3e2fcf1..5f6861a36655 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
>> @@ -257,7 +257,19 @@ static void free_engines_rcu(struct rcu_head *rcu)
>>   {
>>          struct i915_gem_engines *engines =
>>                  container_of(rcu, struct i915_gem_engines, rcu);
>> +       struct i915_gem_context *ctx = engines->ctx;
>> +       struct i915_gem_engines_iter it;
>> +       struct intel_context *ce;
>> +
>> +       /* Transfer accumulated runtime to the parent GEM context. */
>> +       for_each_gem_engine(ce, engines, it) {
>> +               unsigned int class = ce->engine->uabi_class;
>>   
>> +               GEM_BUG_ON(class >= ARRAY_SIZE(ctx->past_runtime));
>> +               atomic64_add(ce->runtime.total, &ctx->past_runtime[class]);
>> +       }
> 
> -> give this its own routine.

Ack.

>> +
>> +       i915_gem_context_put(ctx);
>>          i915_sw_fence_fini(&engines->fence);
>>          free_engines(engines);
>>   }
>> @@ -540,7 +552,6 @@ static int engines_notify(struct i915_sw_fence *fence,
>>                          list_del(&engines->link);
>>                          spin_unlock_irqrestore(&ctx->stale.lock, flags);
>>                  }
>> -               i915_gem_context_put(engines->ctx);
> 
> Or accumulate here? Here we know the engines are idle and released,
> albeit there is the delay in accumulating after the swap. I'm not going
> to worry about that; with live replacement of engines I don't expect
> anyone to notice the busy stats being off for a bit. Worst case is that
> they see a sudden jump; but typical practice will be to set engines up
> before they begin activity. We only have to worry about whether the
> transient misleading stats can be exploited.

It was even here initially, but then I started fearing it may not be the
last unpin of the intel_context, pending context save/complete, so it
sounded safer to make it really, really last.

But I guess you are right in saying that a small error when replacing
engines should not be a large concern. If I move the accumulation back
here I don't need the intel_context->closed patch any more, so that's a plus.

Unless it can be a large error if the context ran for quite some time.
Hm.. I think I still prefer to be safe and accumulate as late as possible.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts
  2020-03-10 18:28   ` Chris Wilson
@ 2020-03-10 20:01     ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10 20:01 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 18:28, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:23)
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
>> index 7825df32798d..10752107e8c7 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.h
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.h
>> @@ -16,6 +16,8 @@
>>   #include <linux/sched.h>
>>   #include <linux/xarray.h>
>>   
>> +#include "gt/intel_engine_types.h"
>> +
>>   struct i915_drm_clients {
>>          struct mutex lock;
>>          struct xarray xarray;
>> @@ -43,6 +45,11 @@ struct i915_drm_client {
>>                  struct device_attribute pid;
>>                  struct device_attribute name;
>>          } attr;
>> +
>> +       /**
>> +        * @past_runtime: Accumulation of pphwsp runtimes from closed contexts.
>> +        */
>> +       atomic64_t past_runtime[MAX_ENGINE_CLASS + 1];
> 
> Just to plant a seed: i915_drm_client_stats.[ch] ?

Let it grow a bit first? :)

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
  2020-03-10 18:32   ` Chris Wilson
@ 2020-03-10 20:04     ` Tvrtko Ursulin
  2020-03-10 20:12       ` Chris Wilson
  0 siblings, 1 reply; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-10 20:04 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 18:32, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
>> +static ssize_t
>> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
>> +{
>> +       struct i915_engine_busy_attribute *i915_attr =
>> +               container_of(attr, typeof(*i915_attr), attr);
>> +       unsigned int class = i915_attr->engine_class;
>> +       struct i915_drm_client *client = i915_attr->client;
>> +       u64 total = atomic64_read(&client->past_runtime[class]);
>> +       struct list_head *list = &client->ctx_list;
>> +       struct i915_gem_context *ctx;
>> +
>> +       rcu_read_lock();
>> +       list_for_each_entry_rcu(ctx, list, client_link) {
>> +               total += atomic64_read(&ctx->past_runtime[class]);
>> +               total += pphwsp_busy_add(ctx, class);
>> +       }
>> +       rcu_read_unlock();
>> +
>> +       total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
> 
> Planning early retirement? In 600 years, they'll have forgotten how to
> email ;)

Shruggety shrug. :) I am guessing you would prefer both internal 
representations (sw and pphwsp runtimes) to be consistently in 
nanoseconds? I thought why multiply at various places when once at the 
readout time is enough.

And I should mention again that I am not sure at the moment how to meld
the two stats into one more "perfect" output.

Regards,

Tvrtko

* Re: [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
  2020-03-10 20:04     ` Tvrtko Ursulin
@ 2020-03-10 20:12       ` Chris Wilson
  2020-03-11 10:17         ` Tvrtko Ursulin
  0 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2020-03-10 20:12 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-03-10 20:04:23)
> 
> On 10/03/2020 18:32, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
> >> +static ssize_t
> >> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
> >> +{
> >> +       struct i915_engine_busy_attribute *i915_attr =
> >> +               container_of(attr, typeof(*i915_attr), attr);
> >> +       unsigned int class = i915_attr->engine_class;
> >> +       struct i915_drm_client *client = i915_attr->client;
> >> +       u64 total = atomic64_read(&client->past_runtime[class]);
> >> +       struct list_head *list = &client->ctx_list;
> >> +       struct i915_gem_context *ctx;
> >> +
> >> +       rcu_read_lock();
> >> +       list_for_each_entry_rcu(ctx, list, client_link) {
> >> +               total += atomic64_read(&ctx->past_runtime[class]);
> >> +               total += pphwsp_busy_add(ctx, class);
> >> +       }
> >> +       rcu_read_unlock();
> >> +
> >> +       total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
> > 
> > Planning early retirement? In 600 years, they'll have forgotten how to
> > email ;)
> 
> Shruggety shrug. :) I am guessing you would prefer both internal 
> representations (sw and pphwsp runtimes) to be consistently in 
> nanoseconds? I thought: why multiply in various places when doing it 
> once at readout time is enough?

It's fine. I was just double checking overflow, and then remembered the
end result is 64b nanoseconds.

Keep the internal representation convenient for accumulation, and the
conversion at the boundary.
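
Something like this, say, as a sketch (names invented, not the patch):

	/*
	 * Accumulate in CS timestamp ticks everywhere internally and
	 * convert to u64 nanoseconds exactly once, at sysfs read time.
	 */
	static u64 client_busy_ns(struct i915_drm_client *client,
				  unsigned int class, u32 period_ns)
	{
		u64 ticks = atomic64_read(&client->past_runtime[class]);

		/* ... plus the per-context tick counts walked under RCU ... */

		return ticks * period_ns; /* 64b ns at the boundary */
	}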
 
> And I should mention again that I am not sure at the moment how to meld 
> the two stats into one more "perfect" output.

One of the things that crossed my mind was whether it would be possible
to throw in a pulse before reading the stats (if active, etc.). Usual
dilemma with non-preemptible contexts, so probably not worth it, as those
hogs will remain hogs.

And I worry about the disparity between sw busy and hw runtime.
-Chris

* Re: [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
  2020-03-10 20:12       ` Chris Wilson
@ 2020-03-11 10:17         ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-11 10:17 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 10/03/2020 20:12, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-10 20:04:23)
>>
>> On 10/03/2020 18:32, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
>>>> +static ssize_t
>>>> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
>>>> +{
>>>> +       struct i915_engine_busy_attribute *i915_attr =
>>>> +               container_of(attr, typeof(*i915_attr), attr);
>>>> +       unsigned int class = i915_attr->engine_class;
>>>> +       struct i915_drm_client *client = i915_attr->client;
>>>> +       u64 total = atomic64_read(&client->past_runtime[class]);
>>>> +       struct list_head *list = &client->ctx_list;
>>>> +       struct i915_gem_context *ctx;
>>>> +
>>>> +       rcu_read_lock();
>>>> +       list_for_each_entry_rcu(ctx, list, client_link) {
>>>> +               total += atomic64_read(&ctx->past_runtime[class]);
>>>> +               total += pphwsp_busy_add(ctx, class);
>>>> +       }
>>>> +       rcu_read_unlock();
>>>> +
>>>> +       total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
>>>
>>> Planning early retirement? In 600 years, they'll have forgotten how to
>>> email ;)
>>
>> Shruggety shrug. :) I am guessing you would prefer both internal
>> representations (sw and pphwsp runtimes) to be consistently in
>> nanoseconds? I thought: why multiply in various places when doing it
>> once at readout time is enough?
> 
> It's fine. I was just double checking overflow, and then remembered the
> end result is 64b nanoseconds.
> 
> Keep the internal representation convenient for accumulation, and the
> conversion at the boundary.
>   
>> And I should mention again that I am not sure at the moment how to meld
>> the two stats into one more "perfect" output.
> 
> One of the things that crossed my mind was whether it would be possible
> to throw in a pulse before reading the stats (if active, etc.). Usual
> dilemma with non-preemptible contexts, so probably not worth it, as those
> hogs will remain hogs.
> 
> And I worry about the disparity between sw busy and hw runtime.

How about I stop tracking accumulated sw runtime and just use it for the 
active portion? So we would report back hw runtime + sw active runtime; 
in other words, sw tracking only covers the portion between context_in 
and context_out. Sounds worth a try.
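
Roughly, as a sketch (ce->stats.start_ktime is a field I would add for
this, not something in the tree today):

	/* Sketch: sw tracking records only the currently active interval. */
	static void intel_context_in(struct intel_context *ce)
	{
		WRITE_ONCE(ce->stats.start_ktime, ktime_get());
	}

	static void intel_context_out(struct intel_context *ce)
	{
		WRITE_ONCE(ce->stats.start_ktime, 0);
	}

	static ktime_t intel_context_active_time(struct intel_context *ce)
	{
		ktime_t start = READ_ONCE(ce->stats.start_ktime);

		return start ? ktime_sub(ktime_get(), start) : 0;
	}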

Regards,

Tvrtko

* [Intel-gfx] [RFC 00/12] Per client engine busyness
@ 2020-03-11 18:26 Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2020-03-11 18:26 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series. Highlights from this
version:

 * Last two patches contain a hybrid method of tracking context runtime. The
   PPHWSP-tracked value is used as a baseline, and on top of that i915 tracks
   the time at which the context last started executing on the GPU. Together
   this should give better overall resilience against spammy workloads and
   also provide visibility into long/infinite batches.
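
   As a sketch, the readout could combine the two sources like this (field
   names invented for illustration, not the exact patch):

	static u64 context_busy_ns(struct intel_context *ce, u32 period_ns)
	{
		/* Baseline: ticks the GPU has already sampled into the PPHWSP. */
		u64 total = (u64)ce->runtime.total * period_ns;
		/* Batch currently executing, not yet sampled by the PPHWSP. */
		ktime_t start = READ_ONCE(ce->stats.start_ktime);

		if (start)
			total += ktime_to_ns(ktime_sub(ktime_get(), start));

		return total;
	}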

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the want list:
smarter scheduler decisions, getrusage(2)-like per-GEM-context functionality
wanted by some customers, cgroups controller, dynamic SSEU tuning,...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

Sysfs interface enables us to implement a "top-like" tool for GPU tasks. Or with
a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise we add a bunch of files in sysfs, like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── 9
	    ├── busy
	    │   ├── 0
	    │   ├── 1
	    │   ├── 2
	    │   └── 3
	    ├── name
	    └── pid

Files in 'busy' directories are numbered using the engine class ABI values and
they contain the accumulated nanoseconds each client has spent on engines of
the respective class.
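
For example, the accumulated Render/3D time of client 7 would be read with
(output value invented for illustration):

	# cat /sys/class/drm/card0/clients/7/busy/0
	1234567890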

It is still an RFC since it is missing dedicated test cases to ensure things
really work as advertised.

Tvrtko Ursulin (10):
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Use explicit flag to mark unreachable intel_context
  drm/i915: Track runtime spent in unreachable intel_contexts
  drm/i915: Track runtime spent in closed GEM contexts
  drm/i915: Track all user contexts per client
  drm/i915: Expose per-engine client busyness
  drm/i915: Track context current active time
  drm/i915: Prefer software tracked context busyness

 drivers/gpu/drm/i915/Makefile                 |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  66 ++-
 .../gpu/drm/i915/gem/i915_gem_context_types.h |  21 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   2 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  18 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |   6 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  25 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  55 ++-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |  31 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 434 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  94 ++++
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  35 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |  25 +-
 drivers/gpu/drm/i915/i915_sysfs.c             |   8 +
 16 files changed, 756 insertions(+), 82 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.20.1

Thread overview: 42+ messages
2020-03-09 18:31 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2020-03-09 21:34   ` Chris Wilson
2020-03-09 23:26     ` Tvrtko Ursulin
2020-03-10  0:13       ` Chris Wilson
2020-03-10  8:44         ` Tvrtko Ursulin
2020-03-10 11:41   ` Chris Wilson
2020-03-10 12:04     ` Tvrtko Ursulin
2020-03-10 17:59   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 02/12] drm/i915: Update client name on context create Tvrtko Ursulin
2020-03-10 18:11   ` Chris Wilson
2020-03-10 19:52     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 03/12] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2020-03-10 18:20   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 04/12] drm/i915: Use explicit flag to mark unreachable intel_context Tvrtko Ursulin
2020-03-10 15:30   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 05/12] drm/i915: Track runtime spent in unreachable intel_contexts Tvrtko Ursulin
2020-03-10 18:25   ` Chris Wilson
2020-03-10 20:00     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 06/12] drm/i915: Track runtime spent in closed GEM contexts Tvrtko Ursulin
2020-03-10 18:28   ` Chris Wilson
2020-03-10 20:01     ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 07/12] drm/i915: Track all user contexts per client Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2020-03-10 18:32   ` Chris Wilson
2020-03-10 20:04     ` Tvrtko Ursulin
2020-03-10 20:12       ` Chris Wilson
2020-03-11 10:17         ` Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 09/12] drm/i915: Track per-context engine busyness Tvrtko Ursulin
2020-03-10 18:36   ` Chris Wilson
2020-03-09 18:31 ` [Intel-gfx] [RFC 10/12] drm/i915: Carry over past software tracked context runtime Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 11/12] drm/i915: Prefer software tracked context busyness Tvrtko Ursulin
2020-03-09 18:31 ` [Intel-gfx] [RFC 12/12] compare runtimes Tvrtko Ursulin
2020-03-09 19:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (rev5) Patchwork
2020-03-09 19:13 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-03-09 22:02 ` [Intel-gfx] [RFC 00/12] Per client engine busyness Chris Wilson
2020-03-09 23:30   ` Tvrtko Ursulin
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness (rev5) Patchwork
2020-03-10 15:11 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2020-03-10 15:19 ` [Intel-gfx] ✗ Fi.CI.BUILD: warning " Patchwork
2020-03-11 18:26 [Intel-gfx] [RFC 00/12] Per client engine busyness Tvrtko Ursulin
