* [Intel-gfx] [PATCH 0/5] Per client engine busyness
@ 2019-12-16 12:06 Tvrtko Ursulin
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context " Tvrtko Ursulin
                   ` (7 more replies)
  0 siblings, 8 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:06 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Another re-spin of the per-client engine busyness series.

Review feedback from last round has been addressed* and the tracking simplified.

(*Apart from the suggestion to re-use ctx->idr_lock for the global toggle; I
kept using a struct mutex for that.)

Internally we track time spent on engines for each struct intel_context. This
can serve as a building block for several features from the wish list: smarter
scheduler decisions, getrusage(2)-like per-GEM-context functionality wanted by
some customers, a cgroups controller, dynamic SSEU tuning, ...

Externally, in sysfs, we expose time spent on GPU per client and per engine
class.

There is also a global toggle to enable this extra tracking, although it is an
open question whether the toggle is warranted or whether we should simply
always track.

The sysfs interface enables us to implement a "top-like" tool for GPU tasks,
illustrated here with a "screenshot":
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  906/ 955 MHz;    0% RC6;  5.30 Watts;      933 irqs/s

      IMC reads:     4414 MiB/s
     IMC writes:     3805 MiB/s

          ENGINE      BUSY                                      MI_SEMA MI_WAIT
     Render/3D/0   93.46% |████████████████████████████████▋  |      0%      0%
       Blitter/0    0.00% |                                   |      0%      0%
         Video/0    0.00% |                                   |      0%      0%
  VideoEnhance/0    0.00% |                                   |      0%      0%

  PID            NAME  Render/3D      Blitter        Video      VideoEnhance
 2733       neverball |██████▌     ||            ||            ||            |
 2047            Xorg |███▊        ||            ||            ||            |
 2737        glxgears |█▍          ||            ||            ||            |
 2128           xfwm4 |            ||            ||            ||            |
 2047            Xorg |            ||            ||            ||            |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise we add a bunch of files in sysfs like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 9
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── enable_stats

Files in the 'busy' directories are named after the engine class ABI values
and contain the accumulated nanoseconds each client spent on engines of the
respective class.
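
For example, a pseudo-top can sample one of these files periodically and turn
the delta into a utilisation percentage. A minimal sketch (error handling kept
to a minimum; the client directory "7" and the 1s period are just examples
matching the tree above):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	static uint64_t read_busy_ns(const char *path)
	{
		uint64_t ns = 0;
		FILE *f = fopen(path, "r");

		if (f) {
			fscanf(f, "%" SCNu64, &ns);
			fclose(f);
		}

		return ns;
	}

	int main(void)
	{
		/* Render/3D (class 0) busyness of client "7". */
		const char *path = "/sys/class/drm/card0/clients/7/busy/0";
		uint64_t t0, t1;

		t0 = read_busy_ns(path);
		sleep(1); /* sampling period of 1s == 1e9 ns */
		t1 = read_busy_ns(path);

		printf("Render/3D/0: %.2f%%\n",
		       (double)(t1 - t0) / 1e9 * 100.0);

		return 0;
	}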

I will post the corresponding patch to intel_gpu_top for reference as well.

Tvrtko Ursulin (5):
  drm/i915: Track per-context engine busyness
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Expose per-engine client busyness
  drm/i915: Add sysfs toggle to enable per-client engine stats

 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  24 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  20 ++
 drivers/gpu/drm/i915/gt/intel_context.h       |  11 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |   9 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  16 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  47 +++-
 drivers/gpu/drm/i915/i915_drv.h               |  41 +++
 drivers/gpu/drm/i915/i915_gem.c               | 234 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_sysfs.c             |  84 +++++++
 9 files changed, 465 insertions(+), 21 deletions(-)

-- 
2.20.1


* [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context engine busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
@ 2019-12-16 12:07 ` Tvrtko Ursulin
  2019-12-16 12:40   ` Chris Wilson
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:07 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some customers want to know how much GPU time their clients are using in
order to make dynamic load-balancing decisions.

With the hooks already in place which track the overall engine busyness,
we can extend that slightly to split that time between contexts.

v2: Fix accounting for tail updates.
v3: Rebase.
v4: Mark currently running contexts as active on stats enable.
v5: Include some headers to fix the build.
v6: Added fine grained lock.
v7: Convert to seqlock. (Chris Wilson)
v8: Rebase and tidy with helpers.
v9: Refactor.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c       | 20 ++++++++
 drivers/gpu/drm/i915/gt/intel_context.h       | 11 +++++
 drivers/gpu/drm/i915/gt/intel_context_types.h |  9 ++++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     | 16 ++++++-
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 47 ++++++++++++++++---
 5 files changed, 95 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index b1e346d2d35f..b211b48d6cae 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -243,6 +243,7 @@ intel_context_init(struct intel_context *ce,
 	INIT_LIST_HEAD(&ce->signals);
 
 	mutex_init(&ce->pin_mutex);
+	seqlock_init(&ce->stats.lock);
 
 	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire);
@@ -337,6 +338,25 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
 	return rq;
 }
 
+ktime_t intel_context_get_busy_time(struct intel_context *ce)
+{
+	unsigned int seq;
+	ktime_t total;
+
+	do {
+		seq = read_seqbegin(&ce->stats.lock);
+
+		total = ce->stats.total;
+
+		if (ce->stats.active)
+			total = ktime_add(total,
+					  ktime_sub(ktime_get(),
+						    ce->stats.start));
+	} while (read_seqretry(&ce->stats.lock, seq));
+
+	return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index b39eb1fcfbca..3a15cf32f0a3 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -160,4 +160,15 @@ static inline struct intel_ring *__intel_context_ring_size(u64 sz)
 	return u64_to_ptr(struct intel_ring, sz);
 }
 
+static inline void
+__intel_context_stats_start(struct intel_context_stats *stats, ktime_t now)
+{
+	if (!stats->active) {
+		stats->start = now;
+		stats->active = true;
+	}
+}
+
+ktime_t intel_context_get_busy_time(struct intel_context *ce);
+
 #endif /* __INTEL_CONTEXT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index d1204cc899a3..12cbad0798cb 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -11,6 +11,7 @@
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
+#include <linux/seqlock.h>
 
 #include "i915_active_types.h"
 #include "i915_utils.h"
@@ -76,6 +77,14 @@ struct intel_context {
 
 	/** sseu: Control eu/slice partitioning */
 	struct intel_sseu sseu;
+
+	/** stats: Context GPU engine busyness tracking. */
+	struct intel_context_stats {
+		seqlock_t lock;
+		bool active;
+		ktime_t start;
+		ktime_t total;
+	} stats;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 3d1d48bf90cf..ac08781c8b24 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1577,8 +1577,20 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 
 		engine->stats.enabled_at = ktime_get();
 
-		/* XXX submission method oblivious? */
-		for (port = execlists->active; (rq = *port); port++)
+		/*
+		 * Mark currently running context as active.
+		 * XXX submission method oblivious?
+		 */
+
+		rq = NULL;
+		port = execlists->active;
+		if (port)
+			rq = *port;
+		if (rq)
+			__intel_context_stats_start(&rq->hw_context->stats,
+						    engine->stats.enabled_at);
+
+		for (; (rq = *port); port++)
 			engine->stats.active++;
 
 		for (port = execlists->pending; (rq = *port); port++) {
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 4ebfecd95032..1f158cb439bc 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -940,6 +940,7 @@ static void intel_engine_context_in(struct intel_engine_cs *engine)
 	if (engine->stats.enabled > 0) {
 		if (engine->stats.active++ == 0)
 			engine->stats.start = ktime_get();
+
 		GEM_BUG_ON(engine->stats.active == 0);
 	}
 
@@ -1088,6 +1089,30 @@ static void reset_active(struct i915_request *rq,
 	ce->lrc_desc |= CTX_DESC_FORCE_RESTORE;
 }
 
+static void
+intel_context_stats_start(struct intel_context_stats *stats)
+{
+	write_seqlock(&stats->lock);
+	__intel_context_stats_start(stats, ktime_get());
+	write_sequnlock(&stats->lock);
+}
+
+static void
+intel_context_stats_stop(struct intel_context_stats *stats)
+{
+	unsigned long flags;
+
+	if (!READ_ONCE(stats->active))
+		return;
+
+	write_seqlock_irqsave(&stats->lock, flags);
+	GEM_BUG_ON(!READ_ONCE(stats->active));
+	stats->total = ktime_add(stats->total,
+				 ktime_sub(ktime_get(), stats->start));
+	stats->active = false;
+	write_sequnlock_irqrestore(&stats->lock, flags);
+}
+
 static inline struct intel_engine_cs *
 __execlists_schedule_in(struct i915_request *rq)
 {
@@ -1155,7 +1180,7 @@ static inline void
 __execlists_schedule_out(struct i915_request *rq,
 			 struct intel_engine_cs * const engine)
 {
-	struct intel_context * const ce = rq->hw_context;
+	struct intel_context *ce = rq->hw_context;
 
 	/*
 	 * NB process_csb() is not under the engine->active.lock and hence
@@ -1172,6 +1197,7 @@ __execlists_schedule_out(struct i915_request *rq,
 		intel_engine_add_retire(engine, ce->timeline);
 
 	intel_engine_context_out(engine);
+	intel_context_stats_stop(&ce->stats);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 	intel_gt_pm_put_async(engine->gt);
 
@@ -1389,6 +1415,9 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 		write_desc(execlists,
 			   rq ? execlists_update_context(rq) : 0,
 			   n);
+
+		if (n == 0)
+			intel_context_stats_start(&rq->hw_context->stats);
 	}
 
 	/* we need to manually load the submit queue */
@@ -2197,7 +2226,11 @@ static void process_csb(struct intel_engine_cs *engine)
 
 			WRITE_ONCE(execlists->pending[0], NULL);
 		} else {
-			GEM_BUG_ON(!*execlists->active);
+			struct i915_request *rq = *execlists->active++;
+
+			GEM_BUG_ON(!rq);
+			GEM_BUG_ON(execlists->active - execlists->inflight >
+				   execlists_num_ports(execlists));
 
 			/* port0 completed, advanced to port1 */
 			trace_ports(execlists, "completed", execlists->active);
@@ -2208,12 +2241,14 @@ static void process_csb(struct intel_engine_cs *engine)
 			 * coherent (visible from the CPU) before the
 			 * user interrupt and CSB is processed.
 			 */
-			GEM_BUG_ON(!i915_request_completed(*execlists->active) &&
+			GEM_BUG_ON(!i915_request_completed(rq) &&
 				   !reset_in_progress(execlists));
-			execlists_schedule_out(*execlists->active++);
 
-			GEM_BUG_ON(execlists->active - execlists->inflight >
-				   execlists_num_ports(execlists));
+			execlists_schedule_out(rq);
+			rq = *execlists->active;
+			if (rq)
+				intel_context_stats_start(&rq->hw_context->stats);
+
 		}
 	} while (head != tail);
 
-- 
2.20.1


* [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context " Tvrtko Ursulin
@ 2019-12-16 12:07 ` Tvrtko Ursulin
  2019-12-16 12:51   ` Chris Wilson
                     ` (3 more replies)
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create Tvrtko Ursulin
                   ` (5 subsequent siblings)
  7 siblings, 4 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:07 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose a list of clients with open file handles in sysfs.

This will be a basis for a top-like utility showing per-client and per-
engine GPU load.

Currently we only expose each client's pid and name under opaque numbered
directories in /sys/class/drm/card0/clients/.

For instance:

/sys/class/drm/card0/clients/3/name: Xorg
/sys/class/drm/card0/clients/3/pid: 5664

v2:
 Chris Wilson:
 * Enclose new members into dedicated structs.
 * Protect against failed sysfs registration.

v3:
 * sysfs_attr_init.

v4:
 * Fix for internal clients.

v5:
 * Use cyclic ida for client id. (Chris)
 * Do not leak pid reference. (Chris)
 * Tidy code with some locals.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h   |  21 +++++
 drivers/gpu/drm/i915/i915_gem.c   | 148 ++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_sysfs.c |   8 ++
 3 files changed, 167 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0781b6326b8c..9fcbcb6d6f76 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -224,6 +224,20 @@ struct drm_i915_file_private {
 	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
 	atomic_t ban_score;
 	unsigned long hang_timestamp;
+
+	struct i915_drm_client {
+		unsigned int id;
+
+		struct pid *pid;
+		char *name;
+
+		struct kobject *root;
+
+		struct {
+			struct device_attribute pid;
+			struct device_attribute name;
+		} attr;
+	} client;
 };
 
 /* Interface history:
@@ -1278,6 +1292,13 @@ struct drm_i915_private {
 
 	struct i915_pmu pmu;
 
+	struct i915_drm_clients {
+		spinlock_t idr_lock;
+		struct idr idr;
+
+		struct kobject *root;
+	} clients;
+
 	struct i915_hdcp_comp_master *hdcp_master;
 	bool hdcp_comp_added;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5eeef1ef7448..a65cd7e1ce7b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1457,11 +1457,14 @@ static void i915_gem_init__mm(struct drm_i915_private *i915)
 	i915_gem_init__objects(i915);
 }
 
-void i915_gem_init_early(struct drm_i915_private *dev_priv)
+void i915_gem_init_early(struct drm_i915_private *i915)
 {
-	i915_gem_init__mm(dev_priv);
+	i915_gem_init__mm(i915);
 
-	spin_lock_init(&dev_priv->fb_tracking.lock);
+	spin_lock_init(&i915->fb_tracking.lock);
+
+	spin_lock_init(&i915->clients.idr_lock);
+	idr_init(&i915->clients.idr);
 }
 
 void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
@@ -1518,6 +1521,106 @@ int i915_gem_freeze_late(struct drm_i915_private *i915)
 	return 0;
 }
 
+static ssize_t
+show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_file_private *file_priv =
+		container_of(attr, struct drm_i915_file_private,
+			     client.attr.name);
+
+	return snprintf(buf, PAGE_SIZE, "%s", file_priv->client.name);
+}
+
+static ssize_t
+show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_file_private *file_priv =
+		container_of(attr, struct drm_i915_file_private,
+			     client.attr.pid);
+
+	return snprintf(buf, PAGE_SIZE, "%u", pid_nr(file_priv->client.pid));
+}
+
+static int
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,
+		struct task_struct *task,
+		unsigned int serial)
+{
+	struct i915_drm_client *client = &file_priv->client;
+	struct i915_drm_clients *clients = &i915->clients;
+	struct device_attribute *attr;
+	int ret = -ENOMEM;
+	char id[32];
+
+	if (!clients->root)
+		return 0; /* intel_fbdev_init registers a client before sysfs */
+
+	client->name = kstrdup(task->comm, GFP_KERNEL);
+	if (!client->name)
+		goto err_name;
+
+	snprintf(id, sizeof(id), "%u", serial);
+	client->root = kobject_create_and_add(id, clients->root);
+	if (!client->root)
+		goto err_client;
+
+	attr = &client->attr.name;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "name";
+	attr->attr.mode = 0444;
+	attr->show = show_client_name;
+
+	ret = sysfs_create_file(client->root, (struct attribute *)attr);
+	if (ret)
+		goto err_attr_name;
+
+	attr = &client->attr.pid;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "pid";
+	attr->attr.mode = 0444;
+	attr->show = show_client_pid;
+
+	ret = sysfs_create_file(client->root, (struct attribute *)attr);
+	if (ret)
+		goto err_attr_pid;
+
+	client->id = serial;
+	client->pid = get_task_pid(task, PIDTYPE_PID);
+
+	return 0;
+
+err_attr_pid:
+	sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
+err_attr_name:
+	kobject_put(client->root);
+err_client:
+	kfree(client->name);
+err_name:
+	return ret;
+}
+
+static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
+{
+	struct i915_drm_clients *clients = &file_priv->dev_priv->clients;
+	struct i915_drm_client *client = &file_priv->client;
+
+	if (!client->name)
+		return; /* intel_fbdev_init registers a client before sysfs */
+
+	sysfs_remove_file(client->root, (struct attribute *)&client->attr.pid);
+	sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
+	kobject_put(client->root);
+
+	spin_lock(&clients->idr_lock);
+	idr_remove(&clients->idr, client->id);
+	spin_unlock(&clients->idr_lock);
+
+	put_pid(client->pid);
+
+	kfree(client->name);
+}
+
 void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv = file->driver_priv;
@@ -1531,33 +1634,58 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 	list_for_each_entry(request, &file_priv->mm.request_list, client_link)
 		request->file_priv = NULL;
 	spin_unlock(&file_priv->mm.lock);
+
+	i915_gem_remove_client(file_priv);
 }
 
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 {
+	struct i915_drm_clients *clients = &i915->clients;
 	struct drm_i915_file_private *file_priv;
-	int ret;
+	int ret = -ENOMEM;
+	int id;
 
 	DRM_DEBUG("\n");
 
 	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
 	if (!file_priv)
-		return -ENOMEM;
+		goto err_alloc;
+
+	spin_lock(&clients->idr_lock);
+	id = idr_alloc_cyclic(&clients->idr, file_priv, 0, -1, GFP_KERNEL);
+	spin_unlock(&clients->idr_lock);
+	if (id < 0)
+		goto err_alloc;
+
+	ret = i915_gem_add_client(i915, file_priv, current, id);
+	if (ret)
+		goto err_client;
 
 	file->driver_priv = file_priv;
+	ret = i915_gem_context_open(i915, file);
+	if (ret)
+		goto err_context;
+
 	file_priv->dev_priv = i915;
 	file_priv->file = file;
+	file_priv->bsd_engine = -1;
+	file_priv->hang_timestamp = jiffies;
 
 	spin_lock_init(&file_priv->mm.lock);
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
 
-	file_priv->bsd_engine = -1;
-	file_priv->hang_timestamp = jiffies;
+	return 0;
 
-	ret = i915_gem_context_open(i915, file);
-	if (ret)
-		kfree(file_priv);
+err_context:
+	i915_gem_remove_client(file_priv);
+
+err_client:
+	spin_lock(&clients->idr_lock);
+	idr_remove(&clients->idr, id);
+	spin_unlock(&clients->idr_lock);
+	kfree(file_priv);
 
+err_alloc:
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index ad2b1b833d7b..3ab50e29fddf 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -559,6 +559,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	struct device *kdev = dev_priv->drm.primary->kdev;
 	int ret;
 
+	dev_priv->clients.root =
+		kobject_create_and_add("clients", &kdev->kobj);
+	if (!dev_priv->clients.root)
+		DRM_ERROR("Per-client sysfs setup failed\n");
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -619,4 +624,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
+
+	if (dev_priv->clients.root)
+		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1


* [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context " Tvrtko Ursulin
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2019-12-16 12:07 ` Tvrtko Ursulin
  2019-12-16 12:57   ` Chris Wilson
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 4/5] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:07 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some clients have the DRM fd passed to them over a socket by the X server.

Grab the real client and pid when they create their first context and
update the exposed data for more useful enumeration.

v2:
 * Do not leak the pid reference and borrow context idr_lock. (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 24 ++++++++++++++++++---
 drivers/gpu/drm/i915/i915_drv.h             |  7 ++++++
 drivers/gpu/drm/i915/i915_gem.c             |  4 ++--
 3 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 46b4d1d643f8..cd4ba6486f35 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -2178,7 +2178,9 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_private *i915 = to_i915(dev);
 	struct drm_i915_gem_context_create_ext *args = data;
+	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct create_ext ext_data;
+	struct pid *pid;
 	int ret;
 
 	if (!DRIVER_CAPS(i915)->has_logical_contexts)
@@ -2191,14 +2193,30 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	ext_data.fpriv = file->driver_priv;
+	ext_data.fpriv = file_priv;
+	pid = get_task_pid(current, PIDTYPE_PID);
 	if (client_is_banned(ext_data.fpriv)) {
 		DRM_DEBUG("client %s[%d] banned from creating ctx\n",
-			  current->comm,
-			  pid_nr(get_task_pid(current, PIDTYPE_PID)));
+			  current->comm, pid_nr(pid));
+		put_pid(pid);
 		return -EIO;
 	}
 
+	/*
+	 * Borrow the context idr_lock to protect the client remove-add cycle.
+	 */
+	if (mutex_lock_interruptible(&file_priv->context_idr_lock))
+		return -EINTR;
+	if (pid_nr(file_priv->client.pid) != pid_nr(pid)) {
+		i915_gem_remove_client(file_priv);
+		ret = i915_gem_add_client(i915, file_priv, current,
+					  file_priv->client.id);
+	}
+	mutex_unlock(&file_priv->context_idr_lock);
+	put_pid(pid);
+	if (ret)
+		return ret;
+
 	ext_data.ctx = i915_gem_create_context(i915, args->flags);
 	if (IS_ERR(ext_data.ctx))
 		return PTR_ERR(ext_data.ctx);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9fcbcb6d6f76..1740ce54cb48 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1899,6 +1899,13 @@ void i915_gem_suspend(struct drm_i915_private *dev_priv);
 void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
 void i915_gem_resume(struct drm_i915_private *dev_priv);
 
+int
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,
+		struct task_struct *task,
+		unsigned int serial);
+void i915_gem_remove_client(struct drm_i915_file_private *file_priv);
+
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file);
 void i915_gem_release(struct drm_device *dev, struct drm_file *file);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a65cd7e1ce7b..59c160534838 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1541,7 +1541,7 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%u", pid_nr(file_priv->client.pid));
 }
 
-static int
+int
 i915_gem_add_client(struct drm_i915_private *i915,
 		struct drm_i915_file_private *file_priv,
 		struct task_struct *task,
@@ -1600,7 +1600,7 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	return ret;
 }
 
-static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
+void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
 {
 	struct i915_drm_clients *clients = &file_priv->dev_priv->clients;
 	struct i915_drm_client *client = &file_priv->client;
-- 
2.20.1


* [Intel-gfx] [PATCH 4/5] drm/i915: Expose per-engine client busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
                   ` (2 preceding siblings ...)
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create Tvrtko Ursulin
@ 2019-12-16 12:07 ` Tvrtko Ursulin
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats Tvrtko Ursulin
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:07 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose per-client and per-engine busyness under the previously added sysfs
client root.

The new files are one per engine class and are located under the 'busy'
directory. Each contains the monotonically increasing, nanosecond-resolution
time the client's jobs spent executing on engines of that class.

This enables userspace to create a top-like tool for GPU utilization:

==========================================================================
intel-gpu-top -  935/ 935 MHz;    0% RC6; 14.73 Watts;     1097 irqs/s

      IMC reads:     1401 MiB/s
     IMC writes:        4 MiB/s

          ENGINE      BUSY                                 MI_SEMA MI_WAIT
     Render/3D/0   63.73% |███████████████████           |      3%      0%
       Blitter/0    9.53% |██▊                           |      6%      0%
         Video/0   39.32% |███████████▊                  |     16%      0%
         Video/1   15.62% |████▋                         |      0%      0%
  VideoEnhance/0    0.00% |                              |      0%      0%

  PID            NAME     RCS          BCS          VCS         VECS
 4084        gem_wsim |█████▌     ||█          ||           ||           |
 4086        gem_wsim |█▌         ||           ||███        ||           |
==========================================================================

v2: Use intel_context_engine_get_busy_time.
v3: New directory structure.
v4: Rebase.
v5: sysfs_attr_init.
v6: Small tidy in i915_gem_add_client.
v7: Rebase to be engine class based.
v8: Use rcu_read_lock instead of struct_mutext when iterating contexts.
    (Chris)

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h |  8 +++
 drivers/gpu/drm/i915/i915_gem.c | 94 +++++++++++++++++++++++++++++++--
 2 files changed, 98 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1740ce54cb48..e9cefd9b55b5 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -188,6 +188,12 @@ struct drm_i915_private;
 struct i915_mm_struct;
 struct i915_mmu_object;
 
+struct i915_engine_busy_attribute {
+	struct device_attribute attr;
+	struct drm_i915_file_private *file_priv;
+	unsigned int engine_class;
+};
+
 struct drm_i915_file_private {
 	struct drm_i915_private *dev_priv;
 
@@ -232,10 +238,12 @@ struct drm_i915_file_private {
 		char *name;
 
 		struct kobject *root;
+		struct kobject *busy_root;
 
 		struct {
 			struct device_attribute pid;
 			struct device_attribute name;
+			struct i915_engine_busy_attribute busy[MAX_ENGINE_CLASS];
 		} attr;
 	} client;
 };
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 59c160534838..2337c4d82ad4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1541,6 +1541,53 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%u", pid_nr(file_priv->client.pid));
 }
 
+struct busy_ctx {
+	unsigned int engine_class;
+	u64 total;
+};
+
+static int busy_add(int id, void *p, void *data)
+{
+	struct busy_ctx *bc = data;
+	struct i915_gem_context *ctx = p;
+	struct i915_gem_engines *engines = rcu_dereference(ctx->engines);
+	unsigned int engine_class = bc->engine_class;
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	uint64_t total = bc->total;
+
+	for_each_gem_engine(ce, engines, it) {
+		if (ce->engine->uabi_class == engine_class)
+			total += ktime_to_ns(intel_context_get_busy_time(ce));
+	}
+
+	bc->total = total;
+
+	return 0;
+}
+
+static ssize_t
+show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	struct drm_i915_file_private *file_priv = i915_attr->file_priv;
+	struct busy_ctx bc = { .engine_class = i915_attr->engine_class };
+
+	rcu_read_lock();
+	idr_for_each(&file_priv->context_idr, busy_add, &bc);
+	rcu_read_unlock();
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", bc.total);
+}
+
+static const char *uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "0",
+	[I915_ENGINE_CLASS_COPY] = "1",
+	[I915_ENGINE_CLASS_VIDEO] = "2",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "3",
+};
+
 int
 i915_gem_add_client(struct drm_i915_private *i915,
 		struct drm_i915_file_private *file_priv,
@@ -1550,8 +1597,8 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	struct i915_drm_client *client = &file_priv->client;
 	struct i915_drm_clients *clients = &i915->clients;
 	struct device_attribute *attr;
-	int ret = -ENOMEM;
-	char id[32];
+	int i, ret = -ENOMEM;
+	char idstr[32];
 
 	if (!clients->root)
 		return 0; /* intel_fbdev_init registers a client before sysfs */
@@ -1560,8 +1607,8 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	if (!client->name)
 		goto err_name;
 
-	snprintf(id, sizeof(id), "%u", serial);
-	client->root = kobject_create_and_add(id, clients->root);
+	snprintf(idstr, sizeof(idstr), "%u", serial);
+	client->root = kobject_create_and_add(idstr, clients->root);
 	if (!client->root)
 		goto err_client;
 
@@ -1585,11 +1632,43 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	if (ret)
 		goto err_attr_pid;
 
+	client->busy_root = kobject_create_and_add("busy", client->root);
+	if (!client->busy_root)
+		goto err_busy_root;
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+		struct i915_engine_busy_attribute *i915_attr =
+			&client->attr.busy[i];
+
+		i915_attr->file_priv = file_priv;
+		i915_attr->engine_class = i;
+
+		attr = &i915_attr->attr;
+
+		sysfs_attr_init(&attr->attr);
+
+		attr->attr.name = uabi_class_names[i];
+		attr->attr.mode = 0444;
+		attr->show = show_client_busy;
+
+		ret = sysfs_create_file(client->busy_root,
+					(struct attribute *)attr);
+		if (ret)
+			goto err_attr_busy;
+	}
+
 	client->id = serial;
 	client->pid = get_task_pid(task, PIDTYPE_PID);
 
 	return 0;
 
+err_attr_busy:
+	for (--i; i >= 0; i--)
+		sysfs_remove_file(client->busy_root,
+				  (struct attribute *)&client->attr.busy[i]);
+	kobject_put(client->busy_root);
+err_busy_root:
+	sysfs_remove_file(client->root, (struct attribute *)&client->attr.pid);
 err_attr_pid:
 	sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
 err_attr_name:
@@ -1604,10 +1683,17 @@ void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
 {
 	struct i915_drm_clients *clients = &file_priv->dev_priv->clients;
 	struct i915_drm_client *client = &file_priv->client;
+	unsigned int i;
 
 	if (!client->name)
 		return; /* intel_fbdev_init registers a client before sysfs */
 
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
+		sysfs_remove_file(client->busy_root,
+				  (struct attribute *)&client->attr.busy[i]);
+
+	kobject_put(client->busy_root);
+
 	sysfs_remove_file(client->root, (struct attribute *)&client->attr.pid);
 	sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
 	kobject_put(client->root);
-- 
2.20.1


* [Intel-gfx] [PATCH 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
                   ` (3 preceding siblings ...)
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 4/5] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
@ 2019-12-16 12:07 ` Tvrtko Ursulin
  2019-12-16 13:09 ` [Intel-gfx] [PATCH 0/5] Per client engine busyness Chris Wilson
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 12:07 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

By default we are not collecting any per-engine and per-context
statistics.

Add a new sysfs toggle to enable this facility:

$ echo 1 >/sys/class/drm/card0/clients/enable_stats
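
The current state can be read back from the same file:

$ cat /sys/class/drm/card0/clients/enable_stats
1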

v2: Rebase.
v3: sysfs_attr_init.
v4: Scheduler caps.
v5: for_each_uabi_engine

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h   |  5 ++
 drivers/gpu/drm/i915/i915_sysfs.c | 76 +++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e9cefd9b55b5..e795e6fea6d3 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1305,6 +1305,11 @@ struct drm_i915_private {
 		struct idr idr;
 
 		struct kobject *root;
+
+		struct {
+			bool enabled;
+			struct device_attribute attr;
+		} stats;
 	} clients;
 
 	struct i915_hdcp_comp_master *hdcp_master;
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 3ab50e29fddf..3443ff37b17a 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -554,9 +554,70 @@ static void i915_setup_error_capture(struct device *kdev) {}
 static void i915_teardown_error_capture(struct device *kdev) {}
 #endif
 
+static ssize_t
+show_client_stats(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_private *i915 =
+		container_of(attr, struct drm_i915_private, clients.stats.attr);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", i915->clients.stats.enabled);
+}
+
+static ssize_t
+store_client_stats(struct device *kdev, struct device_attribute *attr,
+		   const char *buf, size_t count)
+{
+	struct drm_i915_private *i915 =
+		container_of(attr, struct drm_i915_private, clients.stats.attr);
+	struct i915_drm_clients *clients = &i915->clients;
+	struct intel_engine_cs *engine;
+	bool disable = false;
+	bool enable = false;
+	bool val = false;
+	int ret;
+
+	 /* Use RCS as proxy for all engines. */
+	if (!(i915->caps.scheduler & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS))
+		return -EINVAL;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	/*
+	 * We use struct_mutex to allow one client at a time to toggle the state.
+	 */
+	ret = i915_mutex_lock_interruptible(&i915->drm);
+	if (ret)
+		return ret;
+
+	if (val && !clients->stats.enabled)
+		enable = true;
+	else if (!val && clients->stats.enabled)
+		disable = true;
+
+	if (!enable && !disable)
+		goto out;
+
+	for_each_uabi_engine(engine, i915) {
+		if (enable)
+			WARN_ON_ONCE(intel_enable_engine_stats(engine));
+		else if (disable)
+			intel_disable_engine_stats(engine);
+	}
+
+	clients->stats.enabled = val;
+
+out:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return count;
+}
+
 void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 {
 	struct device *kdev = dev_priv->drm.primary->kdev;
+	struct device_attribute *attr;
 	int ret;
 
 	dev_priv->clients.root =
@@ -564,6 +625,18 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	if (!dev_priv->clients.root)
 		DRM_ERROR("Per-client sysfs setup failed\n");
 
+	attr = &dev_priv->clients.stats.attr;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "enable_stats";
+	attr->attr.mode = 0664;
+	attr->show = show_client_stats;
+	attr->store = store_client_stats;
+
+	ret = sysfs_create_file(dev_priv->clients.root,
+				(struct attribute *)attr);
+	if (ret)
+		DRM_ERROR("Per-client sysfs setup failed! (%d)\n", ret);
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -625,6 +698,9 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
 
+	sysfs_remove_file(dev_priv->clients.root,
+			  (struct attribute *)&dev_priv->clients.stats.attr);
+
 	if (dev_priv->clients.root)
 		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1


* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context engine busyness
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context " Tvrtko Ursulin
@ 2019-12-16 12:40   ` Chris Wilson
  2019-12-16 13:09     ` Tvrtko Ursulin
  0 siblings, 1 reply; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 12:40 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:00)
> @@ -1389,6 +1415,9 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>                 write_desc(execlists,
>                            rq ? execlists_update_context(rq) : 0,
>                            n);
> +
> +               if (n == 0)
> +                       intel_context_stats_start(&rq->hw_context->stats);

Too early? (Think preemption requests that may not begin for a few
hundred ms.) Mark it as started on promotion instead (should be within a
few microseconds, if not ideally a few 10 ns)? Then you will also have
better symmetry in process_csb, suggesting that we can have a routine
that takes the current *execlists->active with fewer code changes.
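
Something along these lines perhaps (handwaving, untested), mirroring what
you already do for the completion path:

	/* in process_csb(), once pending[] has been promoted to active[] */
	rq = *execlists->active;
	if (rq)
		intel_context_stats_start(&rq->hw_context->stats);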
-Chris

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
@ 2019-12-16 12:51   ` Chris Wilson
  2019-12-16 18:34     ` Tvrtko Ursulin
  2019-12-16 12:53   ` Chris Wilson
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 12:51 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Expose a list of clients with open file handles in sysfs.
> 
> This will be a basis for a top-like utility showing per-client and per-
> engine GPU load.
> 
> Currently we only expose each client's pid and name under opaque numbered
> directories in /sys/class/drm/card0/clients/.
> 
> For instance:
> 
> /sys/class/drm/card0/clients/3/name: Xorg
> /sys/class/drm/card0/clients/3/pid: 5664
> 
> v2:
>  Chris Wilson:
>  * Enclose new members into dedicated structs.
>  * Protect against failed sysfs registration.
> 
> v3:
>  * sysfs_attr_init.
> 
> v4:
>  * Fix for internal clients.
> 
> v5:
>  * Use cyclic ida for client id. (Chris)

I think we are now in the age of xa_alloc_cyclic(). At least the
immediate benefit is that we don't have to worry about the ida locking.
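
Something like this (untested, field names illustrative):

	struct i915_drm_clients {
		struct xarray xarray;	/* xa_init_flags(&xa, XA_FLAGS_ALLOC) */
		u32 next_id;		/* cyclic allocation hint */

		struct kobject *root;
	};

	/* The xarray provides its own internal locking. */
	ret = xa_alloc_cyclic(&clients->xarray, &client->id, file_priv,
			      xa_limit_32b, &clients->next_id, GFP_KERNEL);
	if (ret < 0)
		goto err_alloc;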
-Chris

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2019-12-16 12:51   ` Chris Wilson
@ 2019-12-16 12:53   ` Chris Wilson
  2019-12-16 13:13     ` Tvrtko Ursulin
  2019-12-17 17:21     ` Tvrtko Ursulin
  2019-12-16 12:55   ` Chris Wilson
  2019-12-16 13:17   ` Chris Wilson
  3 siblings, 2 replies; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 12:53 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 0781b6326b8c..9fcbcb6d6f76 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
>         /** ban_score: Accumulated score of all ctx bans and fast hangs. */
>         atomic_t ban_score;
>         unsigned long hang_timestamp;
> +
> +       struct i915_drm_client {
> +               unsigned int id;
> +
> +               struct pid *pid;
> +               char *name;

Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?

> +
> +               struct kobject *root;
> +
> +               struct {
> +                       struct device_attribute pid;
> +                       struct device_attribute name;
> +               } attr;
> +       } client;
>  };

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
  2019-12-16 12:51   ` Chris Wilson
  2019-12-16 12:53   ` Chris Wilson
@ 2019-12-16 12:55   ` Chris Wilson
  2019-12-16 13:16     ` Tvrtko Ursulin
  2019-12-16 13:17   ` Chris Wilson
  3 siblings, 1 reply; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 12:55 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> +static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
> +{
> +       struct i915_drm_clients *clients = &file_priv->dev_priv->clients;
> +       struct i915_drm_client *client = &file_priv->client;
> +
> +       if (!client->name)
> +               return; /* intel_fbdev_init registers a client before sysfs */
> +
> +       sysfs_remove_file(client->root, (struct attribute *)&client->attr.pid);
> +       sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
> +       kobject_put(client->root);

Do we need to remove individual files if we unplug the root?
sysfs_remove_dir(client->root) ?
-Chris

* Re: [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create Tvrtko Ursulin
@ 2019-12-16 12:57   ` Chris Wilson
  0 siblings, 0 replies; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 12:57 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:02)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 46b4d1d643f8..cd4ba6486f35 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -2178,7 +2178,9 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
>  {
>         struct drm_i915_private *i915 = to_i915(dev);
>         struct drm_i915_gem_context_create_ext *args = data;
> +       struct drm_i915_file_private *file_priv = file->driver_priv;
>         struct create_ext ext_data;
> +       struct pid *pid;
>         int ret;
>  
>         if (!DRIVER_CAPS(i915)->has_logical_contexts)
> @@ -2191,14 +2193,30 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
>         if (ret)
>                 return ret;
>  
> -       ext_data.fpriv = file->driver_priv;
> +       ext_data.fpriv = file_priv;
> +       pid = get_task_pid(current, PIDTYPE_PID);
>         if (client_is_banned(ext_data.fpriv)) {
>                 DRM_DEBUG("client %s[%d] banned from creating ctx\n",
> -                         current->comm,
> -                         pid_nr(get_task_pid(current, PIDTYPE_PID)));
> +                         current->comm, pid_nr(pid));
> +               put_pid(pid);
>                 return -EIO;
>         }
>  
> +       /*
> +        * Borrow the context idr_lock to protect the client remove-add cycle.
> +        */
> +       if (mutex_lock_interruptible(&file_priv->context_idr_lock))

put_pid(pid); /* I'm helping! */

> +               return -EINTR;

> +       if (pid_nr(file_priv->client.pid) != pid_nr(pid)) {
> +               i915_gem_remove_client(file_priv);
> +               ret = i915_gem_add_client(i915, file_priv, current,
> +                                         file_priv->client.id);
> +       }
> +       mutex_unlock(&file_priv->context_idr_lock);
> +       put_pid(pid);
> +       if (ret)
> +               return ret;

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context engine busyness
  2019-12-16 12:40   ` Chris Wilson
@ 2019-12-16 13:09     ` Tvrtko Ursulin
  2020-01-30 18:05       ` Tvrtko Ursulin
  0 siblings, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 13:09 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 12:40, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:00)
>> @@ -1389,6 +1415,9 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>>                  write_desc(execlists,
>>                             rq ? execlists_update_context(rq) : 0,
>>                             n);
>> +
>> +               if (n == 0)
>> +                       intel_context_stats_start(&rq->hw_context->stats);
> 
> Too early? (Think preemption requests that may not begin for a few
> hundred ms.) Mark it as started on promotion instead (should be within a
> few microseconds, if not ideally a few 10 ns)? Then you will also have
> better symmetry in process_csb, suggesting that we can have a routine
> that takes the current *execlists->active with fewer code changes.

Good point, I didn't like the CSB latencies and completely missed the
preemption side of things. Symmetry will be much better in more than one
respect.

Regards,

Tvrtko


* Re: [Intel-gfx] [PATCH 0/5] Per client engine busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
                   ` (4 preceding siblings ...)
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats Tvrtko Ursulin
@ 2019-12-16 13:09 ` Chris Wilson
  2019-12-16 13:20   ` Tvrtko Ursulin
  2019-12-16 17:45 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
  2019-12-16 17:58 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  7 siblings, 1 reply; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 13:09 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:06:59)
> Implementation wise we add a a bunch of files in sysfs like:
> 
>         # cd /sys/class/drm/card0/clients/
>         # tree
>         .
>         ├── 7
>         │   ├── busy
>         │   │   ├── 0

Prefer '0' over rcs?

> I will post the corresponding patch to intel_gpu_top for reference as well.

The other requirement is that we need to at least prove the sysfs
interface exists in gt. perf_sysfs?

Quick list,
- check igt_spin_t responses (pretty much verbatim of perf_pmu.c; rough
  sketch below)
- check the client name is correct around fd passing
- check interactions with ctx->engines[]
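
For the first one, maybe something like this (rough and untested;
read_client_busy() is a hypothetical helper reading our client's busy/0
file from sysfs):

	static void busy_check(int i915)
	{
		igt_spin_t *spin;
		uint64_t t0, t1;

		spin = igt_spin_new(i915, .engine = I915_EXEC_RENDER);

		t0 = read_client_busy(i915, 0); /* hypothetical helper */
		usleep(500 * 1000);
		t1 = read_client_busy(i915, 0);

		/* busy time must have advanced while the spinner ran */
		igt_assert(t1 > t0);

		igt_spin_free(i915, spin);
	}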
-Chris

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:53   ` Chris Wilson
@ 2019-12-16 13:13     ` Tvrtko Ursulin
  2019-12-17 17:21     ` Tvrtko Ursulin
  1 sibling, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 13:13 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 12:53, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index 0781b6326b8c..9fcbcb6d6f76 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
>>          /** ban_score: Accumulated score of all ctx bans and fast hangs. */
>>          atomic_t ban_score;
>>          unsigned long hang_timestamp;
>> +
>> +       struct i915_drm_client {
>> +               unsigned int id;
>> +
>> +               struct pid *pid;
>> +               char *name;
> 
> Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?

I guess so, did not realize we already keep a reference.

Regards,

Tvrtko

>> +
>> +               struct kobject *root;
>> +
>> +               struct {
>> +                       struct device_attribute pid;
>> +                       struct device_attribute name;
>> +               } attr;
>> +       } client;
>>   };
> 

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:55   ` Chris Wilson
@ 2019-12-16 13:16     ` Tvrtko Ursulin
  0 siblings, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 13:16 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 12:55, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> +static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
>> +{
>> +       struct i915_drm_clients *clients = &file_priv->dev_priv->clients;
>> +       struct i915_drm_client *client = &file_priv->client;
>> +
>> +       if (!client->name)
>> +               return; /* intel_fbdev_init registers a client before sysfs */
>> +
>> +       sysfs_remove_file(client->root, (struct attribute *)&client->attr.pid);
>> +       sysfs_remove_file(client->root, (struct attribute *)&client->attr.name);
>> +       kobject_put(client->root);
> 
> Do we need to remove individual files if we unplug the root?
> sysfs_remove_dir(client->root) ?

Kerneldoc indeed suggests this should work. Will try.

Regards,

Tvrtko


* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
                     ` (2 preceding siblings ...)
  2019-12-16 12:55   ` Chris Wilson
@ 2019-12-16 13:17   ` Chris Wilson
  2019-12-16 13:28     ` Tvrtko Ursulin
  3 siblings, 1 reply; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 13:17 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Expose a list of clients with open file handles in sysfs.
> 
> This will be a basis for a top-like utility showing per-client and per-
> engine GPU load.
> 
> Currently we only expose each client's pid and name under opaque numbered
> directories in /sys/class/drm/card0/clients/.
> 
> For instance:
> 
> /sys/class/drm/card0/clients/3/name: Xorg
> /sys/class/drm/card0/clients/3/pid: 5664

Should we even bother having the name here? And just have a link to pid
instead? Contemplating even pidfd for ultramodern.
-Chris

* Re: [Intel-gfx] [PATCH 0/5] Per client engine busyness
  2019-12-16 13:09 ` [Intel-gfx] [PATCH 0/5] Per client engine busyness Chris Wilson
@ 2019-12-16 13:20   ` Tvrtko Ursulin
  0 siblings, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 13:20 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 13:09, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:06:59)
>> Implementation wise we add a a bunch of files in sysfs like:
>>
>>          # cd /sys/class/drm/card0/clients/
>>          # tree
>>          .
>>          ├── 7
>>          │   ├── busy
>>          │   │   ├── 0
> 
> Prefer '0' over rcs?

I think so, saves userspace keeping a map of names to class enum. Or 
maybe it doesn't, depends. Saves us having to come up with ABI names. 
But I think I could be easily convinced either way.

>> I will post the corresponding patch to intel_gpu_top for reference as well.
> 
> The other requirement is that we need to at least prove the sysfs
> interface exists in gt. perf_sysfs?
> 
> Quick list,
> - check igt_spin_t responses (pretty much verbatim of perf_pmu.c)
> - check the client name is correct around fd passing
> - check interactions with ctx->engines[]

Yep, I know it will be needed. But I haven't been bothering yet since the
series has been in hopeless mode for, what, two years or so. I forgot
to name it RFC this time round... :)

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 13:17   ` Chris Wilson
@ 2019-12-16 13:28     ` Tvrtko Ursulin
  2019-12-16 13:41       ` Chris Wilson
  0 siblings, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 13:28 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 13:17, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Expose a list of clients with open file handles in sysfs.
>>
>> This will be a basis for a top-like utility showing per-client and per-
>> engine GPU load.
>>
>> Currently we only expose each client's pid and name under opaque numbered
>> directories in /sys/class/drm/card0/clients/.
>>
>> For instance:
>>
>> /sys/class/drm/card0/clients/3/name: Xorg
>> /sys/class/drm/card0/clients/3/pid: 5664
> 
> Should we even bother having the name here? And just have a link to pid
> instead? Contemplating even pidfd for ultramodern.

I haven't looked at what symlink creation facilities sysfs would allow. 
But even then, I don't see how we could link to proc from sysfs.

I had a quick read on pidfd and don't see how it fits. What did you have 
in mind?

Regards,

Tvrtko



* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 13:28     ` Tvrtko Ursulin
@ 2019-12-16 13:41       ` Chris Wilson
  0 siblings, 0 replies; 32+ messages in thread
From: Chris Wilson @ 2019-12-16 13:41 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-16 13:28:18)
> 
> On 16/12/2019 13:17, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> Expose a list of clients with open file handles in sysfs.
> >>
> >> This will be a basis for a top-like utility showing per-client and per-
> >> engine GPU load.
> >>
> >> Currently we only expose each client's pid and name under opaque numbered
> >> directories in /sys/class/drm/card0/clients/.
> >>
> >> For instance:
> >>
> >> /sys/class/drm/card0/clients/3/name: Xorg
> >> /sys/class/drm/card0/clients/3/pid: 5664
> > 
> > Should we even bother having the name here? And just have a link to pid
> > instead? Contemplating even pidfd for ultramodern.
> 
> I haven't looked at what symlink creation facilities sysfs would allow. 
> But even then, I don't see how we could link to proc from sysfs.
> 
> I had a quick read on pidfd and don't see how it fits. What did you have 
> in mind?

Just thinking if we should do something like 
	pidfd = open(/.../clients/3/pid);

Ok, looking at pidfd_fops, it doesn't provide anything useful like being
able to safely acquire the pid->comm. Shame.
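
For reference, the fallback a tool has today is the pid file plus 
/proc, which is exactly the racy pid->comm lookup pidfd was supposed to 
improve on. A rough userspace sketch, illustrative only:

#include <stdio.h>

/* Read clients/<n>/pid from sysfs, then the comm from procfs. Racy if
 * the pid gets reused in between, which is the hole pidfd would plug. */
static int client_comm(int client, char *comm, int len)
{
	char path[64];
	FILE *f;
	int pid, ok;

	snprintf(path, sizeof(path),
		 "/sys/class/drm/card0/clients/%d/pid", client);
	f = fopen(path, "r");
	if (!f)
		return -1;
	ok = fscanf(f, "%d", &pid) == 1;
	fclose(f);
	if (!ok)
		return -1;

	snprintf(path, sizeof(path), "/proc/%d/comm", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	ok = fgets(comm, len, f) != NULL;
	fclose(f);

	return ok ? 0 : -1;
}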
-Chris

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  2019-12-16 13:09 ` [Intel-gfx] [PATCH 0/5] Per client engine busyness Chris Wilson
@ 2019-12-16 17:45 ` Patchwork
  2019-12-16 17:58 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
  7 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2019-12-16 17:45 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/70977/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
03aa1c71a788 drm/i915: Track per-context engine busyness
de4bbb7cc78d drm/i915: Expose list of clients in sysfs
-:67: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#67: FILE: drivers/gpu/drm/i915/i915_drv.h:1296:
+		spinlock_t idr_lock;

-:124: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#124: FILE: drivers/gpu/drm/i915/i915_gem.c:1546:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 2 checks, 239 lines checked
3caa68403667 drm/i915: Update client name on context create
-:74: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#74: FILE: drivers/gpu/drm/i915/i915_drv.h:1904:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 1 checks, 71 lines checked
db6e83f57574 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

-:98: CHECK:PREFER_KERNEL_TYPES: Prefer kernel type 'u64' over 'uint64_t'
#98: FILE: drivers/gpu/drm/i915/i915_gem.c:1557:
+	uint64_t total = bc->total;

-:125: WARNING:STATIC_CONST_CHAR_ARRAY: static const char * array should probably be static const char * const
#125: FILE: drivers/gpu/drm/i915/i915_gem.c:1584:
+static const char *uabi_class_names[] = {

total: 0 errors, 2 warnings, 1 checks, 157 lines checked
24ccc195eaa8 drm/i915: Add sysfs toggle to enable per-client engine stats
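
For reference, the fixes the above style checks ask for are mechanical; 
the corrected forms of the two flagged lines would look roughly like 
this (array contents illustrative):

/* Prefer kernel types and fully-const string arrays: */
u64 total = bc->total;

static const char * const uabi_class_names[] = {
	"render", "copy", "video", "video-enhance", /* illustrative */
};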

* [Intel-gfx] ✗ Fi.CI.BAT: failure for Per client engine busyness
  2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  2019-12-16 17:45 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
@ 2019-12-16 17:58 ` Patchwork
  7 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2019-12-16 17:58 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/70977/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_7574 -> Patchwork_15789
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_15789 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_15789, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_15789:

### IGT changes ###

#### Possible regressions ####

  * igt@runner@aborted:
    - fi-pnv-d510:        NOTRUN -> [FAIL][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-pnv-d510/igt@runner@aborted.html
    - fi-gdg-551:         NOTRUN -> [FAIL][2]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-gdg-551/igt@runner@aborted.html
    - fi-snb-2520m:       NOTRUN -> [FAIL][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-snb-2520m/igt@runner@aborted.html
    - fi-whl-u:           NOTRUN -> [FAIL][4]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-whl-u/igt@runner@aborted.html
    - fi-ivb-3770:        NOTRUN -> [FAIL][5]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-ivb-3770/igt@runner@aborted.html
    - fi-bxt-dsi:         NOTRUN -> [FAIL][6]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-bxt-dsi/igt@runner@aborted.html
    - fi-blb-e6850:       NOTRUN -> [FAIL][7]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-blb-e6850/igt@runner@aborted.html
    - fi-snb-2600:        NOTRUN -> [FAIL][8]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-snb-2600/igt@runner@aborted.html
    - fi-elk-e7500:       NOTRUN -> [FAIL][9]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-elk-e7500/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_15789 that come from known issues:

### IGT changes ###

#### Warnings ####

  * igt@runner@aborted:
    - fi-icl-guc:         [FAIL][10] ([fdo#110943] / [fdo#111093]) -> [FAIL][11] ([fdo#111093])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7574/fi-icl-guc/igt@runner@aborted.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-icl-guc/igt@runner@aborted.html
    - fi-byt-j1900:       [FAIL][12] ([i915#816]) -> [FAIL][13] ([i915#418])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7574/fi-byt-j1900/igt@runner@aborted.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-byt-j1900/igt@runner@aborted.html
    - fi-cml-s:           [FAIL][14] ([fdo#111764] / [i915#577]) -> [FAIL][15] ([i915#577])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7574/fi-cml-s/igt@runner@aborted.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/fi-cml-s/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#110943]: https://bugs.freedesktop.org/show_bug.cgi?id=110943
  [fdo#111093]: https://bugs.freedesktop.org/show_bug.cgi?id=111093
  [fdo#111764]: https://bugs.freedesktop.org/show_bug.cgi?id=111764
  [i915#418]: https://gitlab.freedesktop.org/drm/intel/issues/418
  [i915#577]: https://gitlab.freedesktop.org/drm/intel/issues/577
  [i915#584]: https://gitlab.freedesktop.org/drm/intel/issues/584
  [i915#816]: https://gitlab.freedesktop.org/drm/intel/issues/816


Participating hosts (53 -> 46)
------------------------------

  Additional (1): fi-hsw-4770 
  Missing    (8): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-bwr-2160 fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7574 -> Patchwork_15789

  CI-20190529: 20190529
  CI_DRM_7574: 950244ca586c6f0efe243bf8c505c01ea5e579fa @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5349: 048f58513d8b8ec6bb307a939f0ac959bc0f0e10 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_15789: 24ccc195eaa84e9dfc3a82a77e556a72129a503f @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

24ccc195eaa8 drm/i915: Add sysfs toggle to enable per-client engine stats
db6e83f57574 drm/i915: Expose per-engine client busyness
3caa68403667 drm/i915: Update client name on context create
de4bbb7cc78d drm/i915: Expose list of clients in sysfs
03aa1c71a788 drm/i915: Track per-context engine busyness

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15789/index.html

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:51   ` Chris Wilson
@ 2019-12-16 18:34     ` Tvrtko Ursulin
  0 siblings, 0 replies; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-16 18:34 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 12:51, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> Expose a list of clients with open file handles in sysfs.
>>
>> This will be a basis for a top-like utility showing per-client and per-
>> engine GPU load.
>>
>> Currently we only expose each client's pid and name under opaque numbered
>> directories in /sys/class/drm/card0/clients/.
>>
>> For instance:
>>
>> /sys/class/drm/card0/clients/3/name: Xorg
>> /sys/class/drm/card0/clients/3/pid: 5664
>>
>> v2:
>>   Chris Wilson:
>>   * Enclose new members into dedicated structs.
>>   * Protect against failed sysfs registration.
>>
>> v3:
>>   * sysfs_attr_init.
>>
>> v4:
>>   * Fix for internal clients.
>>
>> v5:
>>   * Use cyclic ida for client id. (Chris)
> 
> I think we are now in the age of xa_alloc_cyclic(). At least the
> immediate benefit is that we don't have to worry about the ida locking.

Also, spinlocks and GFP_KERNEL in the current patch do not mix well. 
Use with caution until I send the updated version out.
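
Roughly what I have in mind for the respin, as an untested sketch with 
non-final names:

#include <linux/xarray.h>

/* Untested sketch: cyclic client id allocation with an xarray. No
 * external lock is needed and GFP_KERNEL is fine. Returns 0 on
 * success, 1 if the id space wrapped, negative errno on failure. */
static int client_id_alloc(struct xarray *clients, void *client, u32 *id)
{
	static u32 next;

	return xa_alloc_cyclic(clients, id, client, xa_limit_32b,
			       &next, GFP_KERNEL);
}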

Regards,

Tvrtko


* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-16 12:53   ` Chris Wilson
  2019-12-16 13:13     ` Tvrtko Ursulin
@ 2019-12-17 17:21     ` Tvrtko Ursulin
  2019-12-17 17:26       ` Chris Wilson
  1 sibling, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2019-12-17 17:21 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 12:53, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index 0781b6326b8c..9fcbcb6d6f76 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
>>          /** ban_score: Accumulated score of all ctx bans and fast hangs. */
>>          atomic_t ban_score;
>>          unsigned long hang_timestamp;
>> +
>> +       struct i915_drm_client {
>> +               unsigned int id;
>> +
>> +               struct pid *pid;
>> +               char *name;
> 
> Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?

Or maybe leave it as is so I don't have to worry about ctx vs client 
lifetime. In other words, places where we access ctx->pid while the 
client is maybe long gone. I don't want to refcount clients, or maybe 
I do.. hmm.. keeping the GPU load of a client which exited and left 
work running visible?

Regards,

Tvrtko

>> +
>> +               struct kobject *root;
>> +
>> +               struct {
>> +                       struct device_attribute pid;
>> +                       struct device_attribute name;
>> +               } attr;
>> +       } client;
>>   };
> 

* Re: [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs
  2019-12-17 17:21     ` Tvrtko Ursulin
@ 2019-12-17 17:26       ` Chris Wilson
  0 siblings, 0 replies; 32+ messages in thread
From: Chris Wilson @ 2019-12-17 17:26 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-12-17 17:21:28)
> 
> On 16/12/2019 12:53, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> >> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >> index 0781b6326b8c..9fcbcb6d6f76 100644
> >> --- a/drivers/gpu/drm/i915/i915_drv.h
> >> +++ b/drivers/gpu/drm/i915/i915_drv.h
> >> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
> >>          /** ban_score: Accumulated score of all ctx bans and fast hangs. */
> >>          atomic_t ban_score;
> >>          unsigned long hang_timestamp;
> >> +
> >> +       struct i915_drm_client {
> >> +               unsigned int id;
> >> +
> >> +               struct pid *pid;
> >> +               char *name;
> > 
> > Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?
> 
> Or maybe leave it as is so I don't have to worry about ctx vs client 
> lifetime. In other words, places where we access ctx->pid while the 
> client is maybe long gone. I don't want to refcount clients, or maybe 
> I do.. hmm.. keeping the GPU load of a client which exited and left 
> work running visible?

Yeah. If we don't, and all the GPU time is being hogged by zombies, 
users of the interface will not be impressed that they can't identify 
those. Next up, kill(client_id, SIGKILL).
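
I.e. something like this, as a sketch only, with the kref member being 
the hypothetical addition to the struct from the patch:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/pid.h>
#include <linux/slab.h>

/* Sketch: refcounted client which can outlive its file handle, so the
 * GPU time of zombies stays attributable. */
struct i915_drm_client {
	struct kref kref;
	struct pid *pid;
	char *name;
};

static void client_free(struct kref *kref)
{
	struct i915_drm_client *client =
		container_of(kref, typeof(*client), kref);

	put_pid(client->pid);
	kfree(client->name);
	kfree(client);
}

static void client_put(struct i915_drm_client *client)
{
	kref_put(&client->kref, client_free);
}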
-Chris

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context engine busyness
  2019-12-16 13:09     ` Tvrtko Ursulin
@ 2020-01-30 18:05       ` Tvrtko Ursulin
  2020-01-30 21:42         ` Chris Wilson
  0 siblings, 1 reply; 32+ messages in thread
From: Tvrtko Ursulin @ 2020-01-30 18:05 UTC (permalink / raw)
  To: Chris Wilson, Intel-gfx


On 16/12/2019 13:09, Tvrtko Ursulin wrote:
> 
> On 16/12/2019 12:40, Chris Wilson wrote:
>> Quoting Tvrtko Ursulin (2019-12-16 12:07:00)
>>> @@ -1389,6 +1415,9 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>>>                  write_desc(execlists,
>>>                             rq ? execlists_update_context(rq) : 0,
>>>                             n);
>>> +
>>> +               if (n == 0)
>>> +                       intel_context_stats_start(&rq->hw_context->stats);
>>
>> Too early? (Think preemption requests that may not begin for a few
>> hundred ms.) Mark it as started on promotion instead (should be within a
>> few microseconds, if not ideally a few 10 ns)? Then you will also have
>> better symmetry in process_csb, suggesting that we can have a routine
>> that takes the current *execlists->active with fewer code changes.
> 
> Good point, I was disliking the csb latencies and completely missed the 
> preemption side of things. Symmetry will be much better in more than one 
> aspect.

The downside of having it in process_csb() is really bad accuracy with 
short batches like gem_exec_nop. :( process_csb() latency, I think. It 
gets a little better for this particular workload if I move the start 
point to submit_ports(), but that has the other problem with preemption.

After these woes I was hopeful the pphwsp context runtime could have an 
advantage here, but then I discovered it is occasionally not monotonic. 
At least with the spammy gem_exec_nop it occasionally but regularly 
jumps a tiny bit backward:

[ 8802.082980]  (new=7282101 old=7282063 d=38)
[ 8802.083007]  (new=7282139 old=7282101 d=38)
[ 8802.083051]  (new=7282250 old=7282139 d=111)
[ 8802.083077]  (new=7282214 old=7282250 d=-36)
[ 8802.083103]  (new=7282255 old=7282214 d=41)
[ 8802.083129]  (new=7282293 old=7282255 d=38)
[ 8802.083155]  (new=7282331 old=7282293 d=38)

Ouch. Time to sleep on it.
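
If it really is like that, the readout side will have to tolerate it. 
A rough monotonic clamp, purely illustrative and untested:

/* Never report the pphwsp runtime going backwards; 'last' would live
 * in the per-context stats. Wrap-safe comparison. */
static u32 clamped_runtime(u32 new, u32 *last)
{
	if ((s32)(new - *last) > 0)
		*last = new;

	return *last;
}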

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context engine busyness
  2020-01-30 18:05       ` Tvrtko Ursulin
@ 2020-01-30 21:42         ` Chris Wilson
  0 siblings, 0 replies; 32+ messages in thread
From: Chris Wilson @ 2020-01-30 21:42 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2020-01-30 18:05:03)
> 
> On 16/12/2019 13:09, Tvrtko Ursulin wrote:
> > 
> > On 16/12/2019 12:40, Chris Wilson wrote:
> >> Quoting Tvrtko Ursulin (2019-12-16 12:07:00)
> >>> @@ -1389,6 +1415,9 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
> >>>                  write_desc(execlists,
> >>>                             rq ? execlists_update_context(rq) : 0,
> >>>                             n);
> >>> +
> >>> +               if (n == 0)
> >>> +                       intel_context_stats_start(&rq->hw_context->stats);
> >>
> >> Too early? (Think preemption requests that may not begin for a few
> >> hundred ms.) Mark it as started on promotion instead (should be within a
> >> few microseconds, if not ideally a few 10 ns)? Then you will also have
> >> better symmetry in process_csb, suggesting that we can have a routine
> >> that takes the current *execlists->active with fewer code changes.
> > 
> > Good point, I was disliking the csb latencies and completely missed the 
> > preemption side of things. Symmetry will be much better in more than one 
> > aspect.
> 
> The downside of having it in process_csb() is really bad accuracy 
> with short batches like gem_exec_nop. :( process_csb() latency, I 
> think.

Scary. I hope we can get some insight and kill some latency. That 
usually ends in looking at CPU scheduler traces in dismay.

NVMe touts "interrupt steering" as crucial to maintaining caches at 
high packet frequencies.

We may also see some gains from staring at profilers, but off the top 
of my head the worst latency is due to engine->active.lock contention 
and the associated irq-off periods.

> It gets a little better for this particular workload if I move the 
> start point to submit_ports(), but that has the other problem with 
> preemption.
> 
> After these woes I was hopeful the pphwsp context runtime could have 
> an advantage here, but then I discovered it is occasionally not 
> monotonic. At least with the spammy gem_exec_nop it occasionally but 
> regularly jumps a tiny bit backward:
> 
> [ 8802.082980]  (new=7282101 old=7282063 d=38)
> [ 8802.083007]  (new=7282139 old=7282101 d=38)
> [ 8802.083051]  (new=7282250 old=7282139 d=111)
> [ 8802.083077]  (new=7282214 old=7282250 d=-36)
> [ 8802.083103]  (new=7282255 old=7282214 d=41)
> [ 8802.083129]  (new=7282293 old=7282255 d=38)
> [ 8802.083155]  (new=7282331 old=7282293 d=38)
> 
> Ouch. Time to sleep on it.

Also scary. How, how, how???
-Chris

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-07-12 12:17 [PATCH 0/8] " Tvrtko Ursulin
@ 2021-07-12 12:35 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2021-07-12 12:35 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/92435/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
cd1de51f55b2 drm/i915: Explicitly track DRM clients
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 287 lines checked
f7063d21a85d drm/i915: Update client name on context create
8563c5aeca1a drm/i915: Make GEM contexts track DRM clients
db56ccda834e drm/i915: Track runtime spent in closed and unreachable GEM contexts
726f6083cc01 drm/i915: Track all user contexts per client
76b7c1579fce drm/i915: Track context current active time
-:139: WARNING:LINE_SPACING: Missing a blank line after declarations
#139: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:126:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
e43d1b5db399 drm/i915: Expose client engine utilisation via fdinfo
-:10: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 874442541133 ("drm/amdgpu: Add show_fdinfo() interface")'
#10: 
874442541133 ("drm/amdgpu: Add show_fdinfo() interface"), using the

total: 1 errors, 0 warnings, 0 checks, 101 lines checked
1fe5f6f4bff5 drm: Document fdinfo format specification
-:32: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#32: 
new file mode 100644

-:37: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#37: FILE: Documentation/gpu/drm-usage-stats.rst:1:
+.. _drm-client-usage-stats:

total: 0 errors, 2 warnings, 0 checks, 136 lines checked
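
For context, the fdinfo interface the last two patches add and document 
emits simple key-value pairs under /proc/<pid>/fdinfo/<drm-fd>. An 
illustrative excerpt with made up values:

	drm-driver:	i915
	drm-client-id:	7
	drm-engine-render:	25662044495 ns
	drm-engine-copy:	0 ns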


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-05-20 15:12 [RFC 0/7] " Tvrtko Ursulin
@ 2021-05-20 15:55 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2021-05-20 15:55 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/90375/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
1455dee675ac drm/i915: Explicitly track DRM clients
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 287 lines checked
bfb2b185e63f drm/i915: Update client name on context create
8b3d87efebae drm/i915: Make GEM contexts track DRM clients
bc02e5505f79 drm/i915: Track runtime spent in closed and unreachable GEM contexts
f7fcc7b31e1a drm/i915: Track all user contexts per client
2b0b1913bd6c drm/i915: Track context current active time
-:136: WARNING:LINE_SPACING: Missing a blank line after declarations
#136: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:125:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
d5123f923671 drm/i915: Expose client engine utilisation via fdinfo
-:10: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit 874442541133 ("drm/amdgpu: Add show_fdinfo() interface")'
#10: 
874442541133 ("drm/amdgpu: Add show_fdinfo() interface"), using the

-:31: WARNING:TYPO_SPELLING: 'writting' may be misspelled - perhaps 'writing'?
#31: 
in order to enable writting of generic top-like tools.
                   ^^^^^^^^

-:127: WARNING:PREFER_SEQ_PUTS: Prefer seq_puts to seq_printf
#127: FILE: drivers/gpu/drm/i915/i915_drm_client.c:228:
+	seq_printf(m, "drm-driver:\ti915\n");

total: 1 errors, 2 warnings, 0 checks, 96 lines checked


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2021-05-13 10:59 [PATCH 0/7] " Tvrtko Ursulin
@ 2021-05-13 11:28 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2021-05-13 11:28 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/90128/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
289c10899f79 drm/i915: Expose list of clients in sysfs
-:89: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#89: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 402 lines checked
d83c2db8842f drm/i915: Update client name on context create
71fdfda22feb drm/i915: Make GEM contexts track DRM clients
c8194e95eb88 drm/i915: Track runtime spent in closed and unreachable GEM contexts
fc8d2f8f24bf drm/i915: Track all user contexts per client
9ac58b591817 drm/i915: Track context current active time
-:138: WARNING:LINE_SPACING: Missing a blank line after declarations
#138: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:125:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 296 lines checked
b83b94d7ebd7 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 152 lines checked


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-09-14 13:12 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
@ 2020-09-14 17:47 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2020-09-14 17:47 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/81652/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
ac3c07ab42f4 drm/i915: Expose list of clients in sysfs
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

-:274: WARNING:SPDX_LICENSE_TAG: Improper SPDX comment style for 'drivers/gpu/drm/i915/i915_drm_client.h', please use '/*' instead
#274: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

-:274: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#274: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

total: 0 errors, 3 warnings, 0 checks, 368 lines checked
5e257bb526a4 drm/i915: Update client name on context create
-:197: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#197: FILE: drivers/gpu/drm/i915/i915_drm_client.c:237:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 201 lines checked
02a3cfd8e1dd drm/i915: Make GEM contexts track DRM clients
25668d5d310b drm/i915: Track runtime spent in unreachable intel_contexts
556043f2cc84 drm/i915: Track runtime spent in closed GEM contexts
71e7b25bc497 drm/i915: Track all user contexts per client
9cd6cdc6cf88 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
b58bb1362c49 drm/i915: Track context current active time
-:71: CHECK:LINE_SPACING: Please don't use multiple blank lines
#71: FILE: drivers/gpu/drm/i915/gt/intel_context.c:504:
+
+

-:134: WARNING:LINE_SPACING: Missing a blank line after declarations
#134: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:96:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 1 checks, 248 lines checked
2905d1efe3cd drm/i915: Prefer software tracked context busyness


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-09-04 12:59 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
@ 2020-09-04 14:05 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2020-09-04 14:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/81336/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
0a3f079e9a9b drm/i915: Expose list of clients in sysfs
-:84: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#84: 
new file mode 100644

-:89: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#89: FILE: drivers/gpu/drm/i915/i915_drm_client.c:1:
+/*

-:90: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#90: FILE: drivers/gpu/drm/i915/i915_drm_client.c:2:
+ * SPDX-License-Identifier: MIT

-:275: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#275: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+/*

-:276: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#276: FILE: drivers/gpu/drm/i915/i915_drm_client.h:2:
+ * SPDX-License-Identifier: MIT

total: 0 errors, 5 warnings, 0 checks, 370 lines checked
83e61dc5c0ff drm/i915: Update client name on context create
-:197: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#197: FILE: drivers/gpu/drm/i915/i915_drm_client.c:238:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 198 lines checked
c3bc3f4edc15 drm/i915: Make GEM contexts track DRM clients
7e62f1a3be14 drm/i915: Track runtime spent in unreachable intel_contexts
99d0c612c5f9 drm/i915: Track runtime spent in closed GEM contexts
7375c4cd47bf drm/i915: Track all user contexts per client
817eec4963d2 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
9552c8ab393c drm/i915: Track context current active time
-:71: CHECK:LINE_SPACING: Please don't use multiple blank lines
#71: FILE: drivers/gpu/drm/i915/gt/intel_context.c:504:
+
+

-:134: WARNING:LINE_SPACING: Missing a blank line after declarations
#134: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:96:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 1 checks, 248 lines checked
bf6729db49be drm/i915: Prefer software tracked context busyness


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness
  2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
@ 2020-04-15 11:05 ` Patchwork
  0 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2020-04-15 11:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness
URL   : https://patchwork.freedesktop.org/series/75967/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
027b0636d391 drm/i915: Expose list of clients in sysfs
-:78: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#78: 
new file mode 100644

-:268: WARNING:SPDX_LICENSE_TAG: Improper SPDX comment style for 'drivers/gpu/drm/i915/i915_drm_client.h', please use '/*' instead
#268: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

-:268: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#268: FILE: drivers/gpu/drm/i915/i915_drm_client.h:1:
+// SPDX-License-Identifier: MIT

total: 0 errors, 3 warnings, 0 checks, 367 lines checked
a763b56f5eed drm/i915: Update client name on context create
-:186: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#186: FILE: drivers/gpu/drm/i915/i915_drm_client.c:237:
+	if (!name) {
+		drm_notice(&i915->drm,

total: 0 errors, 1 warnings, 0 checks, 188 lines checked
7aaaba098c18 drm/i915: Make GEM contexts track DRM clients
0cd24e79ea05 drm/i915: Track runtime spent in unreachable intel_contexts
7d61e2172ad1 drm/i915: Track runtime spent in closed GEM contexts
80f8fe2f733c drm/i915: Track all user contexts per client
96b05255f5c0 drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

total: 0 errors, 1 warnings, 0 checks, 164 lines checked
a89a065b63bf drm/i915: Track context current active time
-:138: WARNING:LINE_SPACING: Missing a blank line after declarations
#138: FILE: drivers/gpu/drm/i915/gt/intel_context_types.h:87:
+			u32 last;
+			I915_SELFTEST_DECLARE(u32 num_underflow);

total: 0 errors, 1 warnings, 0 checks, 257 lines checked
e9b16b5095f8 drm/i915: Prefer software tracked context busyness

end of thread (newest: 2021-07-12 12:36 UTC)

Thread overview: 32+ messages
2019-12-16 12:06 [Intel-gfx] [PATCH 0/5] Per client engine busyness Tvrtko Ursulin
2019-12-16 12:07 ` [Intel-gfx] [PATCH 1/5] drm/i915: Track per-context " Tvrtko Ursulin
2019-12-16 12:40   ` Chris Wilson
2019-12-16 13:09     ` Tvrtko Ursulin
2020-01-30 18:05       ` Tvrtko Ursulin
2020-01-30 21:42         ` Chris Wilson
2019-12-16 12:07 ` [Intel-gfx] [PATCH 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2019-12-16 12:51   ` Chris Wilson
2019-12-16 18:34     ` Tvrtko Ursulin
2019-12-16 12:53   ` Chris Wilson
2019-12-16 13:13     ` Tvrtko Ursulin
2019-12-17 17:21     ` Tvrtko Ursulin
2019-12-17 17:26       ` Chris Wilson
2019-12-16 12:55   ` Chris Wilson
2019-12-16 13:16     ` Tvrtko Ursulin
2019-12-16 13:17   ` Chris Wilson
2019-12-16 13:28     ` Tvrtko Ursulin
2019-12-16 13:41       ` Chris Wilson
2019-12-16 12:07 ` [Intel-gfx] [PATCH 3/5] drm/i915: Update client name on context create Tvrtko Ursulin
2019-12-16 12:57   ` Chris Wilson
2019-12-16 12:07 ` [Intel-gfx] [PATCH 4/5] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2019-12-16 12:07 ` [Intel-gfx] [PATCH 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats Tvrtko Ursulin
2019-12-16 13:09 ` [Intel-gfx] [PATCH 0/5] Per client engine busyness Chris Wilson
2019-12-16 13:20   ` Tvrtko Ursulin
2019-12-16 17:45 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2019-12-16 17:58 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2020-04-15 10:11 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-04-15 11:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-09-04 12:59 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-04 14:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-09-14 13:12 [Intel-gfx] [PATCH 0/9] " Tvrtko Ursulin
2020-09-14 17:47 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-05-13 10:59 [PATCH 0/7] " Tvrtko Ursulin
2021-05-13 11:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-05-20 15:12 [RFC 0/7] " Tvrtko Ursulin
2021-05-20 15:55 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-07-12 12:17 [PATCH 0/8] " Tvrtko Ursulin
2021-07-12 12:35 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
