* [RFC 0/5] Per client engine busyness (all aboard the sysfs train!)
@ 2019-10-25 14:21 ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

It has been quite some time since I last posted this RFC, but recently there
has been some new interest, this time from OpenCL and related customers, so I
decided to give it a quick respin and test the waters.

This time round it has been rather hastily rebased, since upstream has changed
quite a lot, and I have little confidence it is technically correct. But it is
enough to illustrate what this feature could provide:

In short, it enables a "top-like" display for GPU tasks. In screenshot form:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intel-gpu-top -  948/ 999 MHz;    0% RC6;  3.65 Watts;     2165 irqs/s

      IMC reads:     5015 MiB/s
     IMC writes:      143 MiB/s

          ENGINE      BUSY                                                    MI_SEMA MI_WAIT
     Render/3D/0   56.60% |███████████████████████████▋                     |      0%      0%
       Blitter/0   95.65% |██████████████████████████████████████████████▊  |      0%      0%
         Video/0   40.92% |████████████████████                             |      0%      0%
  VideoEnhance/0    0.00% |                                                 |      0%      0%

  PID            NAME       RCS               BCS               VCS               VECS
 5347        gem_wsim |███████▍        ||███████████████▏||██████▌         ||                |
 4929            Xorg |▎               ||                ||                ||                |
 5305        glxgears |                ||                ||                ||                |
 5303        glxgears |                ||                ||                ||                |
 5024           xfwm4 |                ||                ||                ||                |
 4929            Xorg |                ||                ||                ||                |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Implementation-wise, we would get a bunch of per-DRM-client, per-engine-class
files in sysfs, like:

	# cd /sys/class/drm/card0/clients/
	# tree
	.
	├── 7
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 8
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	├── 9
	│   ├── busy
	│   │   ├── 0
	│   │   ├── 1
	│   │   ├── 2
	│   │   └── 3
	│   ├── name
	│   └── pid
	└── enable_stats

I will post the corresponding patch to intel_gpu_top for reference as well.
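
For reference, this is roughly how such a tool would consume the files — a
minimal userspace sketch of my own (the client id "7", engine class 0 and the
one second sampling period are all just illustrative assumptions), deriving a
utilisation percentage from two reads of a busy file:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t read_busy_ns(const char *path)
{
	FILE *f = fopen(path, "r");
	uint64_t val = 0;

	if (f) {
		if (fscanf(f, "%" SCNu64, &val) != 1)
			val = 0;
		fclose(f);
	}

	return val;
}

int main(void)
{
	/* Client "7" as in the tree above; purely illustrative. */
	const char *path = "/sys/class/drm/card0/clients/7/busy/0";
	uint64_t t0 = read_busy_ns(path);

	sleep(1);

	/* ns of GPU time accrued in ~1s, scaled to a percentage. */
	printf("Render/3D/0: %.2f%%\n",
	       (double)(read_busy_ns(path) - t0) / 1e7);

	return 0;
}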

Tvrtko Ursulin (5):
  drm/i915: Track per-context engine busyness
  drm/i915: Expose list of clients in sysfs
  drm/i915: Update client name on context create
  drm/i915: Expose per-engine client busyness
  drm/i915: Add sysfs toggle to enable per-client engine stats

 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  17 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |  20 ++
 drivers/gpu/drm/i915/gt/intel_context.h       |   9 +
 drivers/gpu/drm/i915/gt/intel_context_types.h |   9 +
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  16 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  66 +++++-
 drivers/gpu/drm/i915/i915_drv.h               |  38 +++
 drivers/gpu/drm/i915/i915_gem.c               | 218 +++++++++++++++++-
 drivers/gpu/drm/i915/i915_sysfs.c             |  81 +++++++
 9 files changed, 451 insertions(+), 23 deletions(-)

-- 
2.20.1


* [RFC 1/5] drm/i915: Track per-context engine busyness
@ 2019-10-25 14:21   ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some customers want to know how much GPU time their clients are using, in
order to make dynamic load balancing decisions.

With the hooks already in place to track overall engine busyness, we can
extend them slightly to split that time between contexts.
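
In outline, the accounting below boils down to the following simplified,
standalone model (illustrative only; the real code uses ktime and wraps the
fields in a seqlock):

#include <stdbool.h>
#include <stdint.h>

struct ctx_stats {
	bool active;
	uint64_t start;	/* ns timestamp of last schedule-in */
	uint64_t total;	/* accumulated busy ns */
};

void ctx_sched_in(struct ctx_stats *s, uint64_t now)
{
	s->start = now;
	s->active = true;
}

void ctx_sched_out(struct ctx_stats *s, uint64_t now)
{
	if (s->active) {
		s->total += now - s->start;
		s->active = false;
	}
}

/* Readers fold in the in-progress slice of a still-running context. */
uint64_t ctx_busy_ns(const struct ctx_stats *s, uint64_t now)
{
	return s->total + (s->active ? now - s->start : 0);
}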

v2: Fix accounting for tail updates.
v3: Rebase.
v4: Mark currently running contexts as active on stats enable.
v5: Include some headers to fix the build.
v6: Added fine grained lock.
v7: Convert to seqlock. (Chris Wilson)
v8: Rebase and tidy with helpers.
v9: Rebase.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c       | 20 ++++++
 drivers/gpu/drm/i915/gt/intel_context.h       |  9 +++
 drivers/gpu/drm/i915/gt/intel_context_types.h |  9 +++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     | 16 ++++-
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 66 ++++++++++++++++---
 5 files changed, 108 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index ee9d2bcd2c13..3d68720df512 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -248,6 +248,7 @@ intel_context_init(struct intel_context *ce,
 	INIT_LIST_HEAD(&ce->signals);
 
 	mutex_init(&ce->pin_mutex);
+	seqlock_init(&ce->stats.lock);
 
 	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire);
@@ -348,6 +349,25 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
 	return rq;
 }
 
+ktime_t intel_context_get_busy_time(struct intel_context *ce)
+{
+	unsigned int seq;
+	ktime_t total;
+
+	do {
+		seq = read_seqbegin(&ce->stats.lock);
+
+		total = ce->stats.total;
+
+		if (ce->stats.active)
+			total = ktime_add(total,
+					  ktime_sub(ktime_get(),
+						    ce->stats.start));
+	} while (read_seqretry(&ce->stats.lock, seq));
+
+	return total;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 68b3d317d959..c7ab8efa3573 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -153,4 +153,13 @@ static inline struct intel_ring *__intel_context_ring_size(u64 sz)
 	return u64_to_ptr(struct intel_ring, sz);
 }
 
+static inline void
+__intel_context_stats_start(struct intel_context_stats *stats, ktime_t now)
+{
+	stats->start = now;
+	stats->active = true;
+}
+
+ktime_t intel_context_get_busy_time(struct intel_context *ce);
+
 #endif /* __INTEL_CONTEXT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 6959b05ae5f8..b3384e01d033 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -11,6 +11,7 @@
 #include <linux/list.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
+#include <linux/seqlock.h>
 
 #include "i915_active_types.h"
 #include "i915_utils.h"
@@ -75,6 +76,14 @@ struct intel_context {
 
 	/** sseu: Control eu/slice partitioning */
 	struct intel_sseu sseu;
+
+	/** stats: Context GPU engine busyness tracking. */
+	struct intel_context_stats {
+		seqlock_t lock;
+		bool active;
+		ktime_t start;
+		ktime_t total;
+	} stats;
 };
 
 #endif /* __INTEL_CONTEXT_TYPES__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 9cc1ea6519ec..6792ec01f3f2 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1568,8 +1568,20 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 
 		engine->stats.enabled_at = ktime_get();
 
-		/* XXX submission method oblivious? */
-		for (port = execlists->active; (rq = *port); port++)
+		/*
+		 * Mark currently running context as active.
+		 * XXX submission method oblivious?
+		 */
+
+		rq = NULL;
+		port = execlists->active;
+		if (port)
+			rq = *port;
+		if (rq)
+			__intel_context_stats_start(&rq->hw_context->stats,
+						    engine->stats.enabled_at);
+
+		for (; (rq = *port); port++)
 			engine->stats.active++;
 
 		for (port = execlists->pending; (rq = *port); port++) {
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 523de1fd4452..2305d7a7ac68 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -944,26 +944,60 @@ execlists_context_status_change(struct i915_request *rq, unsigned long status)
 				   status, rq);
 }
 
-static void intel_engine_context_in(struct intel_engine_cs *engine)
+static void
+intel_context_stats_start(struct intel_context_stats *stats, ktime_t now)
+{
+	write_seqlock(&stats->lock);
+	__intel_context_stats_start(stats, now);
+	write_sequnlock(&stats->lock);
+}
+
+static void
+intel_context_stats_stop(struct intel_context_stats *stats, ktime_t now)
+{
+	write_seqlock(&stats->lock);
+	if (stats->active) {
+		stats->total = ktime_add(stats->total,
+					 ktime_sub(now, stats->start));
+		stats->active = false;
+	}
+	write_sequnlock(&stats->lock);
+}
+
+static void intel_context_in(struct intel_context *ce, bool submit)
 {
+	struct intel_engine_cs *engine = ce->engine;
 	unsigned long flags;
+	ktime_t now;
 
 	if (READ_ONCE(engine->stats.enabled) == 0)
 		return;
 
 	write_seqlock_irqsave(&engine->stats.lock, flags);
 
+	if (submit) {
+		now = ktime_get();
+		intel_context_stats_start(&ce->stats, now);
+	} else {
+		now = 0;
+	}
+
 	if (engine->stats.enabled > 0) {
-		if (engine->stats.active++ == 0)
-			engine->stats.start = ktime_get();
+		if (engine->stats.active++ == 0) {
+			if (!now)
+				now = ktime_get();
+			engine->stats.start = now;
+		}
+
 		GEM_BUG_ON(engine->stats.active == 0);
 	}
 
 	write_sequnlock_irqrestore(&engine->stats.lock, flags);
 }
 
-static void intel_engine_context_out(struct intel_engine_cs *engine)
+static void intel_context_out(struct intel_context *ce)
 {
+	struct intel_engine_cs *engine = ce->engine;
 	unsigned long flags;
 
 	if (READ_ONCE(engine->stats.enabled) == 0)
@@ -972,14 +1006,25 @@ static void intel_engine_context_out(struct intel_engine_cs *engine)
 	write_seqlock_irqsave(&engine->stats.lock, flags);
 
 	if (engine->stats.enabled > 0) {
+		ktime_t now = ktime_get();
+		struct i915_request *next;
 		ktime_t last;
 
+		intel_context_stats_stop(&ce->stats, now);
+
+		next = NULL;
+		if (engine->execlists.active)
+			next = *engine->execlists.active;
+		if (next)
+			intel_context_stats_start(&next->hw_context->stats,
+						  now);
+
 		if (engine->stats.active && --engine->stats.active == 0) {
 			/*
 			 * Decrement the active context count and in case GPU
 			 * is now idle add up to the running total.
 			 */
-			last = ktime_sub(ktime_get(), engine->stats.start);
+			last = ktime_sub(now, engine->stats.start);
 
 			engine->stats.total = ktime_add(engine->stats.total,
 							last);
@@ -989,7 +1034,7 @@ static void intel_engine_context_out(struct intel_engine_cs *engine)
 			 * the first event in which case we account from the
 			 * time stats gathering was turned on.
 			 */
-			last = ktime_sub(ktime_get(), engine->stats.enabled_at);
+			last = ktime_sub(now, engine->stats.enabled_at);
 
 			engine->stats.total = ktime_add(engine->stats.total,
 							last);
@@ -1000,7 +1045,7 @@ static void intel_engine_context_out(struct intel_engine_cs *engine)
 }
 
 static inline struct intel_engine_cs *
-__execlists_schedule_in(struct i915_request *rq)
+__execlists_schedule_in(struct i915_request *rq, int idx)
 {
 	struct intel_engine_cs * const engine = rq->engine;
 	struct intel_context * const ce = rq->hw_context;
@@ -1021,7 +1066,7 @@ __execlists_schedule_in(struct i915_request *rq)
 
 	intel_gt_pm_get(engine->gt);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
-	intel_engine_context_in(engine);
+	intel_context_in(rq->hw_context, idx == 0);
 
 	return engine;
 }
@@ -1038,7 +1083,8 @@ execlists_schedule_in(struct i915_request *rq, int idx)
 	old = READ_ONCE(ce->inflight);
 	do {
 		if (!old) {
-			WRITE_ONCE(ce->inflight, __execlists_schedule_in(rq));
+			WRITE_ONCE(ce->inflight,
+				   __execlists_schedule_in(rq, idx));
 			break;
 		}
 	} while (!try_cmpxchg(&ce->inflight, &old, ptr_inc(old)));
@@ -1114,7 +1160,7 @@ __execlists_schedule_out(struct i915_request *rq,
 {
 	struct intel_context * const ce = rq->hw_context;
 
-	intel_engine_context_out(engine);
+	intel_context_out(rq->hw_context);
 	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
 	intel_gt_pm_put(engine->gt);
 
-- 
2.20.1


* [RFC 2/5] drm/i915: Expose list of clients in sysfs
@ 2019-10-25 14:21   ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose a list of clients with open file handles in sysfs.

This will be a basis for a top-like utility showing per-client and per-
engine GPU load.

Currently we only expose each client's pid and name under opaque numbered
directories in /sys/class/drm/card0/clients/.

For instance:

/sys/class/drm/card0/clients/3/name: Xorg
/sys/class/drm/card0/clients/3/pid: 5664
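
Or, equivalently, straight from a shell (same hypothetical client id and
values as above):

	# cat /sys/class/drm/card0/clients/3/name
	Xorg
	# cat /sys/class/drm/card0/clients/3/pid
	5664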

v2:
 Chris Wilson:
 * Enclose new members into dedicated structs.
 * Protect against failed sysfs registration.

v3:
 * sysfs_attr_init.

v4:
 * Fix for internal clients.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h   |  19 +++++
 drivers/gpu/drm/i915/i915_gem.c   | 124 ++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_sysfs.c |   8 ++
 3 files changed, 143 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 16e58a74fa6f..4dc8cadf56eb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -222,6 +222,20 @@ struct drm_i915_file_private {
 	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
 	atomic_t ban_score;
 	unsigned long hang_timestamp;
+
+	struct i915_drm_client {
+		unsigned int id;
+
+		pid_t pid;
+		char *name;
+
+		struct kobject *root;
+
+		struct {
+			struct device_attribute pid;
+			struct device_attribute name;
+		} attr;
+	} client;
 };
 
 /* Interface history:
@@ -1372,6 +1386,11 @@ struct drm_i915_private {
 
 	struct i915_pmu pmu;
 
+	struct i915_drm_clients {
+		struct kobject *root;
+		atomic_t serial;
+	} clients;
+
 	struct i915_hdcp_comp_master *hdcp_master;
 	bool hdcp_comp_added;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 319e96d833fa..d8d352efb9ef 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1492,6 +1492,99 @@ int i915_gem_freeze_late(struct drm_i915_private *i915)
 	return 0;
 }
 
+static ssize_t
+show_client_name(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_file_private *file_priv =
+		container_of(attr, struct drm_i915_file_private,
+			     client.attr.name);
+
+	return snprintf(buf, PAGE_SIZE, "%s", file_priv->client.name);
+}
+
+static ssize_t
+show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_file_private *file_priv =
+		container_of(attr, struct drm_i915_file_private,
+			     client.attr.pid);
+
+	return snprintf(buf, PAGE_SIZE, "%u", file_priv->client.pid);
+}
+
+static int
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,
+		struct task_struct *task,
+		unsigned int serial)
+{
+	int ret = -ENOMEM;
+	struct device_attribute *attr;
+	char id[32];
+
+	if (!i915->clients.root)
+		return 0; /* intel_fbdev_init registers a client before sysfs */
+
+	file_priv->client.name = kstrdup(task->comm, GFP_KERNEL);
+	if (!file_priv->client.name)
+		goto err_name;
+
+	snprintf(id, sizeof(id), "%u", serial);
+	file_priv->client.root = kobject_create_and_add(id,
+							i915->clients.root);
+	if (!file_priv->client.root)
+		goto err_client;
+
+	attr = &file_priv->client.attr.name;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "name";
+	attr->attr.mode = 0444;
+	attr->show = show_client_name;
+
+	ret = sysfs_create_file(file_priv->client.root,
+				(struct attribute *)attr);
+	if (ret)
+		goto err_attr_name;
+
+	attr = &file_priv->client.attr.pid;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "pid";
+	attr->attr.mode = 0444;
+	attr->show = show_client_pid;
+
+	ret = sysfs_create_file(file_priv->client.root,
+				(struct attribute *)attr);
+	if (ret)
+		goto err_attr_pid;
+
+	file_priv->client.pid = pid_nr(get_task_pid(task, PIDTYPE_PID));
+
+	return 0;
+
+err_attr_pid:
+	sysfs_remove_file(file_priv->client.root,
+			  (struct attribute *)&file_priv->client.attr.name);
+err_attr_name:
+	kobject_put(file_priv->client.root);
+err_client:
+	kfree(file_priv->client.name);
+err_name:
+	return ret;
+}
+
+static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
+{
+	if (!file_priv->client.name)
+		return; /* intel_fbdev_init registers a client before sysfs */
+
+	sysfs_remove_file(file_priv->client.root,
+			  (struct attribute *)&file_priv->client.attr.pid);
+	sysfs_remove_file(file_priv->client.root,
+			  (struct attribute *)&file_priv->client.attr.name);
+	kobject_put(file_priv->client.root);
+	kfree(file_priv->client.name);
+}
+
 void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv = file->driver_priv;
@@ -1505,33 +1598,48 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 	list_for_each_entry(request, &file_priv->mm.request_list, client_link)
 		request->file_priv = NULL;
 	spin_unlock(&file_priv->mm.lock);
+
+	i915_gem_remove_client(file_priv);
 }
 
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 {
+	int ret = -ENOMEM;
 	struct drm_i915_file_private *file_priv;
-	int ret;
 
 	DRM_DEBUG("\n");
 
 	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
 	if (!file_priv)
-		return -ENOMEM;
+		goto err_alloc;
+
+	file_priv->client.id = atomic_inc_return(&i915->clients.serial);
+	ret = i915_gem_add_client(i915, file_priv, current,
+				  file_priv->client.id);
+	if (ret)
+		goto err_client;
 
 	file->driver_priv = file_priv;
+	ret = i915_gem_context_open(i915, file);
+	if (ret)
+		goto err_context;
+
 	file_priv->dev_priv = i915;
 	file_priv->file = file;
+	file_priv->bsd_engine = -1;
+	file_priv->hang_timestamp = jiffies;
 
 	spin_lock_init(&file_priv->mm.lock);
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
 
-	file_priv->bsd_engine = -1;
-	file_priv->hang_timestamp = jiffies;
-
-	ret = i915_gem_context_open(i915, file);
-	if (ret)
-		kfree(file_priv);
+	return 0;
 
+err_context:
+	i915_gem_remove_client(file_priv);
+err_client:
+	atomic_dec(&i915->clients.serial);
+	kfree(file_priv);
+err_alloc:
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index bf039b8ba593..a9f27f4fc245 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -574,6 +574,11 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	struct device *kdev = dev_priv->drm.primary->kdev;
 	int ret;
 
+	dev_priv->clients.root =
+		kobject_create_and_add("clients", &kdev->kobj);
+	if (!dev_priv->clients.root)
+		DRM_ERROR("Per-client sysfs setup failed\n");
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -634,4 +639,7 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6_attr_group);
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
+
+	if (dev_priv->clients.root)
+		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1


* [RFC 3/5] drm/i915: Update client name on context create
@ 2019-10-25 14:21   ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Some clients have the DRM fd passed to them over a socket by the X server.

Grab the real client name and pid when such a client creates its first
context, and update the exposed data for more useful enumeration. Without
this, a DRI3 client, for instance, would keep showing up under the name and
pid of the Xorg process which opened its fd.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 17 ++++++++++++++---
 drivers/gpu/drm/i915/i915_drv.h             |  7 +++++++
 drivers/gpu/drm/i915/i915_gem.c             |  4 ++--
 3 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 55f1f93c0925..c7f6684eb366 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -2084,6 +2084,8 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_private *i915 = to_i915(dev);
 	struct drm_i915_gem_context_create_ext *args = data;
+	pid_t pid = pid_nr(get_task_pid(current, PIDTYPE_PID));
+	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct create_ext ext_data;
 	int ret;
 
@@ -2097,14 +2099,23 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	ext_data.fpriv = file->driver_priv;
+	ext_data.fpriv = file_priv;
 	if (client_is_banned(ext_data.fpriv)) {
 		DRM_DEBUG("client %s[%d] banned from creating ctx\n",
-			  current->comm,
-			  pid_nr(get_task_pid(current, PIDTYPE_PID)));
+			  current->comm, pid);
 		return -EIO;
 	}
 
+	mutex_lock(&dev->struct_mutex);
+	if (file_priv->client.pid != pid) {
+		i915_gem_remove_client(file_priv);
+		ret = i915_gem_add_client(i915, file_priv, current,
+					  file_priv->client.id);
+	}
+	mutex_unlock(&dev->struct_mutex);
+	if (ret)
+		return ret;
+
 	ext_data.ctx = i915_gem_create_context(i915, args->flags);
 	if (IS_ERR(ext_data.ctx))
 		return PTR_ERR(ext_data.ctx);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 4dc8cadf56eb..b8f7b0637224 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1983,6 +1983,13 @@ void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
 void i915_gem_resume(struct drm_i915_private *dev_priv);
 vm_fault_t i915_gem_fault(struct vm_fault *vmf);
 
+int
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,
+		struct task_struct *task,
+		unsigned int serial);
+void i915_gem_remove_client(struct drm_i915_file_private *file_priv);
+
 int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file);
 void i915_gem_release(struct drm_device *dev, struct drm_file *file);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d8d352efb9ef..54a00c954066 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1512,7 +1512,7 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%u", file_priv->client.pid);
 }
 
-static int
+int
 i915_gem_add_client(struct drm_i915_private *i915,
 		struct drm_i915_file_private *file_priv,
 		struct task_struct *task,
@@ -1572,7 +1572,7 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	return ret;
 }
 
-static void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
+void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
 {
 	if (!file_priv->client.name)
 		return; /* intel_fbdev_init registers a client before sysfs */
-- 
2.20.1


* [RFC 4/5] drm/i915: Expose per-engine client busyness
@ 2019-10-25 14:21   ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Expose per-client and per-engine busyness under the previously added sysfs
client root.

The new files, one per engine class, are located under the 'busy' directory.
Each contains a monotonically increasing, nanosecond resolution count of the
time the client's jobs were executing on the GPU.

This enables userspace to create a top-like tool for GPU utilization:

==========================================================================
intel-gpu-top -  935/ 935 MHz;    0% RC6; 14.73 Watts;     1097 irqs/s

      IMC reads:     1401 MiB/s
     IMC writes:        4 MiB/s

          ENGINE      BUSY                                 MI_SEMA MI_WAIT
     Render/3D/0   63.73% |███████████████████           |      3%      0%
       Blitter/0    9.53% |██▊                           |      6%      0%
         Video/0   39.32% |███████████▊                  |     16%      0%
         Video/1   15.62% |████▋                         |      0%      0%
  VideoEnhance/0    0.00% |                              |      0%      0%

  PID            NAME     RCS          BCS          VCS         VECS
 4084        gem_wsim |█████▌     ||█          ||           ||           |
 4086        gem_wsim |█▌         ||           ||███        ||           |
==========================================================================
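
The per-client bars are not read out directly; a sampling tool derives them
from consecutive reads of the busy files, along the lines of:

	busy% = (busy_t1 - busy_t0) / sampling_period_ns * 100

For illustration, a counter advancing by 157,000,000 ns over a one second
sample would display as 15.7% on that engine class.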

v2: Use intel_context_engine_get_busy_time.
v3: New directory structure.
v4: Rebase.
v5: sysfs_attr_init.
v6: Small tidy in i915_gem_add_client.
v7: Rebase to be engine class based.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h |   8 +++
 drivers/gpu/drm/i915/i915_gem.c | 102 ++++++++++++++++++++++++++++++--
 2 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b8f7b0637224..45f0e2455322 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -186,6 +186,12 @@ struct drm_i915_private;
 struct i915_mm_struct;
 struct i915_mmu_object;
 
+struct i915_engine_busy_attribute {
+	struct device_attribute attr;
+	struct drm_i915_file_private *file_priv;
+	unsigned int engine_class;
+};
+
 struct drm_i915_file_private {
 	struct drm_i915_private *dev_priv;
 
@@ -230,10 +236,12 @@ struct drm_i915_file_private {
 		char *name;
 
 		struct kobject *root;
+		struct kobject *busy_root;
 
 		struct {
 			struct device_attribute pid;
 			struct device_attribute name;
+			struct i915_engine_busy_attribute busy[MAX_ENGINE_CLASS];
 		} attr;
 	} client;
 };
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 54a00c954066..b3d21b6b570c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1512,15 +1512,67 @@ show_client_pid(struct device *kdev, struct device_attribute *attr, char *buf)
 	return snprintf(buf, PAGE_SIZE, "%u", file_priv->client.pid);
 }
 
+struct busy_ctx {
+	unsigned int engine_class;
+	u64 total;
+};
+
+static int busy_add(int id, void *p, void *data)
+{
+	struct busy_ctx *bc = data;
+	struct i915_gem_context *ctx = p;
+	unsigned int engine_class = bc->engine_class;
+	struct i915_gem_engines_iter it;
+	struct intel_context *ce;
+	uint64_t total = bc->total;
+
+	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
+		if (ce->engine->uabi_class == engine_class)
+			total += ktime_to_ns(intel_context_get_busy_time(ce));
+	}
+	i915_gem_context_unlock_engines(ctx);
+
+	bc->total = total;
+
+	return 0;
+}
+
+static ssize_t
+show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct i915_engine_busy_attribute *i915_attr =
+		container_of(attr, typeof(*i915_attr), attr);
+	struct drm_i915_file_private *file_priv = i915_attr->file_priv;
+	struct busy_ctx bc = { .engine_class = i915_attr->engine_class };
+	int ret;
+
+	ret = mutex_lock_interruptible(&file_priv->context_idr_lock);
+	if (ret)
+		return ret;
+
+	idr_for_each(&file_priv->context_idr, busy_add, &bc);
+
+	mutex_unlock(&file_priv->context_idr_lock);
+
+	return snprintf(buf, PAGE_SIZE, "%llu\n", bc.total);
+}
+
+static const char *uabi_class_names[] = {
+	[I915_ENGINE_CLASS_RENDER] = "0",
+	[I915_ENGINE_CLASS_COPY] = "1",
+	[I915_ENGINE_CLASS_VIDEO] = "2",
+	[I915_ENGINE_CLASS_VIDEO_ENHANCE] = "3",
+};
+
 int
 i915_gem_add_client(struct drm_i915_private *i915,
 		struct drm_i915_file_private *file_priv,
 		struct task_struct *task,
 		unsigned int serial)
 {
-	int ret = -ENOMEM;
+	int i, ret = -ENOMEM;
 	struct device_attribute *attr;
-	char id[32];
+	char idstr[32];
 
 	if (!i915->clients.root)
 		return 0; /* intel_fbdev_init registers a client before sysfs */
@@ -1529,8 +1581,8 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	if (!file_priv->client.name)
 		goto err_name;
 
-	snprintf(id, sizeof(id), "%u", serial);
-	file_priv->client.root = kobject_create_and_add(id,
+	snprintf(idstr, sizeof(idstr), "%u", serial);
+	file_priv->client.root = kobject_create_and_add(idstr,
 							i915->clients.root);
 	if (!file_priv->client.root)
 		goto err_client;
@@ -1557,10 +1609,44 @@ i915_gem_add_client(struct drm_i915_private *i915,
 	if (ret)
 		goto err_attr_pid;
 
+	file_priv->client.busy_root =
+			kobject_create_and_add("busy", file_priv->client.root);
+	if (!file_priv->client.busy_root)
+		goto err_busy_root;
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+		struct i915_engine_busy_attribute *i915_attr =
+			&file_priv->client.attr.busy[i];
+
+		i915_attr->file_priv = file_priv;
+		i915_attr->engine_class = i;
+
+		attr = &i915_attr->attr;
+
+		sysfs_attr_init(&attr->attr);
+
+		attr->attr.name = uabi_class_names[i];
+		attr->attr.mode = 0444;
+		attr->show = show_client_busy;
+
+		ret = sysfs_create_file(file_priv->client.busy_root,
+				        (struct attribute *)attr);
+		if (ret)
+			goto err_attr_busy;
+	}
+
 	file_priv->client.pid = pid_nr(get_task_pid(task, PIDTYPE_PID));
 
 	return 0;
 
+err_attr_busy:
+	for (--i; i >= 0; i--)
+		sysfs_remove_file(file_priv->client.busy_root,
+				  (struct attribute *)&file_priv->client.attr.busy[i]);
+	kobject_put(file_priv->client.busy_root);
+err_busy_root:
+	sysfs_remove_file(file_priv->client.root,
+			  (struct attribute *)&file_priv->client.attr.pid);
 err_attr_pid:
 	sysfs_remove_file(file_priv->client.root,
 			  (struct attribute *)&file_priv->client.attr.name);
@@ -1574,9 +1660,17 @@ i915_gem_add_client(struct drm_i915_private *i915,
 
 void i915_gem_remove_client(struct drm_i915_file_private *file_priv)
 {
+	unsigned int i;
+
 	if (!file_priv->client.name)
 		return; /* intel_fbdev_init registers a client before sysfs */
 
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)
+		sysfs_remove_file(file_priv->client.busy_root,
+				  (struct attribute *)&file_priv->client.attr.busy[i]);
+
+	kobject_put(file_priv->client.busy_root);
+
 	sysfs_remove_file(file_priv->client.root,
 			  (struct attribute *)&file_priv->client.attr.pid);
 	sysfs_remove_file(file_priv->client.root,
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [RFC 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats
@ 2019-10-25 14:21   ` Tvrtko Ursulin
  0 siblings, 0 replies; 24+ messages in thread
From: Tvrtko Ursulin @ 2019-10-25 14:21 UTC (permalink / raw)
  To: Intel-gfx

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

By default we are not collecting any per-engine and per-context
statistics.

Add a new sysfs toggle to enable this facility:

$ echo 1 >/sys/class/drm/card0/clients/enable_stats

v2: Rebase.
v3: sysfs_attr_init.
v4: Scheduler caps.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h   |  4 ++
 drivers/gpu/drm/i915/i915_sysfs.c | 73 +++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 45f0e2455322..3d2459e9fff4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1397,6 +1397,10 @@ struct drm_i915_private {
 	struct i915_drm_clients {
 		struct kobject *root;
 		atomic_t serial;
+		struct {
+			bool enabled;
+			struct device_attribute attr;
+		} stats;
 	} clients;
 
 	struct i915_hdcp_comp_master *hdcp_master;
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index a9f27f4fc245..b061baf5da49 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -569,9 +569,67 @@ static void i915_setup_error_capture(struct device *kdev) {}
 static void i915_teardown_error_capture(struct device *kdev) {}
 #endif
 
+static ssize_t
+show_client_stats(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+	struct drm_i915_private *i915 =
+		container_of(attr, struct drm_i915_private, clients.stats.attr);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", i915->clients.stats.enabled);
+}
+
+static ssize_t
+store_client_stats(struct device *kdev, struct device_attribute *attr,
+		   const char *buf, size_t count)
+{
+	struct drm_i915_private *i915 =
+		container_of(attr, struct drm_i915_private, clients.stats.attr);
+	bool disable = false;
+	bool enable = false;
+	bool val = false;
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+	int ret;
+
+	 /* Use RCS as proxy for all engines. */
+	if (!(i915->caps.scheduler & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS))
+		return -EINVAL;
+
+	ret = kstrtobool(buf, &val);
+	if (ret)
+		return ret;
+
+	ret = i915_mutex_lock_interruptible(&i915->drm);
+	if (ret)
+		return ret;
+
+	if (val && !i915->clients.stats.enabled)
+		enable = true;
+	else if (!val && i915->clients.stats.enabled)
+		disable = true;
+
+	if (!enable && !disable)
+		goto out;
+
+	for_each_engine(engine, i915, id) {
+		if (enable)
+			WARN_ON_ONCE(intel_enable_engine_stats(engine));
+		else if (disable)
+			intel_disable_engine_stats(engine);
+	}
+
+	i915->clients.stats.enabled = val;
+
+out:
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return count;
+}
+
 void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 {
 	struct device *kdev = dev_priv->drm.primary->kdev;
+	struct device_attribute *attr;
 	int ret;
 
 	dev_priv->clients.root =
@@ -579,6 +637,18 @@ void i915_setup_sysfs(struct drm_i915_private *dev_priv)
 	if (!dev_priv->clients.root)
 		DRM_ERROR("Per-client sysfs setup failed\n");
 
+	attr = &dev_priv->clients.stats.attr;
+	sysfs_attr_init(&attr->attr);
+	attr->attr.name = "enable_stats";
+	attr->attr.mode = 0664;
+	attr->show = show_client_stats;
+	attr->store = store_client_stats;
+
+	ret = sysfs_create_file(dev_priv->clients.root,
+				(struct attribute *)attr);
+	if (ret)
+		DRM_ERROR("Per-client sysfs setup failed! (%d)\n", ret);
+
 #ifdef CONFIG_PM
 	if (HAS_RC6(dev_priv)) {
 		ret = sysfs_merge_group(&kdev->kobj,
@@ -640,6 +710,9 @@ void i915_teardown_sysfs(struct drm_i915_private *dev_priv)
 	sysfs_unmerge_group(&kdev->kobj, &rc6p_attr_group);
 #endif
 
+	sysfs_remove_file(dev_priv->clients.root,
+			  (struct attribute *)&dev_priv->clients.stats.attr);
+
 	if (dev_priv->clients.root)
 		kobject_put(dev_priv->clients.root);
 }
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC 2/5] drm/i915: Expose list of clients in sysfs
@ 2019-10-25 14:35     ` Chris Wilson
  0 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2019-10-25 14:35 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-10-25 15:21:28)
>  int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
>  {
> +       int ret = -ENOMEM;
>         struct drm_i915_file_private *file_priv;
> -       int ret;
>  
>         DRM_DEBUG("\n");
>  
>         file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>         if (!file_priv)
> -               return -ENOMEM;
> +               goto err_alloc;
> +
> +       file_priv->client.id = atomic_inc_return(&i915->clients.serial);

We should make this a cyclic ida to avoid reuse on wraparound. 32b
wraps will happen, and they will still have client 0 alive! :)

That will mean we need a lock.

(Of course you could use -EEXIST from add_client and keep incrementing
serial until you find a hole :)
-Chris
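
A sketch of that cyclic scheme, assuming clients.serial is replaced by an
IDR guarded by a new clients.lock (both assumed members here);
idr_alloc_cyclic() hands out ids in increasing order and skips ids that
are still in use:

static int i915_client_id_alloc(struct drm_i915_private *i915,
				struct drm_i915_file_private *fpriv)
{
	int id;

	mutex_lock(&i915->clients.lock);
	/* Cycles through the id space, never handing out a live id. */
	id = idr_alloc_cyclic(&i915->clients.idr, fpriv, 0, 0, GFP_KERNEL);
	mutex_unlock(&i915->clients.lock);

	return id; /* negative errno on failure */
}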
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 3/5] drm/i915: Update client name on context create
@ 2019-10-25 14:39     ` Chris Wilson
  0 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2019-10-25 14:39 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-10-25 15:21:29)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 55f1f93c0925..c7f6684eb366 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -2084,6 +2084,8 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
>  {
>         struct drm_i915_private *i915 = to_i915(dev);
>         struct drm_i915_gem_context_create_ext *args = data;
> +       pid_t pid = pid_nr(get_task_pid(current, PIDTYPE_PID));
> +       struct drm_i915_file_private *file_priv = file->driver_priv;
>         struct create_ext ext_data;
>         int ret;
>  
> @@ -2097,14 +2099,23 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
>         if (ret)
>                 return ret;
>  
> -       ext_data.fpriv = file->driver_priv;
> +       ext_data.fpriv = file_priv;
>         if (client_is_banned(ext_data.fpriv)) {
>                 DRM_DEBUG("client %s[%d] banned from creating ctx\n",
> -                         current->comm,
> -                         pid_nr(get_task_pid(current, PIDTYPE_PID)));
> +                         current->comm, pid);
>                 return -EIO;
>         }
>  
> +       mutex_lock(&dev->struct_mutex);
> +       if (file_priv->client.pid != pid) {
> +               i915_gem_remove_client(file_priv);
> +               ret = i915_gem_add_client(i915, file_priv, current,
> +                                         file_priv->client.id);
> +       }
> +       mutex_unlock(&dev->struct_mutex);

You are serialising against multiple context_create_ioctl() calls from the same
file, right? Could abuse fpriv->context_idr_lock. Or add a new one.

> +       if (ret)
> +               return ret;
> +

Hmm, is get_task_pid() the one that returns a reference to the pid_t?
Aye, it is, we need a put_pid().
-Chris
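
The leak fix is mechanical; a sketch of the balanced form:

	struct pid *tpid = get_task_pid(current, PIDTYPE_PID);
	pid_t pid = pid_nr(tpid);

	put_pid(tpid); /* drop the reference get_task_pid() took */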
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 4/5] drm/i915: Expose per-engine client busyness
@ 2019-10-25 14:42     ` Chris Wilson
  0 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2019-10-25 14:42 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-10-25 15:21:30)
> +static int busy_add(int id, void *p, void *data)
> +{
> +       struct busy_ctx *bc = data;
> +       struct i915_gem_context *ctx = p;
> +       unsigned int engine_class = bc->engine_class;
> +       struct i915_gem_engines_iter it;
> +       struct intel_context *ce;
> +       uint64_t total = bc->total;
> +
> +       for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
> +               if (ce->engine->uabi_class == engine_class)
> +                       total += ktime_to_ns(intel_context_get_busy_time(ce));
> +       }
> +       i915_gem_context_unlock_engines(ctx);
> +
> +       bc->total = total;
> +
> +       return 0;
> +}
> +
> +static ssize_t
> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
> +{
> +       struct i915_engine_busy_attribute *i915_attr =
> +               container_of(attr, typeof(*i915_attr), attr);
> +       struct drm_i915_file_private *file_priv = i915_attr->file_priv;
> +       struct busy_ctx bc = { .engine_class = i915_attr->engine_class };
> +       int ret;
> +
> +       ret = mutex_lock_interruptible(&file_priv->context_idr_lock);
> +       if (ret)
> +               return ret;
> +
> +       idr_for_each(&file_priv->context_idr, busy_add, &bc);

If you don a hard hat, this can all be done under rcu_read_lock().
-Chris
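
A sketch of the hard-hat version; this assumes busy_add() is first reworked
to avoid sleeping locks (e.g. walking ctx->engines under rcu_dereference()
instead of taking the engines mutex), since nothing may sleep inside an RCU
read-side section:

	rcu_read_lock();
	idr_for_each(&file_priv->context_idr, busy_add, &bc);
	rcu_read_unlock();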
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats
@ 2019-10-25 14:49     ` Chris Wilson
  0 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2019-10-25 14:49 UTC (permalink / raw)
  To: Intel-gfx, Tvrtko Ursulin

Quoting Tvrtko Ursulin (2019-10-25 15:21:31)
> +       ret = i915_mutex_lock_interruptible(&i915->drm);
> +       if (ret)
> +               return ret;
> +
> +       if (val && !i915->clients.stats.enabled)
> +               enable = true;
> +       else if (!val && i915->clients.stats.enabled)
> +               disable = true;

The struct_mutex is just for atomically enabling/disabling stats, right?
Only one user may toggle status at a time.

I'd wrap it a i915->spinlock just so the locking is clear from the
outset.

> +       if (!enable && !disable)
> +               goto out;
> +
> +       for_each_engine(engine, i915, id) {

A quick s/for_each_engine/for_each_uabi_engine/

> +               if (enable)
> +                       WARN_ON_ONCE(intel_enable_engine_stats(engine));
> +               else if (disable)
> +                       intel_disable_engine_stats(engine);
> +       }
> +
> +       i915->clients.stats.enabled = val;

Now, as for whether we want a toggle approach, or stats enabled only
while the file is open (and refcounted)?
-Chris
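
Putting both suggestions together, the body of store_client_stats() could
become something like the sketch below. A dedicated clients.stats.lock is
assumed (shown as a mutex; whether a spinlock suffices is exactly the open
locking question above):

	mutex_lock(&i915->clients.stats.lock);
	if (val != i915->clients.stats.enabled) {
		struct intel_engine_cs *engine;

		/* Walk only the userspace-visible engines. */
		for_each_uabi_engine(engine, i915) {
			if (val)
				WARN_ON_ONCE(intel_enable_engine_stats(engine));
			else
				intel_disable_engine_stats(engine);
		}
		i915->clients.stats.enabled = val;
	}
	mutex_unlock(&i915->clients.stats.lock);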
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (all aboard the sysfs train!)
@ 2019-10-25 19:45   ` Patchwork
  0 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2019-10-25 19:45 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (all aboard the sysfs train!)
URL   : https://patchwork.freedesktop.org/series/68570/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
c5f5abc9d897 drm/i915: Track per-context engine busyness
2c3edc69e7b6 drm/i915: Expose list of clients in sysfs
-:99: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#99: FILE: drivers/gpu/drm/i915/i915_gem.c:1517:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 1 checks, 204 lines checked
a142a38e3f25 drm/i915: Update client name on context create
-:63: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#63: FILE: drivers/gpu/drm/i915/i915_drv.h:1990:
+i915_gem_add_client(struct drm_i915_private *i915,
+		struct drm_i915_file_private *file_priv,

total: 0 errors, 0 warnings, 1 checks, 63 lines checked
84b4d495586c drm/i915: Expose per-engine client busyness
-:25: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#25: 
     Render/3D/0   63.73% |███████████████████           |      3%      0%

-:95: CHECK:PREFER_KERNEL_TYPES: Prefer kernel type 'u64' over 'uint64_t'
#95: FILE: drivers/gpu/drm/i915/i915_gem.c:1527:
+	uint64_t total = bc->total;

-:128: WARNING:STATIC_CONST_CHAR_ARRAY: static const char * array should probably be static const char * const
#128: FILE: drivers/gpu/drm/i915/i915_gem.c:1560:
+static const char *uabi_class_names[] = {

-:185: ERROR:CODE_INDENT: code indent should use tabs where possible
#185: FILE: drivers/gpu/drm/i915/i915_gem.c:1633:
+^I^I^I^I        (struct attribute *)attr);$

-:185: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#185: FILE: drivers/gpu/drm/i915/i915_gem.c:1633:
+		ret = sysfs_create_file(file_priv->client.busy_root,
+				        (struct attribute *)attr);

total: 1 errors, 2 warnings, 2 checks, 164 lines checked
9021ed40837a drm/i915: Add sysfs toggle to enable per-client engine stats

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✗ Fi.CI.BAT: failure for Per client engine busyness (all aboard the sysfs train!)
@ 2019-10-25 20:12   ` Patchwork
  0 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2019-10-25 20:12 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Per client engine busyness (all aboard the sysfs train!)
URL   : https://patchwork.freedesktop.org/series/68570/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_7187 -> Patchwork_14987
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_14987 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_14987, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/index.html

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_14987:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live_gem_contexts:
    - fi-bsw-n3050:       [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-bsw-n3050/igt@i915_selftest@live_gem_contexts.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-bsw-n3050/igt@i915_selftest@live_gem_contexts.html

  
Known issues
------------

  Here are the changes found in Patchwork_14987 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@prime_busy@basic-before-default:
    - fi-icl-u3:          [PASS][3] -> [DMESG-WARN][4] ([fdo#107724]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-icl-u3/igt@prime_busy@basic-before-default.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-icl-u3/igt@prime_busy@basic-before-default.html

  
#### Possible fixes ####

  * igt@gem_exec_suspend@basic-s4-devices:
    - fi-icl-u3:          [DMESG-WARN][5] ([fdo#107724]) -> [PASS][6] +2 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-icl-u3/igt@gem_exec_suspend@basic-s4-devices.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-icl-u3/igt@gem_exec_suspend@basic-s4-devices.html

  * {igt@i915_selftest@live_gt_heartbeat}:
    - {fi-icl-dsi}:       [DMESG-FAIL][7] ([fdo#112096]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-icl-dsi/igt@i915_selftest@live_gt_heartbeat.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-icl-dsi/igt@i915_selftest@live_gt_heartbeat.html
    - fi-cml-u2:          [DMESG-FAIL][9] ([fdo#112096]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-cml-u2/igt@i915_selftest@live_gt_heartbeat.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-cml-u2/igt@i915_selftest@live_gt_heartbeat.html

  * igt@i915_selftest@live_hangcheck:
    - {fi-tgl-u}:         [INCOMPLETE][11] ([fdo#111747]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-tgl-u/igt@i915_selftest@live_hangcheck.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-tgl-u/igt@i915_selftest@live_hangcheck.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-kbl-7500u:       [FAIL][13] ([fdo#111407]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7187/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#107713]: https://bugs.freedesktop.org/show_bug.cgi?id=107713
  [fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
  [fdo#111381]: https://bugs.freedesktop.org/show_bug.cgi?id=111381
  [fdo#111407]: https://bugs.freedesktop.org/show_bug.cgi?id=111407
  [fdo#111747]: https://bugs.freedesktop.org/show_bug.cgi?id=111747
  [fdo#112096]: https://bugs.freedesktop.org/show_bug.cgi?id=112096


Participating hosts (49 -> 42)
------------------------------

  Additional (1): fi-pnv-d510 
  Missing    (8): fi-ilk-m540 fi-cml-s fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-icl-y fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7187 -> Patchwork_14987

  CI-20190529: 20190529
  CI_DRM_7187: 9df5aeba240a65ea80008020d3027484bc6055b3 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5241: 17b87c378fa155390b13a43f141371fd899d567b @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_14987: 9021ed40837a0b8ca2bb29e7c53bc15a18ee3a02 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

9021ed40837a drm/i915: Add sysfs toggle to enable per-client engine stats
84b4d495586c drm/i915: Expose per-engine client busyness
a142a38e3f25 drm/i915: Update client name on context create
2c3edc69e7b6 drm/i915: Expose list of clients in sysfs
c5f5abc9d897 drm/i915: Track per-context engine busyness

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14987/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2019-10-25 20:13 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-25 14:21 [RFC 0/5] Per client engine busyness (all aboard the sysfs train!) Tvrtko Ursulin
2019-10-25 14:21 ` [RFC 1/5] drm/i915: Track per-context engine busyness Tvrtko Ursulin
2019-10-25 14:21 ` [RFC 2/5] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2019-10-25 14:35   ` Chris Wilson
2019-10-25 14:21 ` [RFC 3/5] drm/i915: Update client name on context create Tvrtko Ursulin
2019-10-25 14:39   ` Chris Wilson
2019-10-25 14:21 ` [RFC 4/5] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2019-10-25 14:42   ` Chris Wilson
2019-10-25 14:21 ` [RFC 5/5] drm/i915: Add sysfs toggle to enable per-client engine stats Tvrtko Ursulin
2019-10-25 14:49   ` Chris Wilson
2019-10-25 19:45 ` ✗ Fi.CI.CHECKPATCH: warning for Per client engine busyness (all aboard the sysfs train!) Patchwork
2019-10-25 20:12 ` ✗ Fi.CI.BAT: failure " Patchwork
